id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
16,796,643 | https://en.wikipedia.org/wiki/Domain%20%28mathematical%20analysis%29 | In mathematical analysis, a domain or region is a non-empty, connected, and open set in a topological space. In particular, it is any non-empty connected open subset of the real coordinate space $\mathbb{R}^n$ or the complex coordinate space $\mathbb{C}^n$. A connected open subset of coordinate space is frequently used for the domain of a function.
The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term domain, some use the term region, some use both terms interchangeably, and some define the two terms slightly differently; some avoid ambiguity by sticking with a phrase such as non-empty connected open subset.
Conventions
One common convention is to define a domain as a connected open set but a region as the union of a domain with none, some, or all of its limit points. A closed region or closed domain is the union of a domain and all of its limit points.
Various degrees of smoothness of the boundary of the domain are required for various properties of functions defined on the domain to hold, such as integral theorems (Green's theorem, Stokes theorem), properties of Sobolev spaces, and for defining measures on the boundary and spaces of traces (generalized functions defined on the boundary). Commonly considered types of domains are domains with continuous boundary, Lipschitz boundary, $C^1$ boundary, and so forth.
A bounded domain is a domain that is bounded, i.e., contained in some ball. Bounded region is defined similarly. An exterior domain or external domain is a domain whose complement is bounded; sometimes smoothness conditions are imposed on its boundary.
In complex analysis, a complex domain (or simply domain) is any connected open subset of the complex plane $\mathbb{C}$. For example, the entire complex plane is a domain, as are the open unit disk, the open upper half-plane, and so forth. Often, a complex domain serves as the domain of definition for a holomorphic function. In the study of several complex variables, the definition of a domain is extended to include any connected open subset of $\mathbb{C}^n$.
In Euclidean spaces, one-, two-, and three-dimensional regions are curves, surfaces, and solids, whose extents are called, respectively, length, area, and volume.
Historical notes
According to Hans Hahn, the concept of a domain as an open connected set was introduced by Constantin Carathéodory in his famous book.
In this definition, Carathéodory considers obviously non-empty disjoint sets.
Hahn also remarks that the word "Gebiet" ("Domain") was occasionally previously used as a synonym of open set. The rough concept is older. In the 19th and early 20th century, the terms domain and region were often used informally (sometimes interchangeably) without explicit definition.
However, the term "domain" was occasionally used to identify closely related but slightly different concepts. For example, in his influential monographs on elliptic partial differential equations, Carlo Miranda uses the term "region" to identify an open connected set, and reserves the term "domain" to identify an internally connected, perfect set, each point of which is an accumulation point of interior points, following his former master Mauro Picone: according to this convention, if a set is a region then its closure is a domain.
See also
Notes
References
Mathematical analysis
Partial differential equations
Topology | Domain (mathematical analysis) | [
"Physics",
"Mathematics"
] | 727 | [
"Mathematical analysis",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
16,801,419 | https://en.wikipedia.org/wiki/Spheroidal%20wave%20function | Spheroidal wave functions are solutions of the Helmholtz equation that are found by writing the equation in spheroidal coordinates and applying the technique of separation of variables, just like the use of spherical coordinates leads to spherical harmonics. They are called oblate spheroidal wave functions if oblate spheroidal coordinates are used and prolate spheroidal wave functions if prolate spheroidal coordinates are used.
If instead of the Helmholtz equation, the Laplace equation is solved in spheroidal coordinates using the method of separation of variables, the spheroidal wave functions reduce to the spheroidal harmonics. With oblate spheroidal coordinates the solutions are called oblate harmonics, and with prolate spheroidal coordinates, prolate harmonics. Both types of spheroidal harmonics are expressible in terms of Legendre functions.
See also
Oblate spheroidal coordinates, especially the section Oblate spheroidal harmonics, for a more extensive discussion.
Oblate spheroidal wave function
References
Notes
Bibliography
C. Niven, On the Conduction of Heat in Ellipsoids of Revolution, Philosophical Transactions of the Royal Society of London, v. 171, p. 117 (1880)
M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (US Gov. Printing Office, Washington DC, 1964)
Partial differential equations
Special functions | Spheroidal wave function | [
"Mathematics"
] | 297 | [
"Special functions",
"Applied mathematics",
"Applied mathematics stubs",
"Combinatorics"
] |
16,801,509 | https://en.wikipedia.org/wiki/Nonstandard%20finite%20difference%20scheme | Nonstandard finite difference schemes is a general set of methods in numerical analysis that gives numerical solutions to differential equations by constructing a discrete model. The general rules for such schemes are not precisely known.
Overview
A finite difference (FD) model of a differential equation (DE) can be formed by simply replacing the derivatives with FD approximations. But this is a naive "translation": if we literally translate from English to Japanese by making a one-to-one correspondence between words, the original meaning is often lost. Similarly, the naive FD model of a DE can be very different from the original DE, because the FD model is a difference equation with solutions that may be quite different from the solutions of the DE. For a more technical definition see Mickens 2000.
A nonstandard (NS) finite difference model is a freer and more accurate "translation" of a differential equation. For example, a parameter (call it v) in the DE may take another value u in the NS-FD model.
Example
As an example let us model the wave equation,

$$\frac{\partial^2 \psi}{\partial t^2} = c^2\,\frac{\partial^2 \psi}{\partial x^2}.$$

The naive finite difference model, which we now call the standard (S) FD model, is found by approximating the derivatives with FD approximations. The central second-order FD approximation of the first derivative is

$$\frac{\partial \psi}{\partial t} \approx \frac{\psi(t + \Delta t/2) - \psi(t - \Delta t/2)}{\Delta t}.$$

Applying the above FD approximation twice, we can derive the FD approximation for the second derivative,

$$\frac{\partial^2 \psi}{\partial t^2} \approx \frac{\psi(t + \Delta t) - 2\psi(t) + \psi(t - \Delta t)}{\Delta t^2},$$

where we introduce the grid shorthand $\psi_j^n = \psi(j\,\Delta x,\, n\,\Delta t)$ for simplicity; the formula can be checked by applying the first-derivative approximation twice.

Approximating both derivatives in the wave equation leads to the S-FD model,

$$\frac{\psi_j^{n+1} - 2\psi_j^n + \psi_j^{n-1}}{\Delta t^2} = c^2\,\frac{\psi_{j+1}^n - 2\psi_j^n + \psi_{j-1}^n}{\Delta x^2}.$$

If you insert the plane-wave solution of the wave equation, $\psi = e^{i(kx - \omega t)}$ (with $\omega = ck$), into the S-FD model you find that it must satisfy

$$\sin^2\!\left(\frac{\omega\,\Delta t}{2}\right) = \left(\frac{c\,\Delta t}{\Delta x}\right)^2 \sin^2\!\left(\frac{k\,\Delta x}{2}\right),$$

which does not hold in general, because the solution of the FD approximation to the wave equation is not the same as that of the wave equation itself.

To construct an NS-FD model which has the same solution as the wave equation, put a free parameter, call it u, in place of the wave speed c and try to find a value of u which makes the plane-wave solution exact. It turns out that this value of u is

$$u = \frac{\Delta x}{\Delta t}\,\frac{\sin(\omega\,\Delta t/2)}{\sin(k\,\Delta x/2)},$$

which reduces to $c$ in the limit $\Delta t, \Delta x \to 0$. Thus an exact nonstandard finite difference model of the wave equation is

$$\frac{\psi_j^{n+1} - 2\psi_j^n + \psi_j^{n-1}}{\Delta t^2} = u^2\,\frac{\psi_{j+1}^n - 2\psi_j^n + \psi_{j-1}^n}{\Delta x^2}.$$

In the special case $c\,\Delta t = \Delta x$, the corrected speed equals $c$ and the standard scheme itself is already exact. Further details and extensions to two and three dimensions as well as to Maxwell's equations can be found in Cole 2002.
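The exactness claim can be checked numerically. The following is a minimal Python sketch (not from Cole 2002; the grid sizes and the wavenumber are illustrative) that advances a monochromatic wave with both the standard scheme and the nonstandard corrected speed, then compares each against the exact solution:

import numpy as np

c, dx, dt = 1.0, 0.1, 0.05     # wave speed and grid spacings (illustrative values)
k = 2 * np.pi                  # wavenumber of the monochromatic wave
w = c * k                      # exact dispersion relation: omega = c k
u = (dx / dt) * np.sin(w * dt / 2) / np.sin(k * dx / 2)  # NS-FD corrected speed

x = np.arange(0.0, 1.0, dx)

def exact(t):
    # exact traveling-wave solution sampled on the grid
    return np.cos(k * x - w * t)

def step(prev, cur, speed):
    # one leapfrog step of psi_tt = speed^2 psi_xx with periodic boundaries
    lap = np.roll(cur, -1) - 2 * cur + np.roll(cur, 1)
    return 2 * cur - prev + (speed * dt / dx) ** 2 * lap

for speed, name in [(c, "S-FD"), (u, "NS-FD")]:
    prev, cur = exact(0.0), exact(dt)   # seed the scheme with the exact solution
    for n in range(2, 200):
        prev, cur = cur, step(prev, cur, speed)
    err = np.max(np.abs(cur - exact(199 * dt)))
    print(f"{name}: max error after 199 steps = {err:.2e}")

For this monochromatic initial condition the NS-FD error stays at round-off level, while the S-FD error reflects the numerical dispersion discussed above.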
References
Numerical analysis | Nonstandard finite difference scheme | [
"Mathematics"
] | 455 | [
"Mathematical relations",
"Computational mathematics",
"Approximations",
"Numerical analysis"
] |
9,962,898 | https://en.wikipedia.org/wiki/Thermogenic%20plant | Thermogenic plants have the ability to raise their temperature above that of the surrounding air. Heat is generated in the mitochondria, as a secondary process of cellular respiration called thermogenesis. Alternative oxidase and uncoupling proteins similar to those found in mammals enable the process, which is still poorly understood.
The role of thermogenesis
Botanists are not completely sure why thermogenic plants generate large amounts of excess heat, but most agree that it has something to do with increasing pollination rates. The most widely accepted theory states that the endogenous heat helps in spreading chemicals that attract pollinators to the plant. For example, the Voodoo lily uses heat to help spread its smell of rotting meat. This smell draws in flies which begin to search for the source of the smell. As they search the entire plant for the dead carcass, they pollinate the plant.
Other theories state that the heat may provide a heat reward for the pollinator: pollinators are drawn to the flower for its warmth. This theory has less support because most thermogenic plants are found in tropical climates.
Yet another theory is that the heat helps protect against frost damage, allowing the plant to germinate and sprout earlier than otherwise. For example, the skunk cabbage generates heat, which allows it to melt its way through a layer of snow in early spring. The heat, however, is mostly used to help spread its pungent odor and attract pollinators.
Characteristics of thermogenic plants
Most thermogenic plants tend to be rather large, because smaller plants do not have enough volume to generate a considerable amount of heat. Large plants, on the other hand, have enough mass to create and retain heat.
Thermogenic plants are also protogynous, meaning that the female part of the plant matures before the male part of the same plant. This reduces inbreeding considerably, as such a plant can be fertilized only by pollen from a different plant. Because they depend on cross-pollination, thermogenic plants release pungent odors to attract pollinating insects.
Examples
Thermogenic plants are found in a variety of families, but Araceae in particular contains many such species. Examples from this family include the dead-horse arum (Helicodiceros muscivorus), the eastern skunk cabbage (Symplocarpus foetidus), the elephant foot yam (Amorphophallus paeoniifolius), elephant ear (Philodendron selloum), lords-and-ladies (Arum maculatum), and voodoo lily (Typhonium venosum). The titan arum (Amorphophallus titanum) uses thermogenically created water vapor to disperse its scent—that of rotting meat—above the cold air that settles over it at night in its natural habitat. Contrary to popular belief, the western skunk cabbage (Lysichiton americanus), a close relative from the family Araceae, is not thermogenic. Outside Araceae, the sacred lotus (Nelumbo nucifera) is thermogenic and endothermic, able to regulate its flower temperature to a certain range, an ability shared by at least one species in the non-photosynthetic parasitic genus Rhizanthes, Rhizanthes lowii.
Heat production
Many endothermic plant species rely on alternative oxidase (AOX), an enzyme in the mitochondrion that forms part of the electron transport chain. The reduction of the mitochondrial redox potential by alternative oxidase increases unproductive respiration. This metabolic process creates excess heat, which warms thermogenic tissues or organs. Plants containing this alternative oxidase are unaffected by cyanide, because AOX acts as an electron acceptor, collecting electrons from ubiquinol while bypassing the third complex of the electron transport chain. The AOX enzyme then reduces oxygen to water without building up a proton gradient; this pathway is very inefficient, and the drop in free energy between ubiquinol and oxygen is released as heat.
References
Heat transfer
Plant physiology
Plants by adaptation | Thermogenic plant | [
"Physics",
"Chemistry",
"Biology"
] | 867 | [
"Transport phenomena",
"Plant physiology",
"Physical phenomena",
"Heat transfer",
"Plants",
"Organisms by adaptation",
"Thermodynamics",
"Plants by adaptation"
] |
9,966,817 | https://en.wikipedia.org/wiki/Modes%20of%20convergence | In mathematics, there are many senses in which a sequence or a series is said to be convergent. This article describes various modes (senses or species) of convergence in the settings where they are defined. For a list of modes of convergence, see Modes of convergence (annotated index).
Each of the following objects is a special case of the types preceding it: sets, topological spaces, uniform spaces, topological abelian groups, normed spaces, Euclidean spaces, and the real/complex numbers. Also, any metric space is a uniform space.
Elements of a topological space
Convergence can be defined in terms of sequences in first-countable spaces. Nets are a generalization of sequences that are useful in spaces which are not first countable. Filters further generalize the concept of convergence.
In metric spaces, one can define Cauchy sequences. Cauchy nets and filters are generalizations to uniform spaces. Even more generally, Cauchy spaces are spaces in which Cauchy filters may be defined. Convergence implies "Cauchy convergence", and Cauchy convergence, together with the existence of a convergent subsequence implies convergence. The concept of completeness of metric spaces, and its generalizations is defined in terms of Cauchy sequences.
Series of elements in a topological abelian group
In a topological abelian group, convergence of a series is defined as convergence of the sequence of partial sums. An important concept when considering series is unconditional convergence, which guarantees that the limit of the series is invariant under permutations of the summands.
In a normed vector space, one can define absolute convergence as convergence of the series of norms ($\sum_k \|b_k\|$). Absolute convergence implies Cauchy convergence of the sequence of partial sums (by the triangle inequality), which in turn implies absolute convergence of some grouping (not reordering). The sequence of partial sums obtained by grouping is a subsequence of the partial sums of the original series. That every absolutely convergent series converges is an equivalent condition for a normed vector space to be Banach (i.e., complete).
Absolute convergence and convergence together imply unconditional convergence, but unconditional convergence does not imply absolute convergence in general, even if the space is Banach, although the implication holds in $\mathbb{R}^d$.
Convergence of sequence of functions on a topological space
The most basic type of convergence for a sequence of functions (in particular, it does not assume any topological structure on the domain of the functions) is pointwise convergence. It is defined as convergence of the sequence of values of the functions at every point. If the functions take their values in a uniform space, then one can define pointwise Cauchy convergence, uniform convergence, and uniform Cauchy convergence of the sequence.
Pointwise convergence implies pointwise Cauchy convergence, and the converse holds if the space in which the functions take their values is complete. Uniform convergence implies pointwise convergence and uniform Cauchy convergence. Uniform Cauchy convergence and pointwise convergence of a subsequence imply uniform convergence of the sequence, and if the codomain is complete, then uniform Cauchy convergence implies uniform convergence.
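A standard example separating these notions: the functions $f_n(x) = x^n$ on $[0, 1]$ converge pointwise to the function that is $0$ on $[0, 1)$ and $1$ at $x = 1$, but the convergence is not uniform, since $\sup_{x \in [0,1)} |f_n(x)| = 1$ for every $n$.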
If the domain of the functions is a topological space and the codomain is a uniform space, local uniform convergence (i.e. uniform convergence on a neighborhood of each point) and compact (uniform) convergence (i.e. uniform convergence on all compact subsets) may be defined. "Compact convergence" is always short for "compact uniform convergence," since "compact pointwise convergence" would mean the same thing as "pointwise convergence" (points are always compact).
Uniform convergence implies both local uniform convergence and compact convergence, since both are local notions while uniform convergence is global. If X is locally compact (even in the weakest sense: every point has compact neighborhood), then local uniform convergence is equivalent to compact (uniform) convergence. Roughly speaking, this is because "local" and "compact" connote the same thing.
Series of functions on a topological abelian group
Pointwise and uniform convergence of series of functions are defined in terms of convergence of the sequence of partial sums.
For functions taking values in a normed linear space, absolute convergence refers to convergence of the series of non-negative, real-valued functions $\sum_k \|f_k\|$. "Pointwise absolute convergence" is then simply pointwise convergence of $\sum_k \|f_k\|$.
Normal convergence is convergence of the series of non-negative real numbers obtained by taking the uniform (i.e. "sup") norm of each function in the series (convergence of $\sum_k \|f_k\|_\infty$). In Banach spaces, pointwise absolute convergence implies pointwise convergence, and normal convergence implies uniform convergence.
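For instance, taking $f_k(x) = x^k/k^2$ on $[-1, 1]$ gives $\|f_k\|_\infty = 1/k^2$, and $\sum_k 1/k^2$ converges, so the series $\sum_k f_k$ converges normally and hence uniformly; this is the Weierstrass M-test.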
For functions defined on a topological space, one can define (as above) local uniform convergence and compact (uniform) convergence in terms of the partial sums of the series. If, in addition, the functions take values in a normed linear space, then local normal convergence (local, uniform, absolute convergence) and compact normal convergence (absolute convergence on compact sets) can be defined.
Normal convergence implies both local normal convergence and compact normal convergence. And if the domain is locally compact (even in the weakest sense), then local normal convergence implies compact normal convergence.
Functions defined on a measure space
If one considers sequences of measurable functions, then several modes of convergence that depend on measure-theoretic, rather than solely topological, properties arise. These include pointwise convergence almost everywhere, convergence in p-mean and convergence in measure. They are of particular interest in probability theory.
See also
Topology
Convergence (mathematics) | Modes of convergence | [
"Physics",
"Mathematics"
] | 1,140 | [
"Sequences and series",
"Functions and mappings",
"Convergence (mathematics)",
"Mathematical structures",
"Mathematical objects",
"Topology",
"Mathematical relations",
"Space",
"Geometry",
"Spacetime"
] |
9,970,499 | https://en.wikipedia.org/wiki/Este | Este may refer to:
Geography
Este (woreda), a district in Ethiopia
Este, Veneto, a town in Italy
Este (Málaga), a district in Spain
Este (river), a river in Germany
Este (São Pedro), a parish in Portugal
Este (São Mamede), a parish in Portugal
People
House of Este, a European dynasty
Dukes of Ferrara and of Modena, the Italian family of Este
Este culture, a proto-historic culture that existed from the late Italian Bronze Age
Aquiles Este (born 1962), American semiotician
Charles Este (1696–1745), bishop of Ossory and Waterford and Lismore
Florence Esté (1860–1926), American painter
Este Haim (born 1986), American musician
Other uses
A.C. Este, an association football club based in Este, Veneto
Estë, a fictional character in J. R. R. Tolkien's legendarium
See also
East (disambiguation)
Estes, a surname
Surnames of Italian origin
Orientation (geometry) | Este | [
"Physics",
"Mathematics"
] | 217 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
4,443,719 | https://en.wikipedia.org/wiki/Beam%20parameter%20product | In laser science, the beam parameter product (BPP) is the product of a laser beam's divergence angle (half-angle) and the radius of the beam at its narrowest point (the beam waist). The BPP quantifies the quality of a laser beam, and how well it can be focused to a small spot.
A Gaussian beam has the lowest possible BPP, $\lambda/\pi$, where $\lambda$ is the wavelength of the light. The ratio of the BPP of an actual beam to that of an ideal Gaussian beam at the same wavelength is denoted M² ("M squared"). This parameter is a wavelength-independent measure of beam quality.
The general wave equation, assuming the paraxial approximation, yields:

$$\theta \, w_0 = M^2 \, \frac{\lambda}{\pi},$$

with:
$\theta$ the half angle of divergence in the far field
$w_0$ the beam waist radius
$M^2$ the beam quality factor, M squared
$\lambda$ the wavelength.
The quality of a beam is important for many applications. In fiber-optic communications beams with an M² close to 1 are required for coupling to single-mode optical fiber. Laser machine shops care a lot about the M² parameter of their lasers because the beams will focus to an area that is M⁴ times larger than that of a Gaussian beam with the same wavelength and D4σ waist width; in other words, the fluence scales as 1/M⁴. The rule of thumb is that M² increases as the laser power increases. It is difficult to obtain excellent beam quality and high average power (100 W to kWs) due to thermal lensing in the laser gain medium.
Measurement
There are several ways to define the width of a beam. When measuring the beam parameter product and M2, one uses the D4σ or "second moment" width of the beam to determine both the radius of the beam's waist and the divergence in the far field.
The BPP can be easily measured by placing an array detector or scanning-slit profiler at multiple positions within the beam after focusing it with a lens of high optical quality and known focal length. To properly obtain the BPP and M2 the following steps must be followed:
Measure the D4σ widths at 5 axial positions near the beam waist (the location where the beam is narrowest).
Measure the D4σ widths at 5 axial positions at least one Rayleigh length away from the waist.
Fit the 10 measured data points to $\sigma^2(z) = \sigma_0^2 + M^4 \left(\frac{\lambda}{4\pi\sigma_0}\right)^2 (z - z_0)^2$, where $\sigma$ is the second moment of the distribution in the x or y direction (see beam diameter), and $z_0$ is the location of the beam waist with second moment width $\sigma_0$. Fitting the 10 data points yields M², $z_0$, and $\sigma_0$. Siegman showed that all beam profiles (Gaussian, flat top, TEMxy, or any shape) must follow the equation above provided that the beam radius uses the D4σ definition of the beam width. Using other definitions of beam width does not work.
In principle, one could use a single measurement at the waist to obtain the waist diameter, a single measurement in the far field to obtain the divergence, and then use these to calculate the BPP. The procedure above gives a more accurate result in practice, however.
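The fit itself is a small least-squares problem. The following is a minimal Python sketch (synthetic measurements; the wavelength, positions, and noise level are illustrative, and NumPy/SciPy are assumed):

import numpy as np
from scipy.optimize import curve_fit

wavelength = 1.064e-6  # metres (hypothetical Nd:YAG example)

def second_moment_sq(z, sigma0, z0, m2):
    # Siegman's propagation law: sigma^2(z) = sigma0^2 + M^4 (lambda/(4 pi sigma0))^2 (z - z0)^2
    return sigma0**2 + m2**2 * (wavelength / (4.0 * np.pi * sigma0))**2 * (z - z0)**2

# Synthetic data: 5 positions near the waist, 5 more than a Rayleigh length away
true_sigma0, true_z0, true_m2 = 50e-6, 0.10, 1.3
z = np.concatenate([np.linspace(0.09, 0.11, 5), np.linspace(0.2, 0.4, 5)])
sigma_sq = second_moment_sq(z, true_sigma0, true_z0, true_m2)
sigma_sq *= 1 + 0.01 * np.random.default_rng(0).standard_normal(z.size)  # 1% noise

popt, _ = curve_fit(second_moment_sq, z, sigma_sq, p0=[40e-6, 0.09, 1.0])
sigma0, z0, m2 = popt
print(f"M^2 = {m2:.2f}, BPP = {m2 * wavelength / np.pi * 1e6:.3f} mm*mrad")

With real data, the measured D4σ widths would be divided by 4 to obtain the second moments σ before fitting.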
High-power lasers, such as those used in laser welding and cutting are typically measured by using a beamsplitter to sample the beam. The sampled beam has much lower intensity and can be measured by a scanning-slit or knife-edge profiler. Good beam quality is very important in laser welding and cutting operations.
See also
Etendue
List of laser articles
References
Further reading
Laser science
Optical quantities | Beam parameter product | [
"Physics",
"Mathematics"
] | 713 | [
"Optical quantities",
"Quantity",
"Physical quantities"
] |
4,444,573 | https://en.wikipedia.org/wiki/Dip-coating | Dip coating is an industrial coating process which is used, for example, to manufacture bulk products such as coated fabrics and condoms, and specialised coatings, for example in the biomedical field. Dip coating is also commonly used in academic research, where many chemical and nanomaterial engineering research projects use the dip coating technique to create thin-film coatings.
The earliest dip-coated products may have been candles. For flexible laminar substrates such as fabrics, dip coating may be performed as a continuous roll-to-roll process. For coating a 3D object, it may simply be inserted and removed from the bath of coating. For condom-making, a former is dipped into the coating. For some products, such as early methods of making candles, the process is repeated many times, allowing a series of thin films to bulk up to a relatively thick final object.
The final product may incorporate the substrate and the coating, or the coating may be peeled off to form an object which consists solely of the dried or solidified coating, as in the case of a condom.
As a popular alternative to Spin coating, dip-coating methods are frequently employed to produce thin films from sol-gel precursors for research purposes, where it is generally used for applying films onto flat or cylindrical substrates.
Process
The dip-coating process can be separated into five stages:
Immersion: The substrate is immersed in the solution of the coating material at a constant speed (preferably jitter-free).
Start-up: The substrate has remained inside the solution for a while and is starting to be pulled up.
Deposition: The thin layer deposits itself on the substrate while it is pulled up. The withdrawing is carried out at a constant speed to avoid any jitters. The speed determines the thickness of the coating (faster withdrawal gives thicker coating material).
Drainage: Excess liquid will drain from the surface.
Evaporation: The solvent evaporates from the liquid, forming the thin layer. For volatile solvents, such as alcohols, evaporation starts already during the deposition and drainage steps.
In the continuous process, the steps are carried out directly after each other.
Many factors contribute to determining the final state of the dip coating of a thin film. A large variety of repeatable dip coated film structures and thicknesses can be fabricated by controlling many factors: functionality of the initial substrate surface, submersion time, withdrawal speed, number of dipping cycles, solution composition, concentration and temperature, number of solutions in each dipping sequence, and environment humidity. The dip coating technique can give uniform, high quality films even on bulky, complex shapes.
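The withdrawal-speed dependence can be made quantitative. For a Newtonian liquid withdrawn slowly from a bath (low capillary number), the entrained film thickness is commonly estimated with the Landau–Levich–Derjaguin relation, a standard result in coating theory rather than one specific to this article:

$$h = 0.94\,\frac{(\eta v)^{2/3}}{\gamma^{1/6}\,(\rho g)^{1/2}},$$

where $h$ is the film thickness, $\eta$ the liquid viscosity, $v$ the withdrawal speed, $\gamma$ the liquid–vapor surface tension, $\rho$ the liquid density, and $g$ the gravitational acceleration; faster withdrawal thus thickens the film as $v^{2/3}$.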
Applications in research
The dip coating technique is used for making thin films by self-assembly and with the sol-gel technique. Self-assembly can give film thicknesses of exactly one monolayer. The sol-gel technique creates films of increased, precisely controlled thickness that is mainly determined by the deposition speed and the solution viscosity. As an emerging field, nanoparticles are often used as the coating material. Dip coating applications include:
Multilayer sensor coatings
Implant functionalization
Hydrogels
Sol-gel nanoparticle coatings
Self-assembled monolayers
Layer-by-layer nanoparticle assemblies
Nanoparticle coatings
Dip coating has been utilized in the fabrication of bioceramic nanoparticles, biosensors, implants and hybrid coatings. For example, it has been used to establish a simple yet fast nonthermal coating method to immobilize hydroxyapatite and TiO2 nanoparticles on polymethyl methacrylate.
In another study, porous cellulose nanocrystals and poly(vinyl alcohol) CNC/PVA nanocomposite films with a thickness of 25−70 nm were deposited on glass substrates using dip coating.
Sol-gel technique
Dip coating of inorganic sols (or so-called sol-gel synthesis) is a way of creating thin inorganic or polymeric coatings. In sol-gel synthesis the speed of deposition is an important parameter that affects, for example, layer thickness, density and porosity.
The sol-gel technique is a deposition method that is widely used in materials science to create protective coatings, optical coatings, ceramic coatings and similar surfaces. This technique starts with the hydrolysis of a liquid precursor (sol), which undergoes polycondensation to gradually form a gel. This gel is a biphasic system containing both a liquid phase (solvent) and a solid phase (an integrated network, typically a polymer network). The proportion of liquid is reduced stepwise. The remaining liquid can be removed by drying, which can be coupled with a thermal treatment to tailor the material properties of the solid.
See also
Nanoparticle deposition
Sol-gel
Coatings
References
Coatings
Thin film deposition
de:Chemical Solution Deposition#Tauchbeschichtung | Dip-coating | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 990 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Planes (geometry)",
"Solid state engineering"
] |
4,444,651 | https://en.wikipedia.org/wiki/Evil%20number | In number theory, an evil number is a non-negative integer that has an even number of 1s in its binary expansion. These numbers give the positions of the zero values in the Thue–Morse sequence, and for this reason they have also been called the Thue–Morse set. Non-negative integers that are not evil are called odious numbers.
Examples
The first evil numbers are:
0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39 ...
Equal sums
The partition of the non-negative integers into the odious and evil numbers is the unique partition of these numbers into two sets that have equal multisets of pairwise sums.
As 19th-century mathematician Eugène Prouhet showed, the partition into evil and odious numbers of the numbers from $0$ to $2^{k+1}-1$, for any $k$, provides a solution to the Prouhet–Tarry–Escott problem of finding sets of numbers whose sums of powers are equal up to the $k$th power.
In computer science
In computer science, an evil number is said to have even parity.
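A minimal Python sketch of this parity test (the function name is illustrative):

def is_evil(n: int) -> bool:
    # evil: the binary expansion of n contains an even number of 1s
    return bin(n).count("1") % 2 == 0

print([n for n in range(40) if is_evil(n)])
# [0, 3, 5, 6, 9, 10, 12, 15, 17, 18, 20, 23, 24, 27, 29, 30, 33, 34, 36, 39]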
References
Integer sequences | Evil number | [
"Mathematics"
] | 243 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
4,446,319 | https://en.wikipedia.org/wiki/Nuclear%20run-on | A nuclear run-on assay is conducted to identify the genes that are being transcribed at a certain time point. Approximately one million cell nuclei are isolated and incubated with labeled nucleotides, and genes in the process of being transcribed are detected by hybridization of extracted RNA to gene-specific probes on a blot. Garcia-Martinez et al. (2004) developed a protocol for the yeast S. cerevisiae (genomic run-on, GRO) that allows the calculation of transcription rates (TRs) for all yeast genes and the estimation of mRNA stabilities for all yeast mRNAs.
Alternative microarray methods have recently been developed, mainly PolII RIP-chip: RNA immunoprecipitation of RNA polymerase II with phosphorylated C-terminal domain directed antibodies and hybridization on a microarray slide or chip (the word chip in the name stems from "ChIP-chip" where a special Affymetrix GeneChip was required). A comparison of methods based on run-on and ChIP-chip has been made in yeast (Pelechano et al., 2009). A general correspondence of both methods has been detected but GRO is more sensitive and quantitative. It has to be considered that run-on only detects elongating RNA polymerases whereas ChIP-chip detects all present RNA polymerases, including backtracked ones.
Attachment of new RNA polymerase to genes is prevented by inclusion of sarkosyl. Therefore only genes that already have an engaged RNA polymerase will produce labeled transcripts. RNA transcripts that were synthesized before the addition of the label will not be detected, as they lack the label. These run-on transcripts can also be detected by purifying labeled transcripts with antibodies that recognize the label and hybridizing the isolated transcripts to gene expression arrays, or by next-generation sequencing (GRO-seq).
Run-on assays have been largely supplanted by global run-on (GRO) assays that use next-generation DNA sequencing as a readout platform. These assays, known as GRO-seq, provide a highly detailed view of genes engaged in transcription, with quantitative levels of expression. Array-based methods for analyzing global run-on assays are being replaced by next-generation sequencing, which eliminates the need to design probes against gene sequences; sequencing will catalog all transcripts produced even if they are not reported in databases. GRO-seq involves the labeling of newly synthesized transcripts with bromouridine (BrU). Cells or nuclei are incubated with BrUTP in the presence of sarkosyl, which prevents the attachment of new RNA polymerase to the DNA. Therefore only RNA polymerases that are already on the DNA before the addition of sarkosyl will produce new transcripts, which will be labeled with BrU. The labeled transcripts are captured with anti-BrU antibody-coated beads, converted to cDNAs and then sequenced by next-generation DNA sequencing. The sequencing reads are then aligned to the genome, and the number of reads per transcript provides an accurate estimate of the number of transcripts synthesized.
References
Gene expression
Genetics techniques | Nuclear run-on | [
"Chemistry",
"Engineering",
"Biology"
] | 650 | [
"Genetics techniques",
"Gene expression",
"Genetic engineering",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
4,446,652 | https://en.wikipedia.org/wiki/Snowmelt | In hydrology, snowmelt is surface runoff produced from melting snow. It can also be used to describe the period or season during which such runoff is produced. Water produced by snowmelt is an important part of the annual water cycle in many parts of the world, in some cases contributing high fractions of the annual runoff in a watershed. Predicting snowmelt runoff from a drainage basin may be a part of designing water control projects. Rapid snowmelt can cause flooding. If the snowmelt is then frozen, very dangerous conditions and accidents can occur, introducing the need for salt to melt the ice.
Energy fluxes related to snowmelt
There are several energy fluxes involved in the melting of snow. These fluxes can act in opposing directions, that is, either delivering heat to or removing heat from the snowpack. Ground heat flux is the energy delivered to the snowpack from the soil below by conduction. Radiation inputs to the snowpack include net shortwave (solar radiation, including visible and ultraviolet light) and longwave (infrared) radiation. Net shortwave radiation is the difference between the energy received from the sun and that reflected by the snowpack because of the snowpack albedo. Longwave radiation is received by the snowpack from many sources, including ozone, carbon dioxide, and water vapor present in all levels of the atmosphere. Longwave radiation is also emitted by the snowpack in the form of near-black-body radiation, where snow has an emissivity between 0.97 and 1.0. Generally the net longwave radiation term is negative, meaning a net loss of energy from the snowpack. Latent heat flux is the energy removed from or delivered to the snowpack which accompanies the mass transfers of evaporation, sublimation, or condensation. Sensible heat flux is the heat flux due to convection between the air and the snowpack.
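Summed, these terms give the energy available for melt. In one common formulation (sign conventions and symbols vary between texts):

$$Q_m = Q_{ns} + Q_{nl} + Q_h + Q_e + Q_g,$$

where $Q_{ns}$ is net shortwave radiation, $Q_{nl}$ net longwave radiation, $Q_h$ sensible heat flux, $Q_e$ latent heat flux, and $Q_g$ ground heat flux. The corresponding melt rate is approximately $M = Q_m / (\rho_w \lambda_f)$, with $\rho_w$ the density of water and $\lambda_f \approx 334\ \mathrm{kJ/kg}$ the latent heat of fusion.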
Thaw circles around tree trunks
Tree trunks absorbing sunlight become warmer than the air and cause earlier melting of snow around them. The snow does not melt gradually more slowly with distance from the trunk; rather, a wall of snow surrounds a ring of snow-free ground around the trunk. According to some sources, North American spring ephemeral plants like spring beauty (Claytonia caroliniana), trout lily (Erythronium americanum) and red trillium (Trillium erectum L.) benefit from such thaw circles. They can emerge earlier inside these circles, which gives them more time before the developing tree canopy foliage cuts off a significant portion of the light. They perform nearly all of their yearly photosynthesis during this period.
Evergreen trees tend to produce larger thaw circles than deciduous trees, although this largely involves a different mechanism, and spring ephemeral plants do not generally occur there.
Within forests, snow also melts earlier in other microsites, for example on microtopographic mounds (small elevations) or in wet places such as the edges of creeks or seeps. These microsites affect the distribution of many herbs as well.
Historical cases
In northern Alaska, the melt-date has advanced by 8 days since the mid-1960s. Decreased snowfall in winter followed by warmer spring conditions seems to be the cause for the advance.
In Europe, the 2012 heat wave was especially anomalous at higher altitudes. For the first time on record, some of the highest Alpine peaks in Europe were snow-free. Although it would seem that the two are related, how much of this is due to climate change remains firmly a matter of debate.
Increased water runoff due to snowmelt was a cause of many famous floods. One well-known example is the Red River Flood of 1997, when the Red River of the North in the Red River Valley of the United States and Canada flooded. Flooding in the Red River Valley is augmented by the fact that the river flows north through Winnipeg, Manitoba and into Lake Winnipeg. As snow in Minnesota, North Dakota, and South Dakota begins to melt and flow into the Red River, the presence of downstream ice can act as a dam and force upstream water to rise. Colder temperatures downstream can also potentially lead to freezing of water as it flows north, thus augmenting the ice dam problem. Some areas in British Columbia are also prone to snowmelt flooding as well.
Scholarly conversation
The date of annual melt is of great interest as a potential indicator of climate change. In order to determine whether the earlier disappearance of spring snow cover in northern Alaska is related to global warming versus an appearance of a more natural, continual cycle of the climate, further study and monitoring is necessary.
Large year-to-year variability complicates the picture and furthers the debate. Inter-annual variability of springtime snow pack comes largely from variability of winter month precipitation which is in turn related to the variability of key patterns of atmospheric circulation.
A study of the mountains in the western United States shows a region-wide decline in spring snowpack since the mid-1900s, dominated by loss at low elevations where winter temperatures are near freezing. These losses are an indication of increased temperatures, which lead to snow loss via some combination of more frequent rain instead of snow and increased melting during winter months. These natural variations make it challenging to quantify trends with confidence, to use observed changes to predict future climate, or to clearly detect changes in snowpack due to human impact on warming trends.
See also
Albedo
Freshet
Ice melt
Snowmelt system
Snowpack
Gallery
References
Hydrology
Snow | Snowmelt | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,104 | [
"Hydrology",
"Environmental engineering"
] |
4,447,184 | https://en.wikipedia.org/wiki/Homoiconicity | In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is an informal property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language. The program's internal representation can thus be inferred just by reading the program itself. This property is often summarized by saying that the language treats code as data. The informality of the property arises from the fact that, strictly, this applies to almost all programming languages. No consensus exists on a precise definition of the property.
In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself. This makes metaprogramming easier than in a language without this property: reflection in the language (examining the program's entities at runtime) depends on a single, homogeneous structure, and it does not have to handle several different structures that would appear in a complex syntax. Homoiconic languages typically include full support of syntactic macros, allowing the programmer to express transformations of programs in a concise way.
A commonly cited example is Lisp, which was created to allow for easy list manipulations and where the structure is given by S-expressions that take the form of nested lists, and can be manipulated by other Lisp code. Other examples are the programming languages Clojure (a contemporary dialect of Lisp), Rebol (also its successor Red), Refal, Prolog, and possibly Julia (see the section “Implementation methods” for more details).
History
The term first appeared in connection with the TRAC programming language, developed by Calvin Mooers, whose paper credits the origin of the term to earlier researchers.
The researchers implicated might be neurophysiologist and cybernetician Warren Sturgis McCulloch and philosopher, logician and mathematician Charles Sanders Peirce. Peirce indeed used the term "icon" in his semiotic theory. According to Peirce, there are three kinds of sign in communication: the icon, the index and the symbol. The icon is the simplest representation: an icon physically resembles that which it denotes.
Alan Kay used and possibly popularized the term "homoiconic" through his use of it in his 1969 PhD thesis.
Uses and advantages
One advantage of homoiconicity is that extending the language with new concepts typically becomes simpler, as data representing code can be passed between the meta and base layers of the program. The abstract syntax tree of a function may be composed and manipulated as a data structure in the meta layer, and then evaluated. Manipulating code is also easier to understand, because the code is expressed in the language's own data format.
A typical demonstration of homoiconicity is the meta-circular evaluator.
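The same pattern can be sketched even in a language that is not usually considered homoiconic. The following Python fragment (illustrative; Python exposes code as ast objects rather than as a primitive data type) parses an expression into a syntax tree, edits the tree as ordinary data, and evaluates the result:

import ast

tree = ast.parse("3 * 5", mode="eval")       # the code is now a data structure
tree.body.op = ast.Add()                     # rewrite the multiplication into an addition
ast.fix_missing_locations(tree)              # give the new node source positions
print(eval(compile(tree, "<ast>", "eval")))  # prints 8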
Implementation methods
All Von Neumann architecture systems, which include the vast majority of general-purpose computers today, can implicitly be described as homoiconic, because raw machine code executes in memory with bytes as the data type. However, this feature can also be abstracted to the programming language level.
Languages such as Lisp and its dialects Scheme, Clojure, and Racket employ S-expressions to achieve homoiconicity, and are considered the "purest" forms of homoiconicity, as these languages use the same representation for both data and code.
Other languages provide data structures for easily and efficiently manipulating code. Notable examples of this weaker form of homoiconicity include Julia, Nim, and Elixir.
Languages often considered to be homoiconic include:
Adenine
Nim
Curl
Elixir
Io
Julia
Prolog
Rebol
Red
SNOBOL
Tcl
XSLT
REFAL
Rexx
Wolfram Language
In Lisp
Lisp uses S-expressions as an external representation for data and code. S-expressions can be read with the primitive Lisp function READ. READ returns Lisp data: lists, symbols, numbers, strings. The primitive Lisp function EVAL uses Lisp code represented as Lisp data, computes side-effects and returns a result. The result will be printed by the primitive function PRINT, which creates an external S-expression from Lisp data.
Lisp data, a list using different data types: (sub)lists, symbols, strings and integer numbers.
((:name "john" :age 20) (:name "mary" :age 18) (:name "alice" :age 22))
Lisp code. The example uses lists, symbols and numbers.
(* (sin 1.1) (cos 2.03)) ; in infix: sin(1.1)*cos(2.03)
Create above expression with the primitive Lisp function LIST and set the variable EXPRESSION to the result
(setf expression (list '* (list 'sin 1.1) (list 'cos 2.03)) )
-> (* (SIN 1.1) (COS 2.03)) ; Lisp returns and prints the result
(third expression) ; the third element of the expression
-> (COS 2.03)
Change the COS term to SIN
(setf (first (third expression)) 'SIN)
; The expression is now (* (SIN 1.1) (SIN 2.03)).
Evaluate the expression
(eval expression)
-> 0.7988834
Print the expression to a string
(prin1-to-string expression)
-> "(* (SIN 1.1) (SIN 2.03))"
Read the expression from a string
(read-from-string "(* (SIN 1.1) (SIN 2.03))")
-> (* (SIN 1.1) (SIN 2.03)) ; returns a list of lists, numbers and symbols
In Prolog
1 ?- X is 2*5.
X = 10.
2 ?- L = (X is 2*5), write_canonical(L).
is(_, *(2, 5))
L = (X is 2*5).
3 ?- L = (ten(X):-(X is 2*5)), write_canonical(L).
:-(ten(A), is(A, *(2, 5)))
L = (ten(X):-X is 2*5).
4 ?- L = (ten(X):-(X is 2*5)), assert(L).
L = (ten(X):-X is 2*5).
5 ?- ten(X).
X = 10.
6 ?-
On line 4 we create a new clause. The operator :- separates the head and the body of a clause. With assert/1* we add it to the existing clauses (add it to the "database"), so we can call it later. In other languages we would call it "creating a function during runtime". We can also remove clauses from the database with abolish/1, or retract/1.
* The number after the clause's name is the number of arguments it can take. It is also called arity.
We can also query the database to get the body of a clause:
7 ?- clause(ten(X),Y).
Y = (X is 2*5).
8 ?- clause(ten(X),Y), Y = (X is Z).
Y = (X is 2*5),
Z = 2*5.
9 ?- clause(ten(X),Y), call(Y).
X = 10,
Y = (10 is 2*5).
call is analogous to Lisp's eval function.
In Rebol
The concept of treating code as data and the manipulation and evaluation thereof can be demonstrated very neatly in Rebol. (Rebol, unlike Lisp, does not require parentheses to separate expressions).
The following is an example of code in Rebol (Note that >> represents the interpreter prompt; spaces between some elements have been added for readability):
>> repeat i 3 [ print [ i "hello" ] ]
1 hello
2 hello
3 hello
(repeat is in fact a built-in function in Rebol and is not a language construct or keyword).
By enclosing the code in square brackets, the interpreter does not evaluate it, but merely treats it as a block containing words:
[ repeat i 3 [ print [ i "hello" ] ] ]
This block has the type block! and can furthermore be assigned as the value of a word by using what appears to be a syntax for assignment, but is actually understood by the interpreter as a special type (set-word!) and takes the form of a word followed by a colon:
>> block1: [ repeat i 3 [ print [ i "hello" ] ] ] ;; Assign the value of the block to the word `block1`
== [repeat i 3 [print [i "hello"]]]
>> type? block1 ;; Evaluate the type of the word `block1`
== block!
The block can still be interpreted by using the do function provided in Rebol (similar to eval in Lisp).
It is possible to interrogate the elements of the block and change their values, thus altering the behavior of the code if it were to be evaluated:
>> block1/3 ;; The third element of the block
== 3
>> block1/3: 5 ;; Set the value of the 3rd element to 5
== 5
>> probe block1 ;; Show the changed block
== [repeat i 5 [print [i "hello"]]]
>> do block1 ;; Evaluate the block
1 hello
2 hello
3 hello
4 hello
5 hello
See also
Cognitive dimensions of notations, design principles for programming languages' syntax
Concatenative programming language
Language-oriented programming
Symbolic programming
Self-modifying code
Metaprogramming, a programming technique for which homoiconicity is very useful
Reification (computer science)
Notes
References
External links
Definition of Homoiconic at the C2 Wiki
Programming language topics | Homoiconicity | [
"Engineering"
] | 2,169 | [
"Software engineering",
"Programming language topics"
] |
4,447,338 | https://en.wikipedia.org/wiki/1%2C4%2C7-Triazacyclononane | 1,4,7-Triazacyclononane, known as "TACN" which is pronounced "tack-en," is an aza-crown ether with the formula (C2H4NH)3. TACN is derived, formally speaking, from cyclononane by replacing three equidistant CH2 groups with NH groups. TACN is one of the oligomers derived from aziridine, C2H4NH. Other members of the series include piperazine, C4H8(NH)2, and the cyclic tetramer 1,4,7,10-tetraazacyclododecane.
Synthesis
The ligand is prepared from diethylenetriamine by macrocyclization using ethylene glycol ditosylate:
H2NCH2CH2NHCH2CH2NH2 + 3 TsCl → Ts(H)NCH2CH2N(Ts)CH2CH2N(H)Ts + 3 HCl
Ts(H)NCH2CH2N(Ts)CH2CH2N(H)Ts + 2 NaOEt → Ts(Na)NCH2CH2N(Ts)CH2CH2N(Na)Ts + 2 EtOH
Ts(Na)NCH2CH2N(Ts)CH2CH2N(Na)Ts + TsOCH2CH2OTs → [CH2CH2N(Ts)]3 + 2 NaOTs
[CH2CH2N(Ts)]3 + 3 H2O → [CH2CH2NH]3 + 3 TsOH
Coordination chemistry
TACN is a popular tridentate ligand. It is threefold symmetric and binds to one face of an octahedron in complexes of metalloids and transition metals. The (TACN)M unit is kinetically inert, allowing further synthetic transformations at the other coordination sites. A bulky analogue of TACN is the N,N',N"-trimethylated derivative trimethyltriazacyclononane.
Illustrative complexes
Although TACN characteristically coordinates to metals in mid and high oxidation states, e.g. Ni(III), Mn(IV), Mo(III), W(III), exceptions occur. To illustrate, 1,4,7-triazacyclononane reacts readily with Mo(CO)6 and W(CO)6 to produce the respective air-stable tricarbonyl compounds, [(κ3-TACN)Mo(CO)3] and [(κ3-TACN)W(CO)3], in which the metal has an oxidation state of zero. After further reaction with 30% H2O2, the products are [(κ3-TACN)MoO3] and [(κ3-TACN)WO3]; both of these oxo complexes have an oxidation state of +6. The macrocyclic ligand does not dissociate in the course of this dramatic change in the formal oxidation state of the metal.
The complex [(κ3-TACN)CuCl2], a catalyst for hydrolytic cleavage of phosphodiester bonds in DNA, is prepared as follows from TACN trihydrochloride:
TACN·3HCl + CuCl2·3H2O + 3 NaOH → [(κ3-TACN)CuCl2] + 6 H2O + 3 NaCl
Mn-TACN complexes catalyze the epoxidation of alkenes such as styrene, using H2O2 as the oxidant in a carbonate-buffered methanol solution at pH 8.0. These reagents are considered environmentally benign:
(C6H5)C2H3 + H2O2 → (C6H5)C2H3O + H2O (catalyzed by [(κ3-TACN)Mn] in the presence of NaHCO3)
Chromium sources, e.g. those created by heating CrCl3·6H2O in DMSO, react with TACN to form both 1:1 and 2:1 TACN:Cr complexes, e.g. the yellow [(TACN)2Cr]3+.
References
Ethyleneamines
Chelating agents
Macrocycles
Secondary amines
Tridentate ligands | 1,4,7-Triazacyclononane | [
"Chemistry"
] | 944 | [
"Organic compounds",
"Chelating agents",
"Macrocycles",
"Process chemicals"
] |
679,218 | https://en.wikipedia.org/wiki/ESC/Java | ESC/Java (and more recently ESC/Java2), the "Extended Static Checker for Java," is a programming tool that attempts to find common run-time errors in Java programs at compile time. The underlying approach used in ESC/Java is referred to as extended static checking, a collective name for a range of techniques for statically checking the correctness of various program constraints: for example, that an integer variable is greater than zero, or that an array index lies within the bounds of the array. This technique was pioneered in ESC/Java (and its predecessor, ESC/Modula-3) and can be thought of as an extended form of type checking. Extended static checking usually involves the use of an automated theorem prover; in ESC/Java, the Simplify theorem prover was used.
ESC/Java is neither sound nor complete. This was intentional and aims to reduce the number of errors and/or warnings reported to the programmer, in order to make the tool more useful in practice. However, it does mean that: firstly, there are programs that ESC/Java will erroneously consider to be incorrect (known as false-positives); secondly, there are incorrect programs it will consider to be correct (known as false-negatives). Examples in the latter category include errors arising from modular arithmetic and/or multithreading.
ESC/Java was originally developed at the Compaq Systems Research Center (SRC). SRC launched the project in 1997, after work on their original extended static checker, ESC/Modula-3, ended in 1996. In 2002, SRC released the source code for ESC/Java and related tools. Recent versions of ESC/Java are based around the Java Modeling Language (JML). Users can control the amount and kinds of checking by annotating their programs with specially formatted comments or pragmas.
The University of Nijmegen's Security of Systems group released alpha versions of ESC/Java2, an extended version of ESC/Java that processes the full JML specification language, through 2004. From 2004 to 2009, ESC/Java2 development was managed by the KindSoftware Research Group at University College Dublin, which in 2009 moved to the IT University of Copenhagen, and in 2012 to the Technical University of Denmark. Over the years, ESC/Java2 has gained many new features, including the ability to reason with multiple theorem provers and integration with Eclipse.
OpenJML, the successor of ESC/Java2, is available for Java 1.8. The source is available at https://github.com/OpenJML
See also
Java Modeling Language (JML)
References
Notes
External links
Java Programming Toolkit Source Release
ESC/Java2 at KindSoftware
SRC-RR-159 Extended Static Checking. - David L. Detlefs, K. Rustan M. Leino, Greg Nelson, James B. Saxe
Extended Static Checking Computer Science & Engineering Colloquia. University of Washington. 1999.
2002 software
Static program analysis tools
Formal methods tools
Formal specification languages
Free computer programming tools | ESC/Java | [
"Mathematics"
] | 652 | [
"Formal methods tools",
"Mathematical software"
] |
679,294 | https://en.wikipedia.org/wiki/Diffuse%20reflection | Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection. An ideal diffuse reflecting surface is said to exhibit Lambertian reflection, meaning that there is equal luminance when viewed from all directions lying in the half-space adjacent to the surface.
A surface built from a non-absorbing powder such as plaster, or from fibers such as paper, or from a polycrystalline material such as white marble, reflects light diffusely with great efficiency. Many common materials exhibit a mixture of specular and diffuse reflection.
The visibility of objects, excluding light-emitting ones, is primarily caused by diffuse reflection of light: it is diffusely-scattered light that forms the image of the object in an observer's eye over a wide range of angles of the observer with respect to the object.
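In computer graphics and vision, an ideal diffuse surface is modeled with Lambert's cosine law: the reflected intensity is proportional to the cosine of the angle between the surface normal and the direction to the light. A minimal Python sketch (the albedo value and vectors are illustrative):

import numpy as np

def lambertian(albedo, n, l):
    # reflected intensity of an ideal diffuse surface with normal n lit from direction l
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * max(np.dot(n, l), 0.0)  # zero when the light is behind the surface

print(lambertian(0.8, np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])))  # 0.8*cos(45°) ≈ 0.566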
Mechanism
Diffuse reflection from solids is generally not due to surface roughness. A flat surface is indeed required to give specular reflection, but it does not prevent diffuse reflection. A piece of highly polished white marble remains white; no amount of polishing will turn it into a mirror. Polishing produces some specular reflection, but the remaining light continues to be diffusely reflected.
The most general mechanism by which a surface gives diffuse reflection does not involve the surface itself: most of the light is contributed by scattering centers beneath the surface, as illustrated in Figure 1.
If one were to imagine that the figure represents snow, and that the polygons are its (transparent) ice crystallites, an impinging ray is partially reflected (a few percent) by the first particle, enters in it, is again reflected by the interface with the second particle, enters in it, impinges on the third, and so on, generating a series of "primary" scattered rays in random directions, which, in turn, through the same mechanism, generate a large number of "secondary" scattered rays, which generate "tertiary" rays, and so forth. All these rays walk through the snow crystallites, which do not absorb light, until they arrive at the surface and exit in random directions. The result is that the light that was sent out is returned in all directions, so that snow is white despite being made of transparent material (ice crystals).
For simplicity, "reflections" are spoken of here, but more generally the interface between the small particles that constitute many materials is irregular on a scale comparable with light wavelength, so diffuse light is generated at each interface, rather than a single reflected ray, but the story can be told the same way.
This mechanism is very general, because almost all common materials are made of "small things" held together. Mineral materials are generally polycrystalline: one can describe them as made of a 3D mosaic of small, irregularly shaped defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light, reproducing the above mechanism.
Few materials do not cause diffuse reflection: among these are metals, which do not allow light to enter; gases; liquids; glass and transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems or a salt crystal; and some very special materials, such as the tissues which make up the cornea and the lens of an eye. These materials can reflect diffusely, however, if their surface is microscopically rough, as in frosted glass (Figure 2), or, of course, if their homogeneous structure deteriorates, as in cataracts of the eye lens.
A surface may also exhibit both specular and diffuse reflection, as is the case, for example, with glossy paints used in home painting, which also give a fraction of specular reflection, while matte paints give almost exclusively diffuse reflection.
Most materials can give some specular reflection, provided that their surface can be polished to eliminate irregularities comparable with the light wavelength (a fraction of a micrometer). Depending on the material and surface roughness, reflection may be mostly specular, mostly diffuse, or anywhere in between. A few materials, like liquids and glasses, lack the internal subdivisions which produce the subsurface scattering mechanism described above, and so give only specular reflection. Among common materials, only polished metals can reflect light specularly with high efficiency, as with the aluminum or silver usually used in mirrors. All other common materials, even when perfectly polished, usually give no more than a few percent specular reflection, except in particular cases, such as grazing-angle reflection by a lake, the total internal reflection of a glass prism, or certain complex structured configurations such as the silvery skin of many fish species or the reflective surface of a dielectric mirror. Diffuse reflection can be highly efficient, as in white materials, due to the summing up of the many subsurface reflections.
Colored objects
Up to this point white objects have been discussed, which do not absorb light. But the above scheme remains valid when the material is absorbent: in this case, diffused rays will lose some wavelengths during their passage through the material and will emerge colored.
Diffusion affects the color of objects in a substantial manner because it determines the average path of light in the material, and hence the extent to which the various wavelengths are absorbed. Red ink looks black while it sits in its bottle; its vivid color is only perceived when it is placed on a scattering material (e.g. paper). This is because light's path through the paper fibers (and through the ink) is only a fraction of a millimeter long. Light coming from the bottle, however, has crossed several centimeters of ink and has been heavily absorbed, even at its red wavelengths.
When a colored object has both diffuse and specular reflection, usually only the diffuse component is colored. A cherry reflects red light diffusely, absorbs all other colors, and has a specular reflection which is essentially white (if the incident light is white light). This is quite general because, except for metals, the reflectivity of most materials depends on their refractive index, which varies little with wavelength (though it is this variation that causes the chromatic dispersion in a prism), so all colors are reflected with nearly the same intensity.
Importance for vision
The vast majority of visible objects are seen primarily by diffuse reflection from their surface.
Exceptions include objects with polished (specularly reflecting) surfaces, and objects that themselves emit light. Rayleigh scattering is responsible for the blue color of the sky, and Mie scattering for the white color of the water droplets in clouds.
Interreflection
Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny or specular. In practical terms, this means that light reflects off non-shiny surfaces such as the ground, walls, or fabric to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects.
In 3D computer graphics, diffuse interreflection is an important component of global illumination. There are a number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two commonly used methods.
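As a hedged illustration of the kind of computation such methods perform, the sketch below estimates single-bounce diffuse interreflection at a point by Monte Carlo integration over the hemisphere; the `samples` input stands in for rays traced into the scene by a hypothetical renderer and is not part of any particular library's API.

```python
import numpy as np

def one_bounce_diffuse(normal, albedo, samples):
    """Monte Carlo estimate of indirect light arriving at a Lambertian point.

    samples: iterable of (direction, incoming_radiance) pairs, assumed drawn
    uniformly over the hemisphere above the surface (pdf = 1/(2*pi)).
    Returns the outgoing radiance contributed by diffuse interreflection.
    """
    n = normal / np.linalg.norm(normal)
    total, count = 0.0, 0
    for direction, radiance in samples:
        d = direction / np.linalg.norm(direction)
        total += radiance * max(float(np.dot(n, d)), 0.0)  # cosine weighting
        count += 1
    # Average over samples, divide by the uniform-hemisphere pdf, apply the BRDF.
    return (albedo / np.pi) * (2.0 * np.pi) * total / max(count, 1)
```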
Spectroscopy
Diffuse reflectance spectroscopy can be used to determine the absorption spectra of powdered samples in cases where transmission spectroscopy is not feasible. This applies to UV-Vis-NIR spectroscopy or mid-infrared spectroscopy.
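In practice, measured diffuse reflectance is commonly converted to a quantity proportional to absorption using the Kubelka–Munk function; a minimal sketch, assuming an optically thick, weakly absorbing powder (the regime where the relation holds):

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2 R).

    For an optically thick powder, F(R) is proportional to the ratio of the
    absorption coefficient to the scattering coefficient, so it plays the
    role that absorbance plays in transmission spectroscopy.
    """
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

print(kubelka_munk([0.9, 0.5, 0.1]))  # lower reflectance -> stronger absorption
```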
See also
Diffuser
List of reflected light sources
Oren–Nayar reflectance model
Reflectivity
Remission
References
Optical phenomena
Shading | Diffuse reflection | [
"Physics"
] | 1,619 | [
"Optical phenomena",
"Physical phenomena"
] |
679,582 | https://en.wikipedia.org/wiki/Hypocenter | A hypocenter or hypocentre, also called ground zero or surface zero, is the point on the Earth's surface directly below a nuclear explosion, meteor air burst, or other mid-air explosion. In seismology, the hypocenter of an earthquake is its point of origin below ground; a synonym is the focus of an earthquake.
Generally, the terms ground zero and surface zero are also used in relation to epidemics and other disasters to mark the point of the most severe damage or destruction. The term is distinguished from the term zero point in that the latter can also be located in the air, underground, or underwater.
Trinity, Hiroshima and Nagasaki
The term "ground zero" originally referred to the hypocenter of the Trinity test in Jornada del Muerto desert near Socorro, New Mexico, and the atomic bombings of Hiroshima and Nagasaki in Japan. The United States Strategic Bombing Survey of the atomic attacks, released in June 1946, used the term liberally, defining it as:
William Laurence, an embedded reporter with the Manhattan Project, reported that "Zero" was "the code name given to the spot chosen for the [Trinity] test" in 1945.
The Oxford English Dictionary, citing the use of the term in a 1946 New York Times report on the destroyed city of Hiroshima, defines ground zero as "that part of the ground situated immediately under an exploding bomb, especially an atomic one."
The term was military slang, used at the Trinity site where the weapon tower for the first nuclear weapon was at "point zero", and moved into general use very shortly after the end of World War II. At Hiroshima, the hypocenter of the attack was Shima Hospital, a short distance from the intended aiming point at Aioi Bridge.
The Pentagon
During the Cold War, the Pentagon (headquarters of United States Department of Defense in Arlington County, Virginia) was an assured target in the event of nuclear war. The open space in the center of the Pentagon became known informally as ground zero. A snack bar that used to be located at the center of this open space was nicknamed "Cafe Ground Zero".
World Trade Center
During the September 11 attacks in 2001, two aircraft were hijacked by 10 al-Qaeda terrorists and were flown into the Twin Towers of the World Trade Center in New York City, causing massive damage and starting fires that caused the weakened 110-story skyscrapers to collapse. The destroyed World Trade Center site soon became known as "ground zero". Rescue workers also used the term "the Pile", referring to the pile of rubble that was left after the buildings collapsed.
Even after the site was cleaned up and construction on the new One World Trade Center and the National September 11 Memorial & Museum was well under way, the term was still frequently used to refer to the site, as when opponents of the Park51 project that was to be located two blocks away from the site labeled it the "Ground Zero mosque".
In advance of the 10th anniversary of the attacks, New York City mayor Michael Bloomberg urged that the "ground zero" moniker be retired, saying, "…the time has come to call those what they are: The World Trade Center and the National September 11th Memorial and Museum."
Meteor air bursts
The hypocenter of a meteor air burst, an asteroid or comet that explodes in the atmosphere rather than striking the surface, is the closest point on the surface to the explosion. The Tunguska event occurred in Siberia in 1908 and flattened an estimated 80 million trees over a large area of forest. The trees at the hypocenter of the blast were left standing, but all their limbs had been blown off by the shockwave. The area around the 2013 Chelyabinsk meteor's hypocenter in Russia was more populated than that of Tunguska, resulting in civil damage and injury, mostly from flying glass shards from broken windows.
Earthquakes
An earthquake's hypocenter or focus is the position where the strain energy stored in the rock is first released, marking the point where the fault begins to rupture. This occurs directly beneath the epicenter, at a distance known as the hypocentral depth or focal depth.
The focal depth can be calculated from measurements based on seismic wave phenomena. As with all wave phenomena in physics, there is uncertainty in such measurements that grows with the wavelength, so the focal depth of the source of these long-wavelength (low-frequency) waves is difficult to determine exactly. Very strong earthquakes radiate a large fraction of their released energy in seismic waves with very long wavelengths; therefore a stronger earthquake involves the release of energy from a larger mass of rock.
Computing the hypocenters of foreshocks, main shock, and aftershocks of earthquakes allows the three-dimensional plotting of the fault along which movement is occurring. The expanding wavefront from the earthquake's rupture propagates at a speed of several kilometers per second; this seismic wave is what is measured at various surface points in order to geometrically determine an initial guess as to the hypocenter. The wave reaches each station at a time that depends on its distance from the hypocenter. A number of things need to be taken into account, most importantly variations in the wave's speed depending on the materials it is passing through. With adjustments for velocity changes, the initial estimate of the hypocenter is made, then a series of linear equations is set up, one for each station. The equations express the difference between the observed arrival times and those calculated from the initial estimated hypocenter. These equations are solved by the method of least squares, which minimizes the sum of the squares of the differences between the observed and calculated arrival times, and a new estimated hypocenter is computed. The system iterates until the location is pinpointed within the margin of error for the velocity computations.
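A hedged sketch of this iterative least-squares scheme (often called Geiger's method), simplified by assuming a uniform wave speed so that travel time is just distance divided by velocity; real location codes substitute layered or 3-D velocity models.

```python
import numpy as np

def locate_hypocenter(stations, arrival_times, v=6.0, iters=20):
    """Iteratively refine hypocenter (x, y, z) and origin time t0.

    stations: (N, 3) array of station coordinates (km)
    arrival_times: (N,) observed arrival times (s)
    v: assumed uniform seismic wave speed (km/s) -- a simplification
    """
    # Crude starting guess: centroid of the network, origin time before first pick.
    m = np.concatenate([stations.mean(axis=0), [float(np.min(arrival_times)) - 1.0]])
    for _ in range(iters):
        d = stations - m[:3]                  # vectors from trial source to stations
        r = np.linalg.norm(d, axis=1)         # source-station distances
        resid = arrival_times - (m[3] + r / v)    # observed minus predicted times
        # Jacobian of predicted arrival time w.r.t. (x, y, z, t0)
        G = np.hstack([-d / (v * r[:, None]), np.ones((len(r), 1))])
        dm, *_ = np.linalg.lstsq(G, resid, rcond=None)  # least-squares update
        m += dm
        if np.linalg.norm(dm[:3]) < 1e-6:     # converged
            break
    return m                                  # hypocenter and origin time
```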
See also
List of meteor air bursts
List of nuclear weapon explosion sites
Tenet, a 2020 film that includes a sub-surface nuclear "hypocenter" in its storyline
References
External links
Seismology
Geometric centers
Atomic bombings of Hiroshima and Nagasaki
Cold War terminology
Metaphors referring to objects
Metaphors referring to places
Metaphors referring to war and violence
Military slang and jargon
September 11 attacks | Hypocenter | [
"Physics",
"Mathematics"
] | 1,285 | [
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
679,596 | https://en.wikipedia.org/wiki/Continuous%20wavelet%20transform | In mathematics, the continuous wavelet transform (CWT) is a formal (i.e., non-numerical) tool that provides an overcomplete representation of a signal by letting the translation and scale parameter of the wavelets vary continuously.
Definition
The continuous wavelet transform of a function $x(t)$ at a scale $a$ and translational value $b$ is expressed by the following integral:

$$X_w(a,b) = \frac{1}{|a|^{1/2}} \int_{-\infty}^{\infty} x(t)\,\overline{\psi}\!\left(\frac{t-b}{a}\right) dt$$

where $\psi(t)$ is a continuous function in both the time domain and the frequency domain called the mother wavelet, and the overline represents the operation of complex conjugation. The main purpose of the mother wavelet is to provide a source function to generate the daughter wavelets, which are simply the translated and scaled versions of the mother wavelet. To recover the original signal $x(t)$, the first inverse continuous wavelet transform can be exploited:

$$x(t) = C_\psi^{-1} \int_{0}^{\infty} \int_{-\infty}^{\infty} X_w(a,b)\,\frac{1}{|a|^{1/2}}\,\tilde{\psi}\!\left(\frac{t-b}{a}\right) db\,\frac{da}{a^2}$$
Here $\tilde{\psi}(t)$ is the dual function of $\psi(t)$, and

$$C_\psi = \int_{-\infty}^{\infty} \frac{\overline{\hat{\psi}(\omega)}\,\hat{\tilde{\psi}}(\omega)}{|\omega|}\,d\omega$$

is the admissibility constant, where the hat denotes the Fourier transform. Sometimes $\tilde{\psi}(t) = \psi(t)$, in which case the admissibility constant becomes

$$C_\psi = \int_{-\infty}^{\infty} \frac{\left|\hat{\psi}(\omega)\right|^2}{|\omega|}\,d\omega$$
Traditionally, this constant is called the wavelet admissibility constant. A wavelet whose admissibility constant satisfies

$$0 < C_\psi < \infty$$

is called an admissible wavelet. To recover the original signal $x(t)$, the second inverse continuous wavelet transform can be exploited:

$$x(t) = \frac{1}{2\pi\,\overline{\hat{\psi}(1)}} \int_{0}^{\infty} \int_{-\infty}^{\infty} \frac{1}{a^2}\,X_w(a,b)\,\exp\!\left(i\,\frac{t-b}{a}\right) db\,da$$

This inverse transform suggests that a wavelet should be defined as

$$\psi(t) = w(t)\exp(it)$$

where $w(t)$ is a window. Such a wavelet may be called an analyzing wavelet, because it admits time-frequency analysis. An analyzing wavelet need not be admissible.
Scale factor
The scale factor $a$ either dilates or compresses a signal. When the scale factor is relatively low, the signal is more contracted, which in turn results in a more detailed resulting graph; the drawback is that a low scale factor does not cover the entire duration of the signal. On the other hand, when the scale factor is high, the signal is stretched out, which means that the resulting graph will be presented in less detail; however, it usually covers the entire duration of the signal.
Continuous wavelet transform properties
By definition, the continuous wavelet transform is a convolution of the input data sequence with a set of functions generated by the mother wavelet. The convolution can be computed by using a fast Fourier transform (FFT) algorithm. Normally, the output $X_w(a,b)$ is a real-valued function except when the mother wavelet is complex. A complex mother wavelet will convert the continuous wavelet transform to a complex-valued function. The power spectrum of the continuous wavelet transform can be represented by $\frac{1}{a}\,|X_w(a,b)|^2$.
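A minimal numerical sketch of this FFT-based evaluation, using a Morlet-style mother wavelet; the normalization and the center frequency w0 = 6 follow common practice but are illustrative choices, not the only valid ones.

```python
import numpy as np

def cwt_morlet(x, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform of signal x at the given scales (seconds).

    The convolution with each scaled wavelet is evaluated as a product in the
    frequency domain: one FFT of the signal, one inverse FFT per scale.
    """
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)   # angular frequencies
    x_hat = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        # Fourier transform of the scaled Morlet daughter wavelet; the
        # (omega > 0) mask keeps the wavelet analytic.
        psi_hat = (np.pi ** -0.25) * np.exp(-0.5 * (a * omega - w0) ** 2)
        psi_hat *= np.sqrt(2.0 * np.pi * a / dt) * (omega > 0)
        out[i] = np.fft.ifft(x_hat * np.conj(psi_hat))
    return out

# Example: a 5 Hz tone sampled at 100 Hz; |W|^2 peaks near a ~ w0/(2*pi*5) ~ 0.19 s.
t = np.arange(0.0, 2.0, 0.01)
W = cwt_morlet(np.sin(2 * np.pi * 5 * t), scales=np.linspace(0.05, 1.0, 50), dt=0.01)
```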
Applications of the wavelet transform
One of the most popular applications of the wavelet transform is image compression. The advantage of using wavelet-based coding in image compression is that it provides significant improvements in picture quality at higher compression ratios over conventional techniques. Since the wavelet transform has the ability to decompose complex information and patterns into elementary forms, it is commonly used in acoustics processing and pattern recognition, and it has also been proposed as an instantaneous frequency estimator. Moreover, wavelet transforms can be applied to the following scientific research areas: edge and corner detection, partial differential equation solving, transient detection, filter design, electrocardiogram (ECG) analysis, texture analysis, business information analysis and gait analysis. Wavelet transforms can also be used in electroencephalography (EEG) data analysis to identify epileptic spikes resulting from epilepsy. The wavelet transform has also been successfully used for the interpretation of time series of landslides and land subsidence, and for calculating the changing periodicities of epidemics.
The continuous wavelet transform (CWT) is very efficient in determining the damping ratio of oscillating signals (e.g. identification of damping in dynamic systems). The CWT is also very resistant to noise in the signal.
See also
Continuous wavelet
S transform
Time-frequency analysis
Cauchy wavelet
References
Further reading
A. Grossmann & J. Morlet, 1984, Decomposition of Hardy functions into square integrable wavelets of constant shape, Soc. Int. Am. Math. (SIAM), J. Math. Analys., 15, 723–736.
Lintao Liu and Houtse Hsu (2012) "Inversion and normalization of time-frequency transform" AMIS 6 No. 1S pp. 67S-74S.
Stéphane Mallat, "A wavelet tour of signal processing", 2nd Edition, Academic Press, 1999.
Ding, Jian-Jiun (2008), Time-Frequency Analysis and Wavelet Transform, viewed 19 January 2008
Polikar, Robi (2001), The Wavelet Tutorial, viewed 19 January 2008
WaveMetrics (2004), Time Frequency Analysis, viewed 18 January 2008
Valens, Clemens (2004), A Really Friendly Guide to Wavelets, viewed 18 September 2018
Mathematica Continuous Wavelet Transform
Lewalle, Jacques: Continuous wavelet transform, viewed 6 February 2010
External links
Theory of continuous functions
Integral transforms
| Continuous wavelet transform | [
"Mathematics"
] | 1,005 | [
"Theory of continuous functions",
"Topology"
] |
679,919 | https://en.wikipedia.org/wiki/Girdler%20sulfide%20process | The Girdler sulfide (GS) process, also known as the Geib–Spevack (GS) process, is an industrial production method for extracting heavy water (deuterium oxide, D2O) from natural water. Heavy water is used in particle research, in deuterium NMR spectroscopy, deuterated solvents for proton NMR spectroscopy, heavy water nuclear reactors (as a coolant and moderator) and deuterated drugs.
In 1943, Karl-Hermann Geib and Jerome S. Spevack independently invented the process. The process is named after the Girdler Company, which constructed the first American plant to implement it.
The method is an isotopic exchange process in which isotopes of hydrogen are swapped between hydrogen sulfide (H2S) and ordinary "light" water (H2O), producing heavy water over several steps. The process is highly energy intensive.
Until its closure in 1997, the Bruce Heavy Water Plant in Ontario (located on the same site as Douglas Point and the Bruce Nuclear Generating Station) was the world's largest heavy water production plant, with a peak capacity of 1600 tonnes per year (800 tonnes per year per full plant, two fully operational plants at its peak). It used the Girdler sulfide process to produce heavy water, and required, by mass, 340,000 units of feed water to produce 1 unit of heavy water.
The first such facility of India's Heavy Water Board to use the Girdler process is at Rawatbhata near Kota, Rajasthan. This was followed by a larger plant at Manuguru, Andhra Pradesh. Other plants exist in, for example, the United States and Romania. Romania, India and the former supplier of much of the world's heavy water demand, Canada, all have operating heavy water reactors, with two at Cernavoda Nuclear Power Plant in Romania making up that country's entire fleet and several each in India (mostly IPHWR) and Canada (exclusively CANDU).
The process
Each of a number of steps consists of two sieve tray columns. One column is maintained at 30 °C and is called the 'cold tower'; the other is maintained at 130 °C and is called the 'hot tower'. The enrichment process is based on the difference in the equilibrium constant of the isotopic exchange reaction between 30 °C and 130 °C.
The process of interest is the equilibrium reaction

H2O + HDS ⇌ HDO + H2S
At 30 °C, the equilibrium constant K = 2.33, while at 130 °C, K = 1.82. This difference is exploited for enriching deuterium in heavy water.
Hydrogen sulfide gas is circulated in a closed loop between the cold tower and the hot tower (although these can be separate towers, they can also be separate sections of one tower, with the cold section at the top). Demineralised and deaerated water is fed to the cold tower, where deuterium migration preferentially takes place from the hydrogen sulfide gas to the liquid water. Normal water is fed to the hot tower, where deuterium transfer takes place from the liquid water to the hydrogen sulfide gas. In cascade systems, the same water is used for both inputs. The mechanism for this is the difference in the equilibrium constant: in the cold tower, the deuterium concentration in the hydrogen sulfide is lowered and the concentration in the water raised. The deuterium in the hot loop slightly prefers to be in the hydrogen sulfide, resulting in excess deuterium in the hydrogen sulfide relative to the cold tower. For n moles of deuterium per mole of protium in the hot tower input water, there are n/1.82 moles of deuterium per mole of protium in the hydrogen sulfide. In the cold tower, part of this deuterium is transferred to the cold tower input water, in accordance with the equilibrium constant. At the input to the cold tower, the ratio of products to reactants in the above equation is 1.82, since both input streams have equal concentrations of deuterium. The chemical equilibrium tries to force more deuterium into the water to correct the ratio. Ideally, for equal amounts of water and hydrogen sulfide, the cold tower should output water with 12% more deuterium than it entered. Enriched water is output from the cold tower, while depleted water is output from the hot tower.
An appropriate cascade system accomplishes enrichment: enriched water is fed into another separation unit and is further enriched.
Normally in this process, water is enriched to 15–20% D2O. Further enrichment to "reactor-grade" heavy water (> 99% D2O) is done in another process, e.g. distillation.
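To give a feel for the scale of the cascade, the hedged back-of-the-envelope sketch below treats each stage as a single ideal equilibrium separation with factor K(30 °C)/K(130 °C); real sieve-tray towers fall well short of ideal stages, so actual plants need far more trays. The natural-abundance figure of ~145 ppm is an assumed round number.

```python
import math

K_COLD, K_HOT = 2.33, 1.82     # equilibrium constants at 30 C and 130 C
ALPHA = K_COLD / K_HOT         # ideal single-stage separation factor, ~1.28

def ideal_stages(feed_ppm=145.0, product_fraction=0.15):
    """Ideal-cascade estimate of equilibrium stages needed to enrich water
    from natural deuterium abundance to the given D2O fraction."""
    feed_odds = (feed_ppm * 1e-6) / (1.0 - feed_ppm * 1e-6)
    product_odds = product_fraction / (1.0 - product_fraction)
    return math.ceil(math.log(product_odds / feed_odds) / math.log(ALPHA))

print(ideal_stages())   # on the order of 30 ideal stages for ~15% D2O
```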
References
Industrial processes
Isotope separation
Name reactions
German inventions of the Nazi period
American inventions | Girdler sulfide process | [
"Chemistry"
] | 1,001 | [
"Name reactions"
] |
680,115 | https://en.wikipedia.org/wiki/Chelation%20therapy | Chelation therapy is a medical procedure that involves the administration of chelating agents to remove heavy metals from the body. Chelation therapy has a long history of use in clinical toxicology and remains in use for some very specific medical treatments, although it is administered under very careful medical supervision due to various inherent risks. These include the mobilization of mercury and other metals through the brain and other parts of the body by the use of weak chelating agents that unbind from metals before elimination, exacerbating existing damage. To avoid mobilization, some practitioners of chelation use strong chelators, such as selenium, taken at low doses over a long period of time.
Chelation therapy must be administered with care as it has a number of possible side effects, including death. In response to increasing use of chelation therapy as alternative medicine and in circumstances in which the therapy should not be used in conventional medicine, various health organizations have confirmed that medical evidence does not support the effectiveness of chelation therapy for any purpose other than the treatment of heavy metal poisoning. Over-the-counter chelation products are not approved for sale in the United States.
Medical uses
Chelation therapy is the preferred medical treatment for metal poisoning, including acute mercury, iron (including in cases of sickle-cell disease and thalassemia), arsenic, lead, uranium, plutonium and other forms of toxic metal poisoning. The chelating agent may be administered intravenously, intramuscularly, or orally, depending on the agent and the type of poisoning.
Chelating agents
There are a variety of common chelating agents with differing affinities for different metals, physical characteristics, and biological mechanism of action. For the most common forms of heavy metal intoxication – lead, arsenic, or mercury – a number of chelating agents are available. Dimercaptosuccinic acid (DMSA) has been recommended by poison control centers around the world for the treatment of lead poisoning in children. Other chelating agents, such as 2,3-dimercaptopropanesulfonic acid (DMPS) and alpha lipoic acid (ALA), are used in conventional and alternative medicine. Some common chelating agents are ethylenediaminetetraacetic acid (EDTA), 2,3-dimercaptopropanesulfonic acid (DMPS), and thiamine tetrahydrofurfuryl disulfide (TTFD). Calcium-disodium EDTA and DMSA are only approved for the removal of lead by the Food and Drug Administration while DMPS and TTFD are not approved by the FDA. These drugs bind to heavy metals in the body and prevent them from binding to other agents. They are then excreted from the body. The chelating process also removes vital nutrients such as vitamins C and E, therefore these must be supplemented.
The German Environmental Agency (Umweltbundesamt) listed DMSA and DMPS as the two most useful and safe chelating agents available.
Side effects
When used properly in response to a diagnosis of harm from metal toxicity, side effects of chelation therapy include dehydration, low blood calcium, harm to kidneys, increased enzymes as would be detected in liver function tests, allergic reactions, and lowered levels of dietary elements. When administered inappropriately, there are the additional risks of hypocalcaemia (low calcium levels), neurodevelopmental disorders, and death.
History
Chelation therapy can be traced back to the early 1930s, when Ferdinand Münz, a German chemist working for I.G. Farben, first synthesized ethylenediaminetetraacetic acid (EDTA). Münz was looking for a replacement for citric acid as a water softener. Chelation therapy itself began during World War II when chemists at the University of Oxford searched for an antidote for lewisite, an arsenic-based chemical weapon. The chemists learned that EDTA was particularly effective in treating lead poisoning.
Following World War II, chelation therapy was used to treat workers who had painted United States naval vessels with lead-based paints. In the 1950s, Norman Clarke Sr. was treating workers at a battery factory for lead poisoning when he noticed that some of his patients had improved angina pectoris following chelation therapy. Clarke subsequently administered chelation therapy to patients with angina pectoris and other occlusive vascular disease and published his findings in The American Journal of the Medical Sciences in December 1956. He hypothesized that "EDTA could dissolve disease-causing plaques in the coronary systems of human beings." In a series of 283 patients treated by Clarke et al. from 1956 to 1960, 87% showed improvement in their symptomatology. Other early medical investigators made similar observations of EDTA's role in the treatment of cardiovascular disease (Bechtel, 1956; Bessman, 1957; Perry, 1961; Szekely, 1963; Wenig, 1958; and Wilder, 1962).
In 1973, a group of practicing physicians created the Academy of Medical Preventics (now the American College for Advancement in Medicine). The academy trains and certifies physicians in the safe administration of chelation therapy. Members of the academy continued to use EDTA therapy for the treatment of vascular disease and developed safer administration protocols.
In the 1960s, BAL (dimercaprol, the lewisite antidote developed at Oxford) was modified into DMSA, a related dithiol with far fewer side effects. DMSA quickly replaced both BAL and EDTA as the primary treatment for lead, arsenic and mercury poisoning in the United States. Esters of DMSA have been developed which are reportedly more effective; for example, the monoisoamyl ester (MiADMSA) is reportedly more effective than DMSA at clearing mercury and cadmium. Research in the former Soviet Union led to the introduction of DMPS, another dithiol, as a mercury-chelating agent. The Soviets also introduced ALA, which is transformed by the body into the dithiol dihydrolipoic acid, a mercury- and arsenic-chelating agent. DMPS has experimental status in the United States, while ALA is a common nutritional supplement.
Since the 1970s, iron chelation therapy has been used as an alternative to regular phlebotomy to treat excess iron stores in people with haemochromatosis. Other chelating agents have been discovered. They all function by making several chemical bonds with metal ions, thus rendering them much less chemically reactive. The resulting complex is water-soluble, allowing it to enter the bloodstream and be excreted harmlessly.
Calcium-disodium EDTA chelation has been studied by the U.S. National Center for Complementary and Alternative Medicine for treating coronary disease. In 1998, the U.S. Federal Trade Commission (FTC) pursued the American College for Advancement in Medicine (ACAM), an organization that promotes "complementary, alternative and integrative medicine" over the claims made regarding the treatment of atherosclerosis in advertisements for EDTA chelation therapy. The FTC concluded that there was a lack of scientific studies to support these claims and that the statements by the ACAM were false. In 1999, the ACAM agreed to stop presenting chelation therapy as effective in treating heart disease, avoiding legal proceedings. In 2010 the U.S. Food and Drug Administration (FDA) warned companies who sold over-the-counter (OTC) chelation products and stated that such "products are unapproved drugs and devices and that it is a violation of federal law to make unproven claims about these products. There are no FDA-approved OTC chelation products."
Controversies
In 1998, the U.S. Federal Trade Commission (FTC) charged that the web site of the American College for Advancement in Medicine (ACAM) and a brochure they published had made false or unsubstantiated claims. In December 1998, the FTC announced that it had secured a consent agreement barring ACAM from making unsubstantiated advertising claims that chelation therapy is effective against atherosclerosis or any other disease of the circulatory system.
In August 2005, doctor error led to the death of a five-year-old boy with autism who was undergoing chelation therapy. Others, including a three-year-old non-autistic girl and a non-autistic adult, have died while undergoing chelation therapy. These deaths were due to cardiac arrest caused by hypocalcemia during chelation therapy. In two of the cases, hypocalcemia appears to have been caused by the administration of Na2EDTA (disodium EDTA) and in the third case the type of EDTA was unknown. Only the three-year-old girl had been found to have an elevated blood lead level and resulting low iron levels and anemia, which is the conventional medical cause for administration of chelation therapy.
According to protocol, EDTA should not be used in the treatment of children. More than 30 deaths have been recorded in association with IV-administered disodium EDTA since the 1970s.
Use in alternative medicine
In alternative medicine, some practitioners claim chelation therapy can treat a variety of ailments, including heart disease and autism. The use of chelation therapy by alternative medicine practitioners for behavioral and other disorders is considered pseudoscientific; there is no proof that it is effective. Chelation therapy prior to heavy metal testing can artificially raise urinary heavy metal concentrations ("provoked" urine testing) and lead to inappropriate and unnecessary treatment. The American College of Medical Toxicology and the American Academy of Clinical Toxicology warn the public that chelating drugs used in chelation therapy may have serious side effects, including liver and kidney damage, blood pressure changes, allergies and in some cases even death of the patient.
Cancer
The American Cancer Society says of chelation therapy: "Available scientific evidence does not support claims that it is effective for treating other conditions such as cancer. Chelation therapy can be toxic and has the potential to cause kidney damage, irregular heartbeat, and even death."
Cardiovascular disease
According to the findings of a 1997 systematic review, EDTA chelation therapy is not effective as a treatment for coronary artery disease and this use is not approved in the United States by the US Food and Drug Administration (FDA).
The American Heart Association stated in 1997 that there is "no scientific evidence to demonstrate any benefit from this form of therapy." The FDA, the National Institutes of Health (NIH) and the American College of Cardiology "all agree with the American Heart Association" that "there have been no adequate, controlled, published scientific studies using currently approved scientific methodology to support this therapy for cardiovascular disease."
A systematic review published in 2005 found that controlled scientific studies did not support chelation therapy for heart disease. It found that very small trials and uncontrolled descriptive studies have reported benefits while larger controlled studies have found results no better than placebo.
In 2009, the Montana Board of Medical Examiners issued a position paper concluding that "chelation therapy has no proven efficacy in the treatment of cardiovascular disease, and in some patients could be injurious."
The U.S. National Center for Complementary and Alternative Medicine (NCCAM) conducted a trial on the chelation therapy's safety and efficacy for patients with coronary artery disease. NCCAM Director Stephen E. Straus cited the "widespread use of chelation therapy in lieu of established therapies, the lack of adequate prior research to verify its safety and effectiveness, and the overall impact of coronary artery disease" as factors motivating the trial. The study has been criticized by some who said it was unethical, unnecessary and dangerous, and that multiple studies conducted prior to it demonstrated that the treatment provides no benefit.
The US National Center for Complementary and Alternative Medicine began the Trial to Assess Chelation Therapy (TACT) in 2003. Patient enrollment was to be completed around July 2009 with final completion around July 2010, but enrollment in the trial was voluntarily suspended by organizers in September 2008 after the Office for Human Research Protections began investigating complaints such as inadequate informed consent. Additionally, the trial was criticized for lacking prior Phase I and II studies, and critics summarized previous controlled trials as having "found no evidence that chelation is superior to placebo for treatment of CAD or PVD." The same critics argued that methodological flaws and lack of prior probability made the trial "unethical, dangerous, pointless, and wasteful." The American College of Cardiology supported the trial and research to explore whether chelation therapy was effective in treating heart disease. Evidence of insurance fraud and other felony convictions among (chelation proponent) investigators further undermined the credibility of the trial.
The final results of TACT were published in November 2012. The authors concluded that disodium EDTA chelation "modestly" reduced the risk of adverse cardiovascular outcomes among stable patients with a history of myocardial infarction. The study also showed a "marked" reduction in cardiovascular events in diabetic patients treated with EDTA chelation. An editorial published in the Journal of the American Medical Association said that "the study findings may provide novel hypotheses that merit further evaluation to help understand the pathophysiology of secondary prevention of vascular disease." Critics of the study characterized the study as showing no support for the use of chelation therapy in coronary heart disease, particularly the claims to reduce the need for coronary artery bypass grafting (CABG, pronounced "cabbage").
Autism
Quackwatch says that autism is one of the conditions for which chelation therapy has been falsely promoted as effective, and practitioners falsify diagnoses of metal poisoning to trick parents into having their children undergo the risky process. Up to 7% of children with autism worldwide have been subjected to chelation therapy. The death of two children in 2005 was caused by the administration of chelation treatments, according to the U.S. Centers for Disease Control and Prevention. One of them had autism. Parents either have a doctor use a treatment for lead poisoning, or buy unregulated supplements, in particular DMSA and lipoic acid. Aspies For Freedom, an autism rights organization, considers this use of chelation therapy unethical and potentially dangerous. There is little to no credible scientific research that supports the use of chelation therapy for the effective treatment of autism.
See also
List of ineffective cancer treatments
Detoxification
References
External links
Chelation Therapy: Unproven Claims and Unsound Theories - Quackwatch
Detoxification
Alternative therapies for developmental and learning disabilities
Alternative cancer treatments
Alternative detoxification
Alternative medical treatments
Autism pseudoscience
Toxic effects of metals
Metal metabolism | Chelation therapy | [
"Chemistry"
] | 3,033 | [
"Metal metabolism",
"Metabolism"
] |
681,185 | https://en.wikipedia.org/wiki/Stress%20concentration | In solid mechanics, a stress concentration (also called a stress raiser or a stress riser or notch sensitivity) is a location in an object where the stress is significantly greater than the surrounding region. Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress. This arises from such details as holes, grooves, notches and fillets. Stress concentrations may also occur from accidental damage such as nicks and scratches.
The degree of concentration of a discontinuity under typically tensile loads can be expressed as a non-dimensional stress concentration factor $K_t$, which is the ratio of the highest stress to the nominal far-field stress. For a circular hole in an infinite plate, $K_t = 3$. The stress concentration factor should not be confused with the stress intensity factor, which is used to define the effect of a crack on the stresses in the region around a crack tip.
For ductile materials, large loads can cause localised plastic deformation or yielding that will typically occur first at a stress concentration allowing a redistribution of stress and enabling the component to continue to carry load. Brittle materials will typically fail at the stress concentration. However, repeated low level loading may cause a fatigue crack to initiate and slowly grow at a stress concentration leading to the failure of even ductile materials. Fatigue cracks always start at stress raisers, so removing such defects increases the fatigue strength.
Description
Stress concentrations occur when there are irregularities in the geometry or material of a structural component that cause an interruption to the flow of stress.
Geometric discontinuities cause an object to experience a localised increase in stress. Examples of shapes that cause stress concentrations are sharp internal corners, holes, and sudden changes in the cross-sectional area of the object as well as unintentional damage such as nicks, scratches and cracks. High local stresses can cause objects to fail more quickly, so engineers typically design the geometry to minimize stress concentrations.
Material discontinuities, such as inclusions in metals, may also concentrate the stress. Inclusions on the surface of a component may be broken from machining during manufacture leading to microcracks that grow in service from cyclic loading. Internally, the failure of the interfaces around inclusions during loading may lead to static failure by microvoid coalescence.
Stress concentration factor
The stress concentration factor, $K_t$, is the ratio of the highest stress to a nominal stress of the gross cross-section, and is defined as

$$K_t = \frac{\sigma_{\max}}{\sigma_{\text{nominal}}}$$
Note that the dimensionless stress concentration factor is a function of the geometry shape and independent of its size. These factors can be found in typical engineering reference materials.
E. Kirsch derived the equations for the elastic stress distribution around a hole. The maximum stress felt near a hole or notch occurs in the area of lowest radius of curvature. In an elliptical hole of length $2a$ and width $2b$, under a far-field stress $\sigma$, the stress at the ends of the major axes is given by Inglis' equation:

$$\sigma_{\max} = \sigma\left(1 + 2\frac{a}{b}\right) = \sigma\left(1 + 2\sqrt{\frac{a}{\rho}}\right)$$

where $\rho = b^2/a$ is the radius of curvature of the elliptical hole. For circular holes in an infinite plate, where $a = b$, the stress concentration factor is $K_t = 3$.
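A small numeric check of Inglis' equation, showing how sharpening the notch tip (reducing ρ) drives the concentration factor up:

```python
import math

def kt_elliptical_hole(a, rho):
    """Stress concentration factor for an elliptical hole in an infinite
    plate under far-field tension: K_t = 1 + 2 * sqrt(a / rho)."""
    return 1.0 + 2.0 * math.sqrt(a / rho)

print(kt_elliptical_hole(1.0, 1.0))    # circular hole (rho = a): K_t = 3, any size
print(kt_elliptical_hole(1.0, 0.01))   # crack-like notch, tip radius a/100: K_t = 21
```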
As the radius of curvature approaches zero, such as at the tip of a sharp crack, the maximum stress approaches infinity and a stress concentration factor cannot therefore be used for a crack. Instead, the stress intensity factor which defines the scaling of the stress field around a crack tip, is used.
Causes of stress concentration
Stress concentration can arise due to various factors. The following are the main causes of stress concentration:
Material Defects: When designing mechanical components, it is generally presumed that the material used is consistent and homogeneous throughout. In practice, however, material inconsistencies such as internal cracks, blowholes, cavities in welds, air holes in metal parts, and non-metallic or foreign inclusions can occur. These defects act as discontinuities within the component, disrupting the uniform distribution of stress and thereby leading to stress concentration.
Contact Stress: Mechanical components are frequently subjected to forces that are concentrated at specific points or small areas. This localized application of force can result in disproportionately high pressures at these points, causing stress concentration. Typical instances include the interactions at the points of contact in meshing gear teeth, the interfaces between cams and followers, and the contact zones in ball bearings.
Thermal Stress: Thermal stress occurs when different parts of a structure expand or contract at different rates due to variations in temperature. This differential in thermal expansion and contraction generates internal stresses, which can lead to areas of stress concentration within the structure.
Geometric Discontinuities: Features such as steps on a shaft, shoulders, and other abrupt changes in the cross-sectional area of components are often necessary for mounting elements like gears and bearings or for assembly considerations. While these features are essential for the functionality of the device, they introduce sharp transitions in geometry that become hotspots for stress concentration. Additionally, design elements like oil holes, grooves, keyways, splines, and screw threads also introduce discontinuities that further exacerbate stress concentration.
Rough Surface: Imperfections on the surface of components, such as machining scratches, stamp marks, or inspection marks, can interrupt the smooth flow of stress across the surface, leading to localized increases in stress. These imperfections, although often small, can significantly impact the durability and performance of mechanical components by initiating stress concentration.
Methods for determining factors
There are experimental methods for measuring stress concentration factors including photoelastic stress analysis, thermoelastic stress analysis, brittle coatings or strain gauges.
During the design phase, there are multiple approaches to estimating stress concentration factors. Several catalogs of stress concentration factors have been published. Perhaps most famous is Stress Concentration Design Factors by Peterson, first published in 1953. Finite element methods are commonly used in design today. Other methods include the boundary element method and meshfree methods.
Limiting the effects of stress concentrations
Stress concentrations can be mitigated through techniques that smoothen the flow of stress around a discontinuity:
Material Removal: Introducing auxiliary holes in the high-stress region to create a more gradual transition. The size and position of these holes must be optimized. A counter-intuitive example, known as crack tip blunting, is to mitigate one of the worst types of stress concentrations, a crack, by drilling a large hole at the end of the crack. The drilled hole, with its relatively large size, serves to increase the effective crack tip radius and thus reduce the stress concentration.
Hole Reinforcement: Adding higher strength material around the hole, usually in the form of bonded rings or doublers. Composite reinforcements can reduce the SCF.
Shape Optimization: Adjusting the hole shape, often transitioning from circular to elliptical, to minimize stress gradients. This must be checked for feasibility. One example is adding a fillet to internal corners. Another example is in a threaded component, where the force flow line is bent as it passes from the shank portion to the threaded portion; as a result, stress concentration takes place. To reduce this, a small undercut is made between the shank and threaded portions.
Functionally Graded Materials: Using materials with properties that vary gradually can reduce the SCF compared to a sudden change in material.
The optimal mitigation technique depends on the specific geometry, loading scenario, and manufacturing constraints. In general, a combination of methods is required for the best result. While there is no universal solution, careful analysis of the stress flow and parameterization of the model can point designers toward an effective stress reduction strategy.
Examples
The de Havilland Comet aircraft experienced a number of catastrophic failures that were eventually found to be due to fatigue cracks growing from the high stress concentration caused by the use of punched rivet holes around the windows. The square passenger windows were also found to have higher stress concentrations than expected and were redesigned.
Brittle fractures at the corners of hatches in Liberty ships in cold and stressful conditions in winter storms in the Atlantic Ocean.
A focus point of stress on the margins of an implant, where metal meets bone, of an implanted orthosis is very likely to be the point of failure.
References
External links
When Metal Lets Us Down
Engineering concepts
Elasticity (physics) | Stress concentration | [
"Physics",
"Materials_science",
"Engineering"
] | 1,653 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"nan",
"Physical properties"
] |
681,241 | https://en.wikipedia.org/wiki/Creep%20%28deformation%29 | In materials science, creep (sometimes called cold flow) is the tendency of a solid material to undergo slow deformation while subject to persistent mechanical stresses. It can occur as a result of long-term exposure to high levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods and generally increases as they near their melting point.
The rate of deformation is a function of the material's properties, exposure time, exposure temperature and the applied structural load. Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function – for example creep of a turbine blade could cause the blade to contact the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and metallurgists when evaluating components that operate under high stresses or high temperatures. Creep is a deformation mechanism that may or may not constitute a failure mode. For example, moderate creep in concrete is sometimes welcomed because it relieves tensile stresses that might otherwise lead to cracking.
Unlike brittle fracture, creep deformation does not occur suddenly upon the application of stress. Instead, strain accumulates as a result of long-term stress. Therefore, creep is a "time-dependent" deformation.
Creep or cold flow is of great concern in plastics. Blocking agents are chemicals used to prevent or inhibit cold flow; otherwise, rolled or stacked sheets stick together.
Temperature dependence
The temperature range in which creep deformation occurs depends on the material. Creep deformation generally occurs when a material is stressed at a temperature near its melting point. While tungsten requires a temperature in the thousands of degrees before the onset of creep deformation, lead may creep at room temperature, and ice will creep at temperatures below 0 °C. Plastics and low-melting-temperature metals, including many solders, can begin to creep at room temperature. Glacier flow is an example of creep processes in ice. The effects of creep deformation generally become noticeable at approximately 35% of the melting point (in Kelvin) for metals and at 45% of the melting point for ceramics.
Theoretical framework
Creep behavior can be split into three main stages.
In primary, or transient, creep, the strain rate is a function of time. In class M materials, which include most pure materials, the primary strain rate decreases over time. This can be due to increasing dislocation density, or it can be due to evolving grain size. In class A materials, which have large amounts of solid solution hardening, the strain rate increases over time due to a thinning of solute drag atoms as dislocations move.
In the secondary, or steady-state, creep, dislocation structure and grain size have reached equilibrium, and therefore strain rate is constant. Equations that yield a strain rate refer to the steady-state strain rate. Stress dependence of this rate depends on the creep mechanism.
In tertiary creep, the strain rate exponentially increases with stress. This can be due to necking phenomena, internal cracks, or voids, which all decrease the cross-sectional area and increase the true stress on the region, further accelerating deformation and leading to fracture.
Mechanisms of deformation
Depending on the temperature and stress, different deformation mechanisms are activated. Though there are generally many deformation mechanisms active at all times, usually one mechanism is dominant, accounting for almost all deformation.
Various mechanisms are:
Bulk diffusion (Nabarro–Herring creep)
Grain boundary diffusion (Coble creep)
Glide-controlled dislocation creep: dislocations move via glide and climb, and the speed of glide is the dominant factor on strain rate
Climb-controlled dislocation creep: dislocations move via glide and climb, and the speed of climb is the dominant factor on strain rate
Harper–Dorn creep: a low-stress creep mechanism in some pure materials
At low temperatures and low stress, creep is essentially nonexistent and all strain is elastic. At low temperatures and high stress, materials experience plastic deformation rather than creep. At high temperatures and low stress, diffusional creep tends to be dominant, while at high temperatures and high stress, dislocation creep tends to be dominant.
Deformation mechanism maps
Deformation mechanism maps provide a visual tool categorizing the dominant deformation mechanism as a function of homologous temperature, shear modulus-normalized stress, and strain rate. Generally, two of these three properties (most commonly temperature and stress) are the axes of the map, while the third is drawn as contours on the map.
To populate the map, constitutive equations are found for each deformation mechanism. These are used to solve for the boundaries between each deformation mechanism, as well as the strain rate contours.
Deformation mechanism maps can be used to compare different strengthening mechanisms, as well as compare different types of materials.
The general form of the creep equation is

$$\frac{d\varepsilon}{dt} = \frac{C\sigma^m}{d^b}\,e^{-\frac{Q}{kT}}$$

where ε is the creep strain, C is a constant dependent on the material and the particular creep mechanism, m and b are exponents dependent on the creep mechanism, Q is the activation energy of the creep mechanism, σ is the applied stress, d is the grain size of the material, k is the Boltzmann constant, and T is the absolute temperature.
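A hedged numerical sketch of this equation; the parameter values below are placeholders chosen only to expose the mechanism-dependent exponents discussed in the sections that follow, not measured material data.

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def creep_rate(C, sigma, m, d, b, Q, T):
    """General creep equation: d(eps)/dt = C * sigma**m / d**b * exp(-Q/(k*T))."""
    return C * sigma ** m / d ** b * math.exp(-Q / (K_B * T))

# Grain-size sensitivity: halving d multiplies the rate by 2**b, so Coble
# creep (b = 3) speeds up 8x while Nabarro-Herring (b = 2) speeds up only 4x.
# For dislocation creep, b is small (taken as ~0 here for illustration).
for name, m, b in [("Nabarro-Herring", 1, 2), ("Coble", 1, 3), ("dislocation", 5, 0)]:
    ratio = creep_rate(1, 10, m, 0.5, b, 1.0, 1000) / creep_rate(1, 10, m, 1.0, b, 1.0, 1000)
    print(f"{name}: rate factor when grain size halves = {ratio:.0f}")
```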
Dislocation creep
At high stresses (relative to the shear modulus), creep is controlled by the movement of dislocations.
For dislocation creep, Q = Q(self diffusion), 4 ≤ m ≤ 6, and b < 1. Therefore, dislocation creep has a strong dependence on the applied stress and the intrinsic activation energy and a weaker dependence on grain size. As grain size gets smaller, grain boundary area gets larger, so dislocation motion is impeded.
Some alloys exhibit a very large stress exponent (m > 10), and this has typically been explained by introducing a "threshold stress," σth, below which creep can't be measured. The modified power law equation then becomes

$$\frac{d\varepsilon}{dt} = A\left(\sigma - \sigma_{th}\right)^m e^{-\frac{Q}{RT}}$$

where A, Q and m can all be explained by conventional mechanisms (so 3 ≤ m ≤ 10), and R is the gas constant. The creep rate increases with increasing applied stress, since the applied stress tends to drive the dislocation past the barrier and bring the dislocation into a lower energy state after bypassing the obstacle, which means that the dislocation is inclined to pass the obstacle. In other words, part of the work required to overcome the energy barrier of passing an obstacle is provided by the applied stress and the remainder by thermal energy.
Nabarro–Herring creep
Nabarro–Herring (NH) creep is a form of diffusion creep, while dislocation glide creep does not involve atomic diffusion. Nabarro–Herring creep dominates at high temperatures and low stresses. As shown in the figure on the right, the lateral sides of the crystal are subjected to tensile stress and the horizontal sides to compressive stress. The atomic volume is altered by applied stress: it increases in regions under tension and decreases in regions under compression. So the activation energy for vacancy formation is changed by ±σΩ, where Ω is the atomic volume, the positive value being for compressive regions and the negative value for tensile regions. Since the fractional vacancy concentration is proportional to $\exp(-Q_f/kT)$, where Qf is the vacancy-formation energy, the vacancy concentration is higher in tensile regions than in compressive regions, leading to a net flow of vacancies from the regions under tension to the regions under compression, and this is equivalent to a net atom diffusion in the opposite direction, which causes the creep deformation: the grain elongates in the tensile stress axis and contracts in the compressive stress axis.
In Nabarro–Herring creep, k is related to the diffusion coefficient of atoms through the lattice, Q = Q(self diffusion), m = 1, and b = 2. Therefore, Nabarro–Herring creep has a weak stress dependence and a moderate grain size dependence, with the creep rate decreasing as the grain size is increased.
Nabarro–Herring creep is strongly temperature dependent. For lattice diffusion of atoms to occur in a material, neighboring lattice sites or interstitial sites in the crystal structure must be free. A given atom must also overcome the energy barrier to move from its current site (it lies in an energetically favorable potential well) to the nearby vacant site (another potential well). The general form of the diffusion equation is

$$D = D_0\,e^{-\frac{Q}{kT}}$$
where D0 has a dependence on both the attempted jump frequency and the number of nearest neighbor sites and the probability of the sites being vacant. Thus there is a double dependence upon temperature. At higher temperatures the diffusivity increases due to the direct temperature dependence of the equation, the increase in vacancies through Schottky defect formation, and an increase in the average energy of atoms in the material. Nabarro–Herring creep dominates at very high temperatures relative to a material's melting temperature.
Coble creep
Coble creep is the second form of diffusion-controlled creep. In Coble creep the atoms diffuse along grain boundaries to elongate the grains along the stress axis. This causes Coble creep to have a stronger grain size dependence than Nabarro–Herring creep, thus, Coble creep will be more important in materials composed of very fine grains. For Coble creep k is related to the diffusion coefficient of atoms along the grain boundary, Q = Q(grain boundary diffusion), m = 1, and b = 3. Because Q(grain boundary diffusion) is less than Q(self diffusion), Coble creep occurs at lower temperatures than Nabarro–Herring creep. Coble creep is still temperature dependent, as the temperature increases so does the grain boundary diffusion. However, since the number of nearest neighbors is effectively limited along the interface of the grains, and thermal generation of vacancies along the boundaries is less prevalent, the temperature dependence is not as strong as in Nabarro–Herring creep. It also exhibits the same linear dependence on stress as Nabarro–Herring creep. Generally, the diffusional creep rate should be the sum of Nabarro–Herring creep rate and Coble creep rate. Diffusional creep leads to grain-boundary separation, that is, voids or cracks form between the grains. To heal this, grain-boundary sliding occurs. The diffusional creep rate and the grain boundary sliding rate must be balanced if there are no voids or cracks remaining. When grain-boundary sliding can not accommodate the incompatibility, grain-boundary voids are generated, which is related to the initiation of creep fracture.
Solute drag creep
Solute drag creep is one of the mechanisms for power-law creep (PLC), involving both dislocation and diffusional flow. Solute drag creep is observed in certain metallic alloys. In these alloys, the creep rate increases during the first stage of creep (transient creep) before reaching a steady-state value. This phenomenon can be explained by a model associated with solid-solution strengthening. At low temperatures, the solute atoms are immobile and increase the flow stress required to move dislocations. However, at higher temperatures, the solute atoms are more mobile and may form atmospheres and clouds surrounding the dislocations. This is especially likely if the solute atom has a large misfit in the matrix. The solutes are attracted by the dislocation stress fields and are able to relieve the elastic stress fields of existing dislocations. Thus the solutes become bound to the dislocations. The concentration of solute, C, at a distance, r, from a dislocation is given by the Cottrell atmosphere, defined as

$$C = C_0\,\exp\!\left(\frac{\beta}{kTr}\right)$$
where C0 is the concentration at r = ∞ and β is a constant which defines the extent of segregation of the solute. When surrounded by a solute atmosphere, dislocations that attempt to glide under an applied stress are subjected to a back stress exerted on them by the cloud of solute atoms. If the applied stress is sufficiently high, the dislocation may eventually break away from the atmosphere, allowing the dislocation to continue gliding under the action of the applied stress. The maximum force (per unit length) that the atmosphere of solute atoms can exert on the dislocation was derived by Cottrell and Jaswon.
When the diffusion of solute atoms is activated at higher temperatures, the solute atoms which are "bound" to the dislocations by the misfit can move along with edge dislocations as a "drag" on their motion, provided the dislocation motion or the creep rate is not too high. The amount of "drag" exerted by the solute atoms on the dislocation is related to the diffusivity of the solute atoms in the metal at that temperature, with a higher diffusivity leading to lower drag and vice versa. The velocity at which the dislocations glide can be approximated by a power law of the form v = Bσ^m, with B = B0 exp(−Q/RT),
where m is the effective stress exponent, Q is the apparent activation energy for glide, and B0 is a constant. The parameter B in the above equation was derived by Cottrell and Jaswon for the interaction between solute atoms and dislocations on the basis of the relative atomic size misfit εa of solutes; in their expression, k is the Boltzmann constant, r1 and r2 are the internal and external cut-off radii of the dislocation stress field, and c0 and Dsol are the atomic concentration of the solute and the solute diffusivity, respectively. Dsol also has a temperature dependence that makes a determining contribution to Qg.
If the cloud of solutes does not form or the dislocations are able to break away from their clouds, glide occurs in a jerky manner: fixed obstacles, formed by dislocations in combination with solutes, are overcome after a certain waiting time with the support of thermal activation. The exponent m is greater than 1 in this case. The equations show that the hardening effect of solutes is strong if the factor B in the power-law equation is low, so that the dislocations move slowly and the diffusivity Dsol is low. Also, solute atoms with both a high concentration in the matrix and a strong interaction with dislocations are strong hardeners. Since the misfit strain of solute atoms is one of the ways they interact with dislocations, it follows that solute atoms with large atomic misfit are strong hardeners. A low diffusivity Dsol is an additional condition for strong hardening.
Solute drag creep sometimes shows a special phenomenon, over a limited strain-rate range, which is called the Portevin–Le Chatelier effect. When the applied stress becomes sufficiently large, the dislocations break away from the solute atoms, since dislocation velocity increases with stress. After breakaway, the stress decreases and the dislocation velocity also decreases, which allows the solute atoms to catch up and reach the previously departed dislocations again, leading to a stress increase. The process repeats itself when the next local stress maximum is reached. Thus repetitive local stress maxima and minima can be detected during solute drag creep.
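The serrated flow can be caricatured by a toy relaxation-oscillator model: stress ramps up while dislocations are pinned and drops abruptly at each breakaway event. All numbers below are arbitrary illustration values, not a quantitative model of the Portevin–Le Chatelier effect.

```python
# Toy sawtooth model of serrated (jerky) flow during solute drag creep.
ramp = 2.0            # stress ramp rate while dislocations are pinned (MPa/s, arbitrary)
sigma_break = 250.0   # stress at which dislocations tear free of their atmospheres (MPa, arbitrary)
drop = 15.0           # stress relaxed by the plastic burst at breakaway (MPa, arbitrary)

sigma, dt = 230.0, 0.1
trace = []
for _ in range(1000):
    sigma += ramp * dt            # solutes pin the dislocations: stress ramps up
    if sigma >= sigma_break:      # breakaway: a burst of glide relaxes the stress,
        sigma -= drop             # then the solutes re-pin the slowed dislocations
    trace.append(sigma)
# 'trace' oscillates between ~235 and 250 MPa: repetitive local maxima and minima.
```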
Dislocation climb-glide creep
Dislocation climb-glide creep is observed in materials at high temperature. The initial creep rate is larger than the steady-state creep rate. Climb-glide creep could be illustrated as follows: when the applied stress is not enough for a moving dislocation to overcome the obstacle on its way via dislocation glide alone, the dislocation could climb to a parallel slip plane by diffusional processes, and the dislocation can glide on the new plane. This process repeats itself each time when the dislocation encounters an obstacle. The creep rate could be written as:
where ACG includes details of the dislocation loop geometry, DL is the lattice diffusivity, M is the number of dislocation sources per unit volume, σ is the applied stress, and Ω is the atomic volume. The exponent m for dislocation climb-glide creep is 4.5 if M is independent of stress; this value of m is consistent with the results of numerous experimental studies.
Harper–Dorn creep
Harper–Dorn creep is a climb-controlled dislocation mechanism at low stresses that has been observed in aluminum, lead, and tin systems, in addition to nonmetal systems such as ceramics and ice. It was first observed by Harper and Dorn in 1957. It is characterized by two principal phenomena: a power-law relationship between the steady-state strain rate and applied stress at a constant temperature which is weaker than the natural power law of creep, and a steady-state strain rate that is independent of grain size for a given temperature and applied stress. The latter observation implies that Harper–Dorn creep is controlled by dislocation movement; namely, since creep can occur by vacancy diffusion (Nabarro–Herring creep, Coble creep), grain-boundary sliding, and/or dislocation movement, and since the first two mechanisms are grain-size dependent, Harper–Dorn creep must be dislocation-motion dependent. The same was confirmed in 1972 by Barrett and co-workers, whose FeAl3 precipitates lowered the creep rates by two orders of magnitude compared to highly pure Al, indicating Harper–Dorn creep to be a dislocation-based mechanism.
Harper–Dorn creep is typically overwhelmed by other creep mechanisms in most situations, and is therefore not observed in most systems. The phenomenological equation which describes Harper–Dorn creep is ε̇ = ρ0 (Dv G b^3/(kT)) (σs/G)^n,
where ρ0 is the dislocation density (constant for Harper–Dorn creep), Dv is the diffusivity through the volume of the material, G is the shear modulus, b is the Burgers vector, σs is the applied stress, and n is the stress exponent, which varies between 1 and 3.
Later investigation of the creep region
Twenty-five years after Harper and Dorn published their work, Mohamed and Ginter made an important contribution in 1982 by evaluating the potential for achieving Harper–Dorn creep in samples of Al using different processing procedures. The experiments showed that Harper–Dorn creep is achieved with stress exponent n = 1, and only when the internal dislocation density prior to testing is exceptionally low. By contrast, Harper–Dorn creep was not observed in polycrystalline Al and single crystal Al when the initial dislocation density was high.
However, various conflicting reports demonstrate the uncertainties at very low stress levels. One report, by Blum and Maier, claimed that the experimental evidence for Harper–Dorn creep is not fully convincing. They argued that the necessary condition for Harper–Dorn creep is not fulfilled in Al of 99.99% purity, and that the steady-state stress exponent n of the creep rate is always much larger than 1.
The subsequent work conducted by Ginter et al. confirmed that Harper–Dorn creep was attained in Al with 99.9995% purity but not in Al with 99.99% purity and, in addition, the creep curves obtained in the very high purity material exhibited regular and periodic accelerations. They also found that the creep behavior no longer follows a stress exponent of n = 1 when the tests are extended to very high strains of >0.1 but instead there is evidence for a stress exponent of n > 2.
Requirements for the occurrence
Harper–Dorn creep is usually regarded as a Newtonian viscous process with n = 1, although some recent experimental evidence suggests that the stress exponent may be closer to 2. Harper–Dorn creep is expected in a low-stress creep regime where the stress exponent is lower than in the conventional power-law regime, where n ≈ 3–5.
Unlike Nabarro–Herring creep, which depends on grain size, the Harper–Dorn flow process is independent of grain size. In the initial experiments of Harper and Dorn, identical creep rates were recorded over a wide range of grain sizes in polycrystalline samples and in a combination of polycrystalline samples and single crystals.
The measured creep rates are significantly faster, typically by more than two orders of magnitude, than the creep rates anticipated for Nabarro–Herring diffusion creep; at the very high testing temperatures involved, Coble diffusion creep is of negligible significance.
The volumetric activation energy indicates that the rate of Harper–Dorn creep is controlled by vacancy diffusion to and from dislocations, resulting in climb-controlled dislocation motion. Unlike in other creep mechanisms, the dislocation density here is constant and independent of the applied stress.
The dislocation density must be low for Harper–Dorn creep to dominate. The density has been proposed to increase as dislocations move via cross-slip from one slip-plane to another, thereby increasing the dislocation length per unit volume. Cross-slip can also result in jogs along the length of the dislocation, which, if large enough, can act as single-ended dislocation sources.
Sintering
At high temperatures, it is energetically favorable for voids to shrink in a material. The application of tensile stress opposes the reduction in energy gained by void shrinkage. Thus, a certain magnitude of applied tensile stress is required to offset these shrinkage effects and cause void growth and creep fracture in materials at high temperature. This stress occurs at the sintering limit of the system.
The stress tending to shrink voids that must be overcome is related to the surface energy and surface area-to-volume ratio of the voids. For a general void with surface energy γ and principal radii of curvature r1 and r2, the sintering limit stress is σsint = γ(1/r1 + 1/r2).
Below this critical stress, voids will tend to shrink rather than grow. Additional void shrinkage effects will also result from the application of a compressive stress. For typical descriptions of creep, it is assumed that the applied tensile stress exceeds the sintering limit.
Creep also explains one of several contributions to densification during metal powder sintering by hot pressing. A main aspect of densification is the shape change of the powder particles. Since this change involves permanent deformation of crystalline solids, it can be considered a plastic deformation process and thus sintering can be described as a high temperature creep process. The applied compressive stress during pressing accelerates void shrinkage rates and allows a relation between the steady-state creep power law and densification rate of the material. This phenomenon is observed to be one of the main densification mechanisms in the final stages of sintering, during which the densification rate (assuming gas-free pores) can be explained by:
in which ρ̇ is the densification rate, ρ is the density, Pe is the pressure applied, n describes the exponent of strain-rate behavior, and A is a mechanism-dependent constant. A and n are from the following form of the general steady-state creep equation, ε̇ = Aσ^n,
where ε̇ is the strain rate, and σ is the tensile stress. For the purposes of this mechanism, the constant A can be written as A = A′ D0 exp(−Q/(kT)) μb/(kT), where A′ is a dimensionless, experimental constant, μ is the shear modulus, b is the Burgers vector, k is the Boltzmann constant, T is absolute temperature, D0 is the diffusion coefficient, and Q is the diffusion activation energy.
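A short numerical sketch of this power-law sensitivity follows; it evaluates only the relative steady-state creep rate, ε̇ ∝ σ^n exp(−Q/kT), with an illustrative exponent and activation energy rather than fitted material constants.

```python
import numpy as np

k = 8.617e-5      # Boltzmann constant, eV/K
n = 4.0           # stress exponent (illustrative power-law creep value)
Q = 2.0           # diffusion activation energy, eV (illustrative)

def relative_rate(sigma, T, sigma0=10.0, T0=900.0):
    """Creep rate relative to a reference state (sigma0 in MPa, T0 in K)."""
    return (sigma / sigma0) ** n * np.exp(-Q / (k * T) + Q / (k * T0))

print(relative_rate(20.0, 900.0))   # doubling the stress: rate grows by 2^n = 16
print(relative_rate(10.0, 1000.0))  # +100 K at fixed stress: Arrhenius acceleration, ~13x here
```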
Examples
Polymers
Creep can occur in polymers and metals, which are considered viscoelastic materials. When a polymeric material is subjected to an abrupt force, the response can be modeled using the Kelvin–Voigt model. In this model, the material is represented by a Hookean spring and a Newtonian dashpot in parallel. The creep strain is given by the following convolution integral: ε(t) = σC0 + σC ∫0∞ f(τ)(1 − e^(−t/τ)) dτ,
where σ is applied stress, C0 is instantaneous creep compliance, C is creep compliance coefficient, τ is retardation time, and f(τ) is the distribution of retardation times.
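A discretized version of this integral, with a three-term retardation spectrum standing in for f(τ), can be sketched as follows; the compliances, weights, and retardation times are hypothetical placeholder values.

```python
import numpy as np

sigma = 1.0                           # applied step stress (arbitrary units)
C0, C = 0.5, 1.0                      # instantaneous compliance and compliance coefficient (hypothetical)
taus = np.array([1.0, 10.0, 100.0])   # discrete retardation times (hypothetical spectrum)
weights = np.array([0.2, 0.5, 0.3])   # discretized f(tau), normalized to sum to 1

t = np.linspace(0.0, 500.0, 6)
# Discrete analogue of the convolution integral above:
# strain(t) = sigma*C0 + sigma*C * sum_i f_i * (1 - exp(-t/tau_i))
strain = sigma * (C0 + C * ((1 - np.exp(-t[:, None] / taus)) @ weights))
print(strain)   # rises from sigma*C0 toward the long-time limit sigma*(C0 + C)
```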
When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.
At a time t0, a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time t1 at which the stress is relieved, at which time the strain immediately decreases (discontinuity) then continues decreasing gradually to a residual strain.
Viscoelastic creep data can be presented in one of two ways. Total strain can be plotted as a function of time for a given temperature or temperatures. Below a critical value of applied stress, a material may exhibit linear viscoelasticity. Above this critical stress, the creep rate grows disproportionately faster. The second way of graphically presenting viscoelastic creep in a material is by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. Below its critical stress, the viscoelastic creep modulus is independent of the stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value.
Additionally, the molecular weight of the polymer of interest is known to affect its creep behavior. The effect of increasing molecular weight tends to promote secondary bonding between polymer chains and thus make the polymer more creep resistant. Similarly, aromatic polymers are even more creep resistant due to the added stiffness from the rings. Both molecular weight and aromatic rings add to polymers' thermal stability, increasing the creep resistance of a polymer.
Both polymers and metals can creep. Polymers experience significant creep at temperatures above around −200 °C; however, there are three main differences between polymeric and metallic creep: in metals, creep is not linearly viscoelastic, it is not recoverable, and it is present only at high temperatures.
Polymers show creep basically in two different ways. At typical work loads (5% up to 50%) ultra-high-molecular-weight polyethylene (Spectra, Dyneema) will show time-linear creep, whereas polyester or aramids (Twaron, Kevlar) will show a time-logarithmic creep.
Wood
Wood is considered an orthotropic material, exhibiting different mechanical properties in three mutually perpendicular directions. Experiments show that the tangential direction in solid wood tends to display a slightly higher creep compliance than the radial direction. In the longitudinal direction, the creep compliance is relatively low and usually shows no time-dependency in comparison to the other directions.
It has also been shown that there is a substantial difference in viscoelastic properties of wood depending on loading modality (creep in compression or tension). Studies have shown that certain Poisson's ratios gradually go from positive to negative values during the duration of the compression creep test, which does not occur in tension.
Concrete
The creep of concrete, which originates from the calcium silicate hydrates (C-S-H) in the hardened Portland cement paste (which is the binder of mineral aggregates), is fundamentally different from the creep of metals as well as polymers. Unlike the creep of metals, it occurs at all stress levels and, within the service stress range, is linearly dependent on the stress if the pore water content is constant. Unlike the creep of polymers and metals, it exhibits multi-month aging, caused by chemical hardening due to hydration which stiffens the microstructure, and multi-year aging, caused by long-term relaxation of self-equilibrated microstresses in the nanoporous microstructure of the C-S-H. Even in fully cured concrete, creep continues, though at a diminishing rate, for years.
Metals
Creep in metals primarily manifests as movement in their microstructures. While polymers and metals share some similarities in creep, metallic creep displays a different mechanical response and must be modeled differently. For example, creep in polymers can be modeled using the Kelvin–Voigt model with a Hookean spring and dashpot, whereas in metals creep is represented by plastic deformation mechanisms such as dislocation glide, climb, and grain-boundary sliding. Understanding the mechanisms behind creep in metals is becoming increasingly important for reliability and material lifetime as the operating temperatures for applications involving metals rise. Unlike polymers, in which creep deformation can occur at very low temperatures, metals typically creep only at high temperatures. Key examples are scenarios in which metal components, such as intermetallics or refractory metals, are subject to high temperatures and mechanical loads: turbine blades, engine components, and other structural elements. Refractory metals, such as tungsten, molybdenum, and niobium, are known for their exceptional mechanical properties at high temperatures, making them useful materials in the aerospace, defense, and electronics industries.
Case studies
The collapse of the World Trade Center was due in part to creep from increased temperature, though mostly to the reduced yield strength of steel at higher temperatures.
The creep rate of hot pressure-loaded components in a nuclear reactor at power can be a significant design constraint, since the creep rate is enhanced by the flux of energetic particles.
Creep in epoxy anchor adhesive was blamed for the Big Dig tunnel ceiling collapse in Boston, Massachusetts that occurred in July 2006.
The design of tungsten light bulb filaments attempts to reduce creep deformation. Sagging of the filament coil between its supports increases with time due to the weight of the filament itself. If too much deformation occurs, the adjacent turns of the coil touch one another, causing local overheating, which quickly leads to failure of the filament. The coil geometry and supports are therefore designed to limit the stresses caused by the weight of the filament, and a special tungsten alloy with small amounts of oxygen trapped in the crystallite grain boundaries is used to slow the rate of Coble creep.
Creep can cause gradual cut-through of wire insulation, especially when stress is concentrated by pressing insulated wire against a sharp edge or corner. Special creep-resistant insulations such as Kynar (polyvinylidene fluoride) are used in wire wrap applications to resist cut-through due to the sharp corners of wire wrap terminals. Teflon insulation is resistant to elevated temperatures and has other desirable properties, but is notoriously vulnerable to cold-flow cut-through failures caused by creep.
In steam turbine power plants, pipes carry steam at high temperatures and pressures. In jet engines, turbine-inlet temperatures are high enough to initiate creep deformation in even advanced-design coated turbine blades. Hence, it is crucial for correct functionality to understand the creep deformation behavior of materials.
Creep deformation is important not only in systems where high temperatures are endured, such as nuclear power plants, jet engines, and heat exchangers, but also in the design of many everyday objects. For example, metal paper clips are stronger than plastic ones because plastics creep at room temperature. Aging glass windows are often erroneously used as an example of this phenomenon: measurable creep would only occur at temperatures above the glass transition temperature. While glass does exhibit creep under the right conditions, apparent sagging in old windows may instead be a consequence of obsolete manufacturing processes, such as that used to create crown glass, which resulted in inconsistent thickness.
Fractal geometry, using a deterministic Cantor structure, has been used to model surface topography in recent work on the thermoviscoelastic creep contact of rough surfaces. Various viscoelastic idealizations are used to model the surface materials, including the Maxwell, Kelvin–Voigt, standard linear solid, and Jeffreys models.
Nimonic 75 has been certified by the European Union as a standard creep reference material.
The practice of tinning stranded wires to facilitate connecting them to screw terminals, though prevalent and considered standard practice for quite a while, has been discouraged by professional electricians: solder is likely to creep under the pressure exerted on the tinned wire end by the terminal screw, causing the joint to lose tension and hence create a loose contact over time. The accepted practice when connecting stranded wire to a screw terminal is to use a wire ferrule on the end of the wire.
Prevention
Generally, materials have better creep resistance if they have higher melting temperatures, lower diffusivity, and higher shear strength. Close-packed structures are usually more creep resistant as they tend to have lower diffusivity than non-close-packed structures. Common methods to reduce creep include:
Solid solution strengthening: adding other elements in solid solution can slow diffusion, as well as slow dislocation motion via the mechanism of solute drag.
Particle dispersion strengthening: adding particles, often incoherent oxide or carbide particles, blocks dislocation motion.
Precipitation hardening: precipitating a second phase out of the primary lattice blocks dislocation motion.
Grain size: increasing grain size decreases the amount of grain boundary area, which results in slower creep because of the high diffusion rate along grain boundaries. This is the opposite of low-temperature applications, where increasing grain size decreases strength by blocking dislocation motion. In very high temperature applications such as jet engine turbines, single crystals are often used.
Superalloys
Materials operating in high-performance systems, such as jet engines, often reach extreme temperatures surpassing 1,000 °C, necessitating specialized material design. Superalloys based on cobalt, nickel, and iron have been engineered to be highly resistant to creep. The term "superalloy" generally refers to austenitic nickel-, iron-, or cobalt-based alloys that use either γ′ or γ″ precipitation strengthening to maintain strength at high temperature.
The γ′ phase is a cubic L12-structure phase that produces cuboidal precipitates. Superalloys often have a high (60–75%) volume fraction of γ′ precipitates. γ′ precipitates are coherent with the parent γ phase, and are resistant to shearing due to the development of an anti-phase boundary when the precipitate is sheared. The γ″ phase is a tetragonal Ni3Nb or Ni3V structure. The γ″ phase, however, is unstable above about 650 °C, so γ″ is less commonly used as a strengthening phase in high-temperature applications. Carbides are also used in polycrystalline superalloys to inhibit grain-boundary sliding.
Many other elements can be added to superalloys to tailor their properties. They can be used for solid solution strengthening, to reduce the formation of undesirable brittle precipitates, and to increase oxidation or corrosion resistance. Nickel-based superalloys have found widespread use in high-temperature, low-stress applications. Iron-based superalloys are generally not used at high temperatures, as the γ′ phase is not stable in the iron matrix, but are sometimes used at moderately high temperatures, since iron is significantly cheaper than nickel. A cobalt-based γ′ structure was found in 2006, allowing the development of cobalt-based superalloys, which are superior to nickel-based superalloys in corrosion resistance. However, in the base (cobalt–tungsten–aluminum) system, γ′ is only stable below about 900 °C, and cobalt-based superalloys tend to be weaker than their Ni counterparts.
Contributing factors in creep resistivity
1. Stages of creep
Based on the description of creep mechanisms and their three stages mentioned earlier, creep resistance generally can be accomplished by using materials whose tertiary stage is never reached in service, since at this stage the strain rate increases significantly with increasing stress. Therefore, a sound component design should satisfy the primary stage of creep, which has a relatively high initial creep rate that decreases with increasing exposure time, leading to the secondary stage, in which the creep rate is decelerated and reaches its minimum value via work hardening. The minimum creep rate is effectively a constant creep rate, which plays a crucial role in designing a component; its magnitude depends on temperature and stress. Two design norms based on the minimum creep rate are commonly applied to alloys: (1) the stress required to produce a creep rate of 10⁻⁴ %/h, and (2) the stress required to produce a creep rate of 10⁻⁵ %/h, at which 1% strain accumulates over roughly 11.5 years. The former standard has widely been used in the component design of turbine blades, while the latter is frequently used in designing steam turbines. One of the primary goals of creep tests is to determine the minimum creep rate at the secondary stage and to investigate the time required for ultimate failure of a component. When it comes to ceramic applications, there are no such standards for minimum creep rates, but ceramics are often chosen for high-temperature operation under load mainly because they possess long lifetimes. By acquiring information from creep tests, proper ceramic materials can be selected for the desired application to assure safe service and to estimate the period of secure service in high-temperature environments where structural thermostability is essential. By making the proper choice, suitable ceramic components can be selected, capable of operating under various conditions of high temperature and creep deformation.
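The arithmetic behind the two norms is simple enough to check directly; the sketch below assumes the rates quoted above and an allowable strain of 1%.

```python
# Time to accumulate 1% creep strain at the two design minimum creep rates.
norms = {
    "turbine-blade norm": 1e-4,   # creep rate, % per hour (assumed value)
    "steam-turbine norm": 1e-5,   # creep rate, % per hour (assumed value)
}
for name, rate in norms.items():
    hours = 1.0 / rate                 # hours to reach 1% strain at this rate
    years = hours / (24 * 365)
    print(f"{name}: {hours:.0f} h ~ {years:.1f} years")
```

The second norm gives 100,000 hours, about 11.4 years, matching the figure quoted above.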
2. Materials selection
Generally speaking, the structures of materials differ. Metallic materials have different structures from polymers or ceramics, and even within the same class of materials, different structures may exist at different temperatures. The differences in structure include differences in grains (for example, their sizes, shapes, and distributions), their crystalline or amorphous nature, and the dislocation and/or vacancy contents, which are prone to change following deformation. Since creep is a time-dependent process and differs from material to material, all of these parameters must be considered in materials selection for a specific application. For instance, materials with low dislocation content are suitable for creep resistance: dislocation glide and climb are reduced if the proper material is selected (ceramic materials are very popular in this regard). In terms of vacancies, the vacancy content depends not only on the chosen material but also on the component's service temperature. Vacancy-diffusion-controlled processes that promote creep can be categorized into grain-boundary diffusion (Coble creep) and lattice diffusion (Nabarro–Herring creep). The properties of dislocations and vacancies, their distributions throughout the structure, and their potential change due to long-time exposure to stress and temperature must therefore be considered seriously in materials selection and component design. The wide range of obstacles that can retard dislocation motion (grain boundaries in polycrystalline materials, solute atoms, precipitates, impurities, and strain fields originating from other dislocations or their pile-ups), which increase the lifetime of materials and make them more creep resistant, is likewise always at the top of the list in materials selection and component design. For instance, different types of vacancies in ceramic materials carry different charges stemming from the dominant chemical bonding, so existing or newly formed vacancies must be charge-balanced to maintain the overall neutrality of the final structure. Besides paying attention to individual dislocation and vacancy contents, the correlation between them is also worth exploring, since a dislocation's ability to climb depends heavily on how many vacancies are available. To recap, materials with low dislocation and vacancy contents must be selected and developed to obtain a practically creep-resistant component.
3. Various working conditions
Temperature
Creep is related to a material's melting point (Tm). Generally, a longer lifetime is expected for materials with higher melting points, because the diffusion processes underlying creep depend on temperature-dependent vacancy concentrations; at a given service temperature, diffusion is slower in materials with higher melting points. Ceramic materials are well known for their high melting points, which is why ceramics have attracted considerable attention for creep-resistant applications. Although it is widely known that creep starts at a temperature equal to 0.5Tm, the safe temperature to avoid the initiation of creep is 0.3Tm. Creep that starts below or at 0.5Tm is called "low temperature creep", because diffusion is not very progressive at such low temperatures and the creep that occurs is not diffusion-dominant but related to other mechanisms.
Time
As mentioned previously, creep is a time-dependent deformation. Unlike brittle fracture under tension and other sudden failure modes, creep does not occur abruptly, which is an advantage for designers. Over time, creep strain develops in a material exposed to stress at the temperature of the application, and it depends on the duration of the exposure. Thus, the creep rate is a function of time as well as of temperature and stress. This can be generalized as ε = F(t, T, σ), which tells the designer that all three parameters (time, temperature, and stress) act in concert, and all of them must be considered if a successful creep-resistant component is to be attained.
See also
Biomaterial
Biomechanics
Ductile–brittle transition temperature in materials science
Deformation mechanism
Downhill creep
Hysteresis
Larson–Miller parameter
Stress relaxation
Viscoelasticity
Viscoplasticity
Brittle–ductile transition zone
References
Further reading
Elasticity (physics)
Materials degradation
Deformation (mechanics)
Rubber properties | Creep (deformation) | [
"Physics",
"Materials_science",
"Engineering"
] | 8,553 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Materials science",
"Materials degradation",
"Physical properties"
] |
681,579 | https://en.wikipedia.org/wiki/Quasiparticle | In condensed matter physics, a quasiparticle is a concept used to describe a collective behavior of a group of particles that can be treated as if they were a single particle. Formally, quasiparticles and collective excitations are closely related phenomena that arise when a microscopically complicated system such as a solid behaves as if it contained different weakly interacting particles in vacuum.
For example, as an electron travels through a semiconductor, its motion is disturbed in a complex way by its interactions with other electrons and with atomic nuclei. The electron behaves as though it were a particle with a different effective mass travelling unperturbed in vacuum. Such an electron is called an electron quasiparticle. In another example, the aggregate motion of electrons in the valence band of a semiconductor or a hole band in a metal behaves as though the material instead contained positively charged quasiparticles called electron holes. Other quasiparticles or collective excitations include the phonon, a quasiparticle derived from the vibrations of atoms in a solid, and the plasmon, a quasiparticle derived from plasma oscillations.
These phenomena are typically called quasiparticles if they are related to fermions, and called collective excitations if they are related to bosons, although the precise distinction is not universally agreed upon. Thus, electrons and electron holes (fermions) are typically called quasiparticles, while phonons and plasmons (bosons) are typically called collective excitations.
The quasiparticle concept is important in condensed matter physics because it can simplify the many-body problem in quantum mechanics. The theory of quasiparticles was started by the Soviet physicist Lev Landau in the 1930s.
Overview
General introduction
Solids are made of only three kinds of particles: electrons, protons, and neutrons. None of these are quasiparticles; instead a quasiparticle is an emergent phenomenon that occurs inside the solid. Therefore, while it is quite possible to have a single particle (electron, proton, or neutron) floating in space, a quasiparticle can only exist inside interacting many-particle systems such as solids.
Motion in a solid is extremely complicated: Each electron and proton is pushed and pulled (by Coulomb's law) by all the other electrons and protons in the solid (which may themselves be in motion). It is these strong interactions that make it very difficult to predict and understand the behavior of solids (see many-body problem). On the other hand, the motion of a non-interacting classical particle is relatively simple; it would move in a straight line at constant velocity. This is the motivation for the concept of quasiparticles: The complicated motion of the real particles in a solid can be mathematically transformed into the much simpler motion of imagined quasiparticles, which behave more like non-interacting particles.
In summary, quasiparticles are a mathematical tool for simplifying the description of solids.
Relation to many-body quantum mechanics
The principal motivation for quasiparticles is that it is almost impossible to directly describe every particle in a macroscopic system. For example, a barely-visible (0.1 mm) grain of sand contains around 10¹⁷ nuclei and 10¹⁸ electrons. Each of these attracts or repels every other by Coulomb's law. In principle, the Schrödinger equation predicts exactly how this system will behave. But the Schrödinger equation in this case is a partial differential equation (PDE) on a 3×10¹⁸-dimensional vector space: one dimension for each coordinate (x, y, z) of each particle. Directly and straightforwardly trying to solve such a PDE is impossible in practice. Solving a PDE on a 2-dimensional space is typically much harder than solving a PDE on a 1-dimensional space (whether analytically or numerically); solving a PDE on a 3-dimensional space is significantly harder still; and thus solving a PDE on a 3×10¹⁸-dimensional space is quite impossible by straightforward methods.
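A one-line estimate shows how quickly a direct grid discretization becomes hopeless; the ten sample points per axis are of course an arbitrary (and very coarse) choice.

```python
# Grid points needed to tabulate a many-body wave function naively:
# 10 points per coordinate axis, 3 coordinates per particle -> 10**(3N) values.
for n_particles in (1, 2, 10, 30):
    print(f"N = {n_particles:>2}: 10^{3 * n_particles} grid points")
```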
One simplifying factor is that the system as a whole, like any quantum system, has a ground state and various excited states with higher and higher energy above the ground state. In many contexts, only the "low-lying" excited states, with energy reasonably close to the ground state, are relevant. This occurs because of the Boltzmann distribution, which implies that very-high-energy thermal fluctuations are unlikely to occur at any given temperature.
Quasiparticles and collective excitations are a type of low-lying excited state. For example, a crystal at absolute zero is in the ground state, but if one phonon is added to the crystal (in other words, if the crystal is made to vibrate slightly at a particular frequency) then the crystal is now in a low-lying excited state. The single phonon is called an elementary excitation. More generally, low-lying excited states may contain any number of elementary excitations (for example, many phonons, along with other quasiparticles and collective excitations).
When the material is characterized as having "several elementary excitations", this statement presupposes that the different excitations can be combined. In other words, it presupposes that the excitations can coexist simultaneously and independently. This is never exactly true. For example, a solid with two identical phonons does not have exactly twice the excitation energy of a solid with just one phonon, because the crystal vibration is slightly anharmonic. However, in many materials, the elementary excitations are very close to being independent. Therefore, as a starting point, they are treated as free, independent entities, and then corrections are included via interactions between the elementary excitations, such as "phonon-phonon scattering".
Therefore, using quasiparticles / collective excitations, instead of analyzing 10¹⁸ particles, one needs to deal with only a handful of somewhat-independent elementary excitations. It is, therefore, an effective approach to simplify the many-body problem in quantum mechanics. This approach is not useful for all systems, however. For example, in strongly correlated materials, the elementary excitations are so far from being independent that it is not even useful as a starting point to treat them as independent.
Distinction between quasiparticles and collective excitations
Usually, an elementary excitation is called a "quasiparticle" if it is a fermion and a "collective excitation" if it is a boson. However, the precise distinction is not universally agreed upon.
There is a difference in the way that quasiparticles and collective excitations are intuitively envisioned. A quasiparticle is usually thought of as being like a dressed particle: it is built around a real particle at its "core", but the behavior of the particle is affected by the environment. A standard example is the "electron quasiparticle": an electron in a crystal behaves as if it had an effective mass which differs from its real mass. On the other hand, a collective excitation is usually imagined to be a reflection of the aggregate behavior of the system, with no single real particle at its "core". A standard example is the phonon, which characterizes the vibrational motion of every atom in the crystal.
However, these two visualizations leave some ambiguity. For example, a magnon in a ferromagnet can be considered in one of two perfectly equivalent ways: (a) as a mobile defect (a misdirected spin) in a perfect alignment of magnetic moments or (b) as a quantum of a collective spin wave that involves the precession of many spins. In the first case, the magnon is envisioned as a quasiparticle, in the second case, as a collective excitation. However, both (a) and (b) are equivalent and correct descriptions. As this example shows, the intuitive distinction between a quasiparticle and a collective excitation is not particularly important or fundamental.
The problems arising from the collective nature of quasiparticles have also been discussed within the philosophy of science, notably in relation to the identity conditions of quasiparticles and whether they should be considered "real" by the standards of, for example, entity realism.
Effect on bulk properties
By investigating the properties of individual quasiparticles, it is possible to obtain a great deal of information about low-energy systems, including the flow properties and heat capacity.
In the heat capacity example, a crystal can store energy by forming phonons, and/or forming excitons, and/or forming plasmons, etc. Each of these is a separate contribution to the overall heat capacity.
History
The idea of quasiparticles originated in Lev Landau's theory of Fermi liquids, which was originally invented for studying liquid helium-3. For these systems a strong similarity exists between the notion of quasiparticle and dressed particles in quantum field theory. The dynamics of Landau's theory is defined by a kinetic equation of the mean-field type. A similar equation, the Vlasov equation, is valid for a plasma in the so-called plasma approximation. In the plasma approximation, charged particles are considered to be moving in the electromagnetic field collectively generated by all other particles, and hard collisions between the charged particles are neglected. When a kinetic equation of the mean-field type is a valid first-order description of a system, second-order corrections determine the entropy production, and generally take the form of a Boltzmann-type collision term, in which figure only "far collisions" between virtual particles. In other words, every type of mean-field kinetic equation, and in fact every mean-field theory, involves a quasiparticle concept.
Common examples
This section lists the most common examples of quasiparticles and collective excitations.
In solids, an electron quasiparticle is an electron as affected by the other forces and interactions in the solid. The electron quasiparticle has the same charge and spin as a "normal" (elementary particle) electron, and like a normal electron, it is a fermion. However, its mass can differ substantially from that of a normal electron; see the article effective mass. Its electric field is also modified, as a result of electric field screening. In many other respects, especially in metals under ordinary conditions, these so-called Landau quasiparticles closely resemble familiar electrons; as Crommie's "quantum corral" showed, an STM can image their interference upon scattering.
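The effective mass can be extracted numerically from any model band structure via m* = ħ²/(d²E/dk²). The sketch below uses a one-dimensional tight-binding band with made-up hopping energy and lattice constant, so the resulting mass is purely illustrative.

```python
import numpy as np

hbar = 1.054571817e-34        # reduced Planck constant, J*s
m_e = 9.1093837e-31           # free-electron mass, kg
t = 1.0 * 1.602176634e-19     # tight-binding hopping energy, J (hypothetical: 1 eV)
a = 3.0e-10                   # lattice constant, m (hypothetical)

k = np.linspace(-0.1 / a, 0.1 / a, 201)   # sample near the band minimum at k = 0
E = -2.0 * t * np.cos(k * a)              # model band: E(k) = -2t cos(ka)
d2E = np.gradient(np.gradient(E, k), k)   # numerical second derivative d^2E/dk^2
m_eff = hbar**2 / d2E[100]                # curvature evaluated at k = 0

print(m_eff / m_e)   # ~0.42 free-electron masses for these parameters
```

For this band the analytic result is m* = ħ²/(2ta²), which the numerical curvature reproduces.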
A hole is a quasiparticle consisting of the lack of an electron in a state; it is most commonly used in the context of empty states in the valence band of a semiconductor. A hole has the opposite charge of an electron.
A phonon is a collective excitation associated with the vibration of atoms in a rigid crystal structure. It is a quantum of a sound wave.
A magnon is a collective excitation associated with the electrons' spin structure in a crystal lattice. It is a quantum of a spin wave.
In materials, a photon quasiparticle is a photon as affected by its interactions with the material. In particular, the photon quasiparticle has a modified relation between wavelength and energy (dispersion relation), as described by the material's index of refraction. It may also be termed a polariton, especially near a resonance of the material. For example, an exciton-polariton is a superposition of an exciton and a photon; a phonon-polariton is a superposition of a phonon and a photon.
A plasmon is a collective excitation, which is the quantum of plasma oscillations (wherein all the electrons simultaneously oscillate with respect to all the ions).
A polaron is a quasiparticle which comes about when an electron interacts with the polarization of its surrounding ions.
An exciton is an electron and hole bound together.
See also
Fractionalization
List of quasiparticles
Mean-field theory
Pseudoparticle
Composite fermion
Composite boson
References
Further reading
L. D. Landau, Soviet Phys. JETP. 3: 920 (1957)
L. D. Landau, Soviet Phys. JETP. 5: 101 (1957)
A. A. Abrikosov, L. P. Gor'kov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (1963, 1975). Prentice-Hall, New Jersey; Dover Publications, New York, New York.
D. Pines, and P. Nozières, The Theory of Quantum Liquids (1966). W.A. Benjamin, New York. Volume I: Normal Fermi Liquids (1999). Westview Press, Boulder, Colorado.
J. W. Negele, and H. Orland, Quantum Many-Particle Systems (1998). Westview Press, Boulder, Colorado.
External links
PhysOrg.com – Scientists find new 'quasiparticles'
Curious 'quasiparticles' baffle physicists by Jacqui Hayes, Cosmos 6 June 2008. Accessed June 2008
Physical phenomena
Condensed matter physics
Quantum phases
Mesoscopic physics | Quasiparticle | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,778 | [
"Quantum phases",
"Matter",
"Physical phenomena",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Quasiparticles",
"Mesoscopic physics",
"Subatomic particles"
] |
681,582 | https://en.wikipedia.org/wiki/Effective%20field%20theory | In physics, an effective field theory is a type of approximation, or effective theory, for an underlying physical theory, such as a quantum field theory or a statistical mechanics model. An effective field theory includes the appropriate degrees of freedom to describe physical phenomena occurring at a chosen length scale or energy scale, while ignoring substructure and degrees of freedom at shorter distances (or, equivalently, at higher energies). Intuitively, one averages over the behavior of the underlying theory at shorter length scales to derive what is hoped to be a simplified model at longer length scales. Effective field theories typically work best when there is a large separation between length scale of interest and the length scale of the underlying dynamics. Effective field theories have found use in particle physics, statistical mechanics, condensed matter physics, general relativity, and hydrodynamics. They simplify calculations, and allow treatment of dissipation and radiation effects.
Renormalization group
Presently, effective field theories are discussed in the context of the renormalization group (RG), where the process of integrating out short distance degrees of freedom is made systematic. Although this method is not sufficiently concrete to allow the actual construction of effective field theories, the gross understanding of their usefulness becomes clear through an RG analysis. This method also lends credence to the main technique of constructing effective field theories, through the analysis of symmetries. If there is a single mass scale M in the microscopic theory, then the effective field theory can be seen as an expansion in 1/M. The construction of an effective field theory accurate to some power of 1/M requires a new set of free parameters at each order of the expansion in 1/M. This technique is useful for scattering or other processes where the maximum momentum scale k satisfies the condition k/M ≪ 1. Since effective field theories are not valid at small length scales, they need not be renormalizable. Indeed, the ever-expanding number of parameters at each order in 1/M required for an effective field theory means that they are generally not renormalizable in the same sense as quantum electrodynamics, which requires only the renormalization of two parameters.
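Schematically, such an expansion organizes the effective Lagrangian by operator dimension; the notation below is a generic sketch (Wilson coefficients c_i and local operators O_i of mass dimension d_i), not tied to any specific theory.

```latex
\mathcal{L}_\text{eff} \;=\; \mathcal{L}_{d \le 4}
\;+\; \sum_i \frac{c_i}{M^{\,d_i - 4}}\, \mathcal{O}_i ,
\qquad
\mathcal{A}(k) \;\sim\; \sum_n a_n \left(\frac{k}{M}\right)^{n} .
```

Each additional order in k/M brings in a finite new set of coefficients, which is why the expansion is controlled only when k ≪ M.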
Examples
Fermi theory of beta decay
The best-known example of an effective field theory is the Fermi theory of beta decay. This theory was developed during the early study of weak decays of nuclei when only the hadrons and leptons undergoing weak decay were known. The typical reactions studied were:
This theory posited a pointlike interaction between the four fermions involved in these reactions. The theory had great phenomenological success and was eventually understood to arise from the gauge theory of electroweak interactions, which forms a part of the standard model of particle physics. In this more fundamental theory, the interactions are mediated by a flavour-changing gauge boson, the W±. The immense success of the Fermi theory was because the W particle has a mass of about 80 GeV, whereas the early experiments were all done at an energy scale of less than 10 MeV. Such a separation of scales, by over 3 orders of magnitude, has not been met in any other situation as yet.
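The matching between the two descriptions can be sketched as follows: integrating out the heavy W boson collapses its propagator to a point, fixing the Fermi constant in terms of the weak gauge coupling g and the W mass. The four-fermion structure shown is the standard V−A form.

```latex
\mathcal{L}_\text{Fermi}
= -\frac{G_F}{\sqrt{2}}
\left[\bar{\psi}_1 \gamma^\mu (1-\gamma^5)\, \psi_2\right]
\left[\bar{\psi}_3 \gamma_\mu (1-\gamma^5)\, \psi_4\right],
\qquad
\frac{G_F}{\sqrt{2}} = \frac{g^2}{8 M_W^2}.
```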
BCS theory of superconductivity
Another famous example is the BCS theory of superconductivity. Here the underlying theory is the theory of electrons in a metal interacting with lattice vibrations called phonons. The phonons cause attractive interactions between some electrons, causing them to form Cooper pairs. The length scale of these pairs is much larger than the wavelength of phonons, making it possible to neglect the dynamics of phonons and construct a theory in which two electrons effectively interact at a point. This theory has had remarkable success in describing and predicting the results of experiments on superconductivity.
Gravitational field theories
General relativity (GR) itself is expected to be the low energy effective field theory of a full theory of quantum gravity, such as string theory or loop quantum gravity. The expansion scale is the Planck mass.
Effective field theories have also been used to simplify problems in general relativity, in particular in calculating the gravitational wave signature of inspiralling finite-sized objects. The most common EFT in GR is non-relativistic general relativity (NRGR), which is similar to the post-Newtonian expansion. Another common GR EFT is the extreme mass ratio (EMR), which in the context of the inspiralling problem is called extreme mass ratio inspiral.
Other examples
Presently, effective field theories are written for many situations.
One major branch of nuclear physics is quantum hadrodynamics, where the interactions of hadrons are treated as a field theory, which should be derivable from the underlying theory of quantum chromodynamics (QCD). Quantum hadrodynamics is the theory of the nuclear force, similarly to quantum chromodynamics being the theory of the strong interaction and quantum electrodynamics being the theory of the electromagnetic force. Due to the smaller separation of length scales here, this effective theory has some classificatory power, but not the spectacular success of the Fermi theory.
In particle physics the effective field theory of QCD called chiral perturbation theory has had better success. This theory deals with the interactions of hadrons with pions or kaons, which are the Goldstone bosons of spontaneous chiral symmetry breaking. The expansion parameter is the pion energy/momentum.
For hadrons containing one heavy quark (such as the bottom or charm), an effective field theory which expands in powers of the quark mass, called the heavy quark effective theory (HQET), has been found useful.
For hadrons containing two heavy quarks, an effective field theory which expands in powers of the relative velocity of the heavy quarks, called non-relativistic QCD (NRQCD), has been found useful, especially when used in conjunction with lattice QCD.
For hadron reactions with light energetic (collinear) particles, the interactions with low-energetic (soft) degrees of freedom are described by the soft-collinear effective theory (SCET).
Much of condensed matter physics consists of writing effective field theories for the particular property of matter being studied.
Dissipationless hydrodynamics can also be treated using effective field theories.
See also
Form factor (quantum field theory)
Renormalization group
Quantum field theory
Quantum triviality
Ginzburg–Landau theory
References
Books
A. A. Petrov and A. Blechman, Effective Field Theories, Singapore: World Scientific (2016).
C. P. Burgess, Introduction to Effective Field Theory, Cambridge University Press (2020).
External links
Effective field theory (Interactions, Symmetry Breaking and Effective Fields - from Quarks to Nuclei. an Internet Lecture by Jacek Dobaczewski)
Quantum field theory
Statistical mechanics
Renormalization group
Chemical physics
Nuclear physics
Condensed matter physics | Effective field theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,411 | [
"Quantum field theory",
"Physical phenomena",
"Matter",
"Applied and interdisciplinary physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Nuclear physics",
"Statistical mechanics",
"Chem... |
681,895 | https://en.wikipedia.org/wiki/Quantum%20geometry | In theoretical physics, quantum geometry is the set of mathematical concepts that generalize geometry to describe physical phenomena at distance scales comparable to the Planck length. At such distances, quantum mechanics has a profound effect on physical phenomena.
Quantum gravity
Each theory of quantum gravity uses the term "quantum geometry" in a slightly different fashion. String theory, a leading candidate for a quantum theory of gravity, uses it to describe exotic phenomena such as T-duality and other geometric dualities, mirror symmetry, topology-changing transitions, minimal possible distance scale, and other effects that challenge intuition. More technically, quantum geometry refers to the shape of a spacetime manifold as experienced by D-branes, which includes quantum corrections to the metric tensor, such as the worldsheet instantons. For example, the quantum volume of a cycle is computed from the mass of a brane wrapped on this cycle.
In an alternative approach to quantum gravity called loop quantum gravity (LQG), the phrase "quantum geometry" usually refers to the formalism within LQG where the observables that capture the information about the geometry are well-defined operators on a Hilbert space. In particular, certain physical observables, such as the area, have a discrete spectrum. LQG is non-commutative.
It is possible (but considered unlikely) that this strictly quantized understanding of geometry is consistent with the quantum picture of geometry arising from string theory.
Another approach, which tries to reconstruct the geometry of space-time from "first principles" is Discrete Lorentzian quantum gravity.
Quantum states as differential forms
Differential forms are used to express quantum states, using the wedge product: the state is the wave function multiplied by the volume form, ψ(r, t) dV,
where the position vector is r = (q¹, q², q³),
the differential volume element is dV = dq¹ ∧ dq² ∧ dq³,
and q¹, q², q³ are an arbitrary set of coordinates; the upper indices indicate contravariance, lower indices indicate covariance, so explicitly the quantum state in differential form is ψ(q¹, q², q³, t) dq¹ ∧ dq² ∧ dq³.
The overlap integral is given by ⟨χ|ψ⟩ = ∫ χ*ψ d³r;
in differential form this is ⟨χ|ψ⟩ = ∫ χ*ψ dV.
The probability of finding the particle in some region of space R is given by the integral over that region, P = ∫R ψ*ψ dV,
provided the wave function is normalized. When R is all of 3d position space, the integral must be 1 if the particle exists.
Differential forms are an approach for describing the geometry of curves and surfaces in a coordinate independent way. In quantum mechanics, idealized situations occur in rectangular Cartesian coordinates, such as the potential well, particle in a box, quantum harmonic oscillator, and more realistic approximations in spherical polar coordinates such as electrons in atoms and molecules. For generality, a formalism which can be used in any coordinate system is useful.
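As a concrete check of coordinate-independent normalization, the sketch below integrates |ψ|² for the hydrogen 1s state against the spherical-coordinate volume form r² sin θ dr ∧ dθ ∧ dφ using SymPy; the state and coordinates are the standard textbook ones.

```python
import sympy as sp

r, theta, phi, a0 = sp.symbols('r theta phi a0', positive=True)

# Hydrogen 1s wave function in spherical polar coordinates.
psi = sp.exp(-r / a0) / sp.sqrt(sp.pi * a0**3)

# Volume form pulled back to spherical coordinates: dV = r^2 sin(theta) dr dtheta dphi.
integrand = psi * psi * r**2 * sp.sin(theta)   # psi is real here, so psi* psi = psi**2

prob = sp.integrate(integrand,
                    (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(sp.simplify(prob))   # -> 1: the particle is found somewhere in all of space
```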
See also
Noncommutative geometry
Quantum spacetime
References
Further reading
Supersymmetry, Demystified, P. Labelle, McGraw-Hill (USA), 2010,
Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004,
Quantum Mechanics Demystified, D. McMahon, Mc Graw Hill (USA), 2006,
Quantum Field Theory, D. McMahon, Mc Graw Hill (USA), 2008,
External links
Space and Time: From Antiquity to Einstein and Beyond
Quantum Geometry and its Applications
Quantum gravity
Quantum mechanics
Mathematical physics | Quantum geometry | [
"Physics",
"Mathematics"
] | 645 | [
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Mathematical physics",
"Physics beyond the Standard Model"
] |
681,962 | https://en.wikipedia.org/wiki/Coupling%20constant | In physics, a coupling constant or gauge coupling parameter (or, more simply, a coupling), is a number that determines the strength of the force exerted in an interaction. Originally, the coupling constant related the force acting between two static bodies to the "charges" of the bodies (i.e. the electric charge for electrostatic and the mass for Newtonian gravity) divided by the distance squared, , between the bodies; thus: in for Newtonian gravity and in for electrostatic. This description remains valid in modern physics for linear theories with static bodies and massless force carriers.
A modern and more general definition uses the Lagrangian L (or equivalently the Hamiltonian H) of a system. Usually, the L (or H) of a system describing an interaction can be separated into a kinetic part and an interaction part: L = Lkin + Lint (or H = Hkin + Hint).
In field theory, Lint always contains three field terms or more, expressing for example that an initial electron (field 1) interacts with a photon (field 2) producing the final state of the electron (field 3). In contrast, the kinetic part Lkin always contains only two fields, expressing the free propagation of an initial particle (field 1) into a later state (field 2).
The coupling constant determines the magnitude of the Lint part with respect to the Lkin part (or between two sectors of the interaction part if several fields that couple differently are present). For example, the electric charge of a particle is a coupling constant that characterizes an interaction with two charge-carrying fields and one photon field (hence the common Feynman diagram with two arrows and one wavy line). Since photons mediate the electromagnetic force, this coupling determines how strongly electrons feel such a force, and its value is fixed by experiment. By looking at the QED Lagrangian, one sees that indeed the coupling e sets the proportionality between the kinetic term and the interaction term.
A coupling plays an important role in dynamics. For example, one often sets up hierarchies of approximation based on the importance of various coupling constants. In the motion of a large lump of magnetized iron, the magnetic forces may be more important than the gravitational forces because of the relative magnitudes of the coupling constants. However, in classical mechanics, one usually makes these decisions directly by comparing forces. Another important example of the central role played by coupling constants is that they are the expansion parameters for first-principle calculations based on perturbation theory, which is the main method of calculation in many branches of physics.
Fine-structure constant
Couplings arise naturally in a quantum field theory. A special role is played in relativistic quantum theories by couplings that are dimensionless; i.e., are pure numbers. An example of such a dimensionless constant is the fine-structure constant,
$$\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c},$$
where $e$ is the charge of an electron, $\varepsilon_0$ is the permittivity of free space, $\hbar$ is the reduced Planck constant and $c$ is the speed of light. This constant is proportional to the square of the coupling strength of the charge of an electron to the electromagnetic field.
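A minimal numerical check of this formula, with the standard SI/CODATA constant values assumed here for illustration:

```python
# Evaluate alpha = e^2 / (4*pi*eps0*hbar*c) from standard constants.
import math

e    = 1.602176634e-19   # elementary charge, C (exact in the 2019 SI)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.9f}")    # ~0.007297353
print(f"1/alpha = {1/alpha:.3f}")  # ~137.036
```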
Gauge coupling
In a non-abelian gauge theory, the gauge coupling parameter, $g$, appears in the Lagrangian as
$$\mathcal{L} = -\frac{1}{4g^2}\,\mathrm{Tr}\,G^{\mu\nu}G_{\mu\nu}$$
(where $G$ is the gauge field tensor) in some conventions. In another widely used convention, $G$ is rescaled so that the coefficient of the kinetic term is $1/4$ and $g$ appears in the covariant derivative. This should be understood to be similar to a dimensionless version of the elementary charge, defined as $\sqrt{4\pi\alpha} = e/\sqrt{\varepsilon_0\hbar c}$.
Weak and strong coupling
In a quantum field theory with a coupling g, if g is much less than 1, the theory is said to be weakly coupled. In this case, it is well described by an expansion in powers of g, called perturbation theory. If the coupling constant is of order one or larger, the theory is said to be strongly coupled. An example of the latter is the hadronic theory of strong interactions (which is why it is called strong in the first place). In such a case, non-perturbative methods need to be used to investigate the theory.
In quantum field theory, the dimension of the coupling plays an important role in the renormalizability property of the theory, and therefore on the applicability of perturbation theory. If the coupling is dimensionless in the natural units system (i.e. $c = 1$, $\hbar = 1$), as in QED, QCD, and the weak interaction, the theory is renormalizable and all the terms of the expansion series are finite (after renormalization). If the coupling is dimensionful, as e.g. in gravity (Newton's constant $G_\text{N}$), the Fermi theory (the Fermi constant $G_\text{F}$) or the chiral perturbation theory of the strong force (the pion decay constant $F_\pi$), then the theory is usually not renormalizable. Perturbation expansions in the coupling might still be feasible, albeit within limitations, as most of the higher order terms of the series will be infinite.
Running coupling
One may probe a quantum field theory at short times or distances by changing the wavelength or momentum, k, of the probe used. With a high frequency (i.e., short time) probe, one sees virtual particles taking part in every process. This apparent violation of the conservation of energy may be understood heuristically by examining the uncertainty relation
$$\Delta E\,\Delta t \geq \frac{\hbar}{2},$$
which virtually allows such violations at short times.
The foregoing remark only applies to some formulations of quantum field theory, in particular, canonical quantization in the interaction picture.
In other formulations, the same event is described by "virtual" particles going off the mass shell. Such processes renormalize the coupling and make it dependent on the energy scale, μ, at which one probes the coupling. The dependence of a coupling g(μ) on the energy-scale is known as "running of the coupling". The theory of the running of couplings is given by the renormalization group, though it should be kept in mind that the renormalization group is a more general concept describing any sort of scale variation in a physical system (see the full article for details).
Phenomenology of the running of a coupling
The renormalization group provides a formal way to derive the running of a coupling, yet the phenomenology underlying that running can be understood intuitively. As explained in the introduction, the coupling constant sets the magnitude of a force which behaves with distance as $1/r^2$. The $1/r^2$ dependence was first explained by Faraday as the decrease of the force flux: at a point B distant by $r$ from the body A generating a force, the force is proportional to the field flux going through an elementary surface S perpendicular to the line AB. As the flux spreads uniformly through space, it decreases according to the solid angle subtended by the surface S. In the modern view of quantum field theory, the $1/r^2$ comes from the expression in position space of the propagator of the force carriers. For relatively weakly-interacting bodies, as is generally the case in electromagnetism or gravity or the nuclear interactions at short distances, the exchange of a single force carrier is a good first approximation of the interaction between the bodies, and classically the interaction will obey a $1/r^2$ law (note that if the force carrier is massive, there is an additional $e^{-mr}$ dependence). When the interactions are more intense (e.g. the charges or masses are larger, or $r$ is smaller) or happen over briefer time spans (smaller $t$), more force carriers are involved or particle pairs are created, resulting in the break-down of the $1/r^2$ behavior. The classical equivalent is that the field flux does not propagate freely in space any more but e.g. undergoes screening from the charges of the extra virtual particles, or interactions between these virtual particles. It is convenient to separate the first-order $1/r^2$ law from this extra $r$-dependence. The latter is then accounted for by being included in the coupling, which then becomes $r$-dependent (or equivalently μ-dependent). Since the additional particles involved beyond the single force carrier approximation are always virtual, i.e. transient quantum field fluctuations, one understands why the running of a coupling is a genuine quantum and relativistic phenomenon, namely an effect of the high-order Feynman diagrams on the strength of the force.
Since a running coupling effectively accounts for microscopic quantum effects, it is often called an effective coupling, in contrast to the bare coupling (constant) present in the Lagrangian or Hamiltonian.
Beta functions
In quantum field theory, a beta function, β(g), encodes the running of a coupling parameter, g. It is defined by the relation
$$\beta(g) = \mu\frac{\partial g}{\partial \mu} = \frac{\partial g}{\partial \ln \mu},$$
where μ is the energy scale of the given physical process. If the beta functions of a quantum field theory vanish, then the theory is scale-invariant.
The coupling parameters of a quantum field theory can flow even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale-invariance is anomalous.
QED and the Landau pole
If a beta function is positive, the corresponding coupling increases with increasing energy. An example is quantum electrodynamics (QED), where one finds by using perturbation theory that the beta function is positive. In particular, at low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127.
Moreover, the perturbative beta function tells us that the coupling continues to increase, and QED becomes strongly coupled at high energy. In fact the coupling apparently becomes infinite at some finite energy. This phenomenon was first noted by Lev Landau, and is called the Landau pole. However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid. The true scaling behaviour of at large energies is not known.
QCD and asymptotic freedom
In non-abelian gauge theories, the beta function can be negative, as first found by Frank Wilczek, David Politzer and David Gross. An example of this is the beta function for quantum chromodynamics (QCD), and as a result the QCD coupling decreases at high energies.
Furthermore, the coupling decreases logarithmically, a phenomenon known as asymptotic freedom (the discovery of which was awarded with the Nobel Prize in Physics in 2004). The coupling decreases approximately as
$$\alpha_\text{s}(k^2) \approx \frac{1}{\beta_0 \ln(k^2/\Lambda^2)},$$
where $k$ is the energy of the process involved, $\Lambda$ is the QCD scale, and β0 is a constant first computed by Wilczek, Gross and Politzer.
Conversely, the coupling increases with decreasing energy. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory. Hence, the actual value of the coupling constant is only defined at a given energy scale. In QCD, the Z boson mass scale is typically chosen, providing a value of the strong coupling constant of αs(MZ2 ) = 0.1179 ± 0.0010. In 2023 the ATLAS collaboration reported the most precise experimental measurement so far. The most precise determinations overall stem from lattice QCD calculations, studies of tau-lepton decay, and the reinterpretation of the transverse momentum spectrum of the Z boson.
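A minimal sketch of this running at leading order, assuming the one-loop formula α_s(Q) = α_s(M_Z) / (1 + b0 α_s(M_Z) ln(Q²/M_Z²)) with b0 = (33 − 2n_f)/(12π) and a fixed flavour number; precision determinations use multi-loop running and flavour thresholds, so this is only illustrative:

```python
import math

def alpha_s(q_gev, alpha_mz=0.1179, mz=91.1876, nf=5):
    """One-loop (leading-order) running of the strong coupling.

    q_gev    : energy scale Q in GeV
    alpha_mz : reference value at the Z mass (the figure quoted above)
    nf       : number of active quark flavours, held fixed here
    """
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(q_gev**2 / mz**2))

# Coupling grows toward low energy and shrinks toward high energy.
for q in (10.0, 91.1876, 1000.0):
    print(f"alpha_s({q:g} GeV) ~ {alpha_s(q):.4f}")
```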
QCD scale
In quantum chromodynamics (QCD), the quantity Λ is called the QCD scale. Its value is conventionally quoted for three "active" quark flavors, viz. when the energy–momentum involved in the process allows production of only the up, down and strange quarks, but not the heavier quarks; this corresponds to energies below 1.275 GeV. At higher energy, Λ is smaller, e.g. above the bottom quark mass of about 5 GeV. The meaning of the minimal subtraction (MS) scheme scale ΛMS is given in the article on dimensional transmutation. The proton-to-electron mass ratio is primarily determined by the QCD scale.
String theory
A remarkably different situation exists in string theory since it includes a dilaton. An analysis of the string spectrum shows that this field must be present, either in the bosonic string or the NS–NS sector of the superstring. Using vertex operators, it can be seen that exciting this field is equivalent to adding a term to the action where a scalar field couples to the Ricci scalar. The string coupling is therefore not a single number but an entire function's worth of coupling constants. These coupling constants are not pre-determined, adjustable, or universal parameters; they depend on space and time in a way that is determined dynamically. Sources that describe the string coupling as if it were fixed are usually referring to the vacuum expectation value. This is free to have any value in the bosonic theory where there is no superpotential.
See also
Canonical quantization, renormalization and dimensional regularization
Quantum field theory, especially quantum electrodynamics and quantum chromodynamics
Gluon field, Gluon field strength tensor
References
External links
The Nobel Prize in Physics 2004 – Information for the Public
Department of Physics and Astronomy of the Georgia State University – Coupling Constants for the Fundamental Forces
An Introduction to Quantum Field Theory, by M.E. Peskin and D.V. Schroeder.
Quantum field theory
Quantum mechanics
Statistical mechanics
Renormalization group | Coupling constant | [
"Physics"
] | 2,734 | [
"Quantum field theory",
"Physical phenomena",
"Theoretical physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics"
] |
682,635 | https://en.wikipedia.org/wiki/Wavelength-dispersive%20X-ray%20spectroscopy | Wavelength-dispersive X-ray spectroscopy (WDXS or WDS) is a non-destructive analysis technique used to obtain elemental information about a range of materials by measuring characteristic x-rays within a small wavelength range. The technique generates a spectrum in which the peaks correspond to specific x-ray lines and elements can be easily identified. WDS is primarily used in chemical analysis, wavelength dispersive X-ray fluorescence (WDXRF) spectrometry, electron microprobes, scanning electron microscopes, and high precision experiments for testing atomic and plasma physics.
Theory
Wavelength-dispersive X-ray spectroscopy is based on known principles of how the characteristic x-rays are generated by a sample and how the x-rays are measured.
X-ray generation
X-rays are generated when an electron beam of high enough energy dislodges an electron from an inner orbital within an atom or ion, creating a void. This void is filled when an electron from a higher orbital releases energy and drops down to replace the dislodged electron. The energy difference between the two orbitals is characteristic of the electron configuration of the atom or ion and can be used to identify the atom or ion.
X-ray measurement
According to Bragg's law, when an X-ray beam of wavelength "λ" strikes the surface of a crystal at an angle "Θ" and the crystal has atomic lattice planes a distance "d" apart, then constructive interference will result in a beam of diffracted x-rays that will be emitted from the crystal at angle "Θ" if
nλ = 2d sin Θ, where n is an integer.
This means that a crystal with a known lattice size will deflect a beam of x-rays from a specific type of sample at a pre-determined angle. The x-ray beam can be measured by placing a detector (usually a scintillation counter or a proportional counter) in the path of the deflected beam and, since each element has a distinctive x-ray wavelength, multiple elements can be determined by having multiple crystals and multiple detectors.
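A small sketch of the Bragg geometry above; the LiF(200) spacing and Cu Kα wavelength are assumed example values, not parameters of any particular spectrometer:

```python
# Bragg's law: n * lambda = 2 * d * sin(theta).
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """Return the diffraction angle theta (degrees) for order n, or None
    if the geometry cannot satisfy Bragg's law (sin theta > 1)."""
    s = n * wavelength_nm / (2 * d_nm)
    if s > 1:
        return None
    return math.degrees(math.asin(s))

d_lif = 0.2014       # assumed LiF(200) interplanar spacing d, nm (2d ~ 0.403 nm)
cu_k_alpha = 0.1542  # assumed Cu K-alpha wavelength, nm

print(bragg_angle_deg(cu_k_alpha, d_lif))  # ~22.5 degrees
```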
To improve accuracy the x-ray beams are usually collimated by parallel copper blades called a Söller collimator. The single crystal, the specimen, and the detector are mounted precisely on a goniometer with the distance between the specimen and the crystal equal to the distance between the crystal and the detector. It is usually operated under vacuum to reduce the absorption of soft radiation (low-energy photons) by the air and thus increase the sensitivity for the detection and quantification of light elements (between boron and oxygen). The technique generates a spectrum with peaks corresponding to x-ray lines. This is compared with reference spectra to determine the elemental composition of the sample.
As the atomic number of the element increases, there are more possible electrons at different energy levels that can be ejected, resulting in x-rays with different wavelengths. This creates spectra with multiple lines, one for each energy level. The largest peak in the spectrum is labelled Kα, the next Kβ, and so on.
Applications
Applications include analysis of catalysts, cement, food, metals, mining and mineral samples, petroleum, plastics, semiconductors, and wood.
Limitations
Analysis is generally limited to a very small area of the sample, although modern automated equipment often uses grid patterns to cover larger analysis areas.
The technique cannot distinguish between isotopes of an element, as the electron configurations of an element's isotopes are identical.
It cannot measure the valence state of the element, for example Fe2+ vs Fe3+.
In certain elements, the Kα line of one element might overlap the Kβ of another, and hence if the first element is present, the second element cannot be reliably detected (for example, V Kα overlaps Ti Kβ).
See also
X-ray spectroscopy
References
Emission spectroscopy
X-ray spectroscopy | Wavelength-dispersive X-ray spectroscopy | [
"Physics",
"Chemistry"
] | 792 | [
"Emission spectroscopy",
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
682,642 | https://en.wikipedia.org/wiki/Energy-dispersive%20X-ray%20spectroscopy | Energy-dispersive X-ray spectroscopy (EDS, EDX, EDXS or XEDS), sometimes called energy dispersive X-ray analysis (EDXA or EDAX) or energy dispersive X-ray microanalysis (EDXMA), is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on an interaction of some source of X-ray excitation and a sample. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing a unique set of peaks on its electromagnetic emission spectrum (which is the main principle of spectroscopy). The peak positions are predicted by Moseley's law with an accuracy much better than the experimental resolution of a typical EDX instrument.
To stimulate the emission of characteristic X-rays from a specimen, a beam of electrons or X-rays is focused into the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer. As the energies of the X-rays are characteristic of the difference in energy between the two shells and of the atomic structure of the emitting element, EDS allows the elemental composition of the specimen to be measured.
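A rough illustration of Moseley's law for Kα lines, using the hydrogen-like estimate E ≈ 10.2 eV · (Z − 1)²; this approximation drifts for heavy elements and is not the calibration used by real EDS software:

```python
# K-alpha energy from Moseley's law:
# E ~ 13.6 eV * (Z - 1)^2 * (1/1^2 - 1/2^2) = 10.2 eV * (Z - 1)^2.

def k_alpha_energy_kev(z):
    """Approximate K-alpha X-ray energy (keV) for atomic number z."""
    return 10.2 * (z - 1) ** 2 / 1000.0

for name, z in [("Al", 13), ("Ti", 22), ("Fe", 26), ("Cu", 29)]:
    # Measured values are ~1.49, ~4.51, ~6.40 and ~8.05 keV respectively.
    print(f"{name} (Z={z}): ~{k_alpha_energy_kev(z):.2f} keV")
```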
Equipment
Four primary components of the EDS setup are
the excitation source (electron beam or x-ray beam)
the X-ray detector
the pulse processor
the analyzer.
Electron beam excitation is used in electron microscopes, scanning electron microscopes (SEM) and scanning transmission electron microscopes (STEM). X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers. A detector is used to convert X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them onto an analyzer for data display and analysis. The most common detector used to be a Si(Li) detector cooled to cryogenic temperatures with liquid nitrogen. Now, newer systems are often equipped with silicon drift detectors (SDD) with Peltier cooling systems.
Hazards and Safety
High Voltage: SEM-EDX operates at high voltages (typically several kilovolts), which can pose a risk of electric shock.
X-ray Radiation: While SEM-EDX does not use as high a voltage as some X-ray techniques, it still produces X-rays that can be harmful with prolonged exposure. Proper shielding and safety measures are necessary.
Sample Preparation: Handling and preparation of samples can involve hazardous chemicals or materials. Proper personal protective equipment (PPE) should be used.
Vacuum System: The vacuum system used in SEM-EDX can implode if not properly maintained, leading to potential hazards.
Cryogenic Hazards: Some samples may require cryogenic techniques for analysis, which can pose risks of cold burns or asphyxiation if not appropriately handled.
Mechanical Hazards: If used incorrectly, moving parts in the SEM can cause injury.
Fire and Explosion Risks: Some samples, particularly those involving flammable materials, can pose fire or explosion risks under vacuum conditions.
Ergonomic Risks: Prolonged use of SEM-EDX can lead to ergonomic hazards if the workstation is not correctly set up for the user's comfort and safety.
Technological variants
The excess energy of the electron that migrates to an inner shell to fill the newly created hole can do more than emit an X-ray. Often, instead of X-ray emission, the excess energy is transferred to a third electron from a further outer shell, prompting its ejection. This ejected species is called an Auger electron, and the method for its analysis is known as Auger electron spectroscopy (AES).
X-ray photoelectron spectroscopy (XPS) is another close relative of EDS, utilizing ejected electrons in a manner similar to that of AES. Information on the quantity and kinetic energy of ejected electrons is used to determine the binding energy of these now-liberated electrons, which is element-specific and allows chemical characterization of a sample.
EDS is often contrasted with its spectroscopic counterpart, wavelength dispersive X-ray spectroscopy (WDS). WDS differs from EDS in that it uses the diffraction of X-rays on special crystals to separate its raw data into spectral components (wavelengths). WDS has a much finer spectral resolution than EDS. WDS also avoids the problems associated with artifacts in EDS (false peaks, noise from the amplifiers, and microphonics).
A high-energy beam of charged particles such as electrons or protons can be used to excite a sample rather than X-rays. This is called particle-induced X-ray emission or PIXE.
Accuracy
EDS can be used to determine which chemical elements are present in a sample, and can be used to estimate their relative abundance. EDS also helps to measure multi-layer coating thickness of metallic coatings and analysis of various alloys. The accuracy of this quantitative analysis of sample composition is affected by various factors. Many elements will have overlapping X-ray emission peaks (e.g., Ti Kβ and V Kα, Mn Kβ and Fe Kα). The accuracy of the measured composition is also affected by the nature of the sample. X-rays are generated by any atom in the sample that is sufficiently excited by the incoming beam. These X-rays are emitted in all directions (isotropically), and so they may not all escape the sample. The likelihood of an X-ray escaping the specimen, and thus being available to detect and measure, depends on the energy of the X-ray and the composition, amount, and density of material it has to pass through to reach the detector. Because of this X-ray absorption effect and similar effects, accurate estimation of the sample composition from the measured X-ray emission spectrum requires the application of quantitative correction procedures, which are sometimes referred to as matrix corrections.
Emerging technology
There is a trend towards a newer EDS detector, called the silicon drift detector (SDD). The SDD consists of a high-resistivity silicon chip where electrons are driven to a small collecting anode. The advantage lies in the extremely low capacitance of this anode, which enables shorter processing times and allows very high throughput. Benefits of the SDD include:
High count rates and processing,
Better resolution than traditional Si(Li) detectors at high count rates,
Lower dead time (time spent on processing X-ray event),
Faster analytical capabilities and more precise X-ray maps or particle data collected in seconds,
Ability to be stored and operated at relatively high temperatures, eliminating the need for liquid nitrogen cooling.
Because the capacitance of the SDD chip is independent of the active area of the detector, much larger SDD chips can be utilized (40 mm2 or more). This allows for even higher count rate collection. Further benefits of large area chips include:
Minimizing SEM beam current allowing for optimization of imaging under analytical conditions,
Reduced sample damage and
Smaller beam interaction and improved spatial resolution for high speed maps.
Where the X-ray energies of interest are in excess of ~ 30 keV, traditional silicon-based technologies suffer from poor quantum efficiency due to a reduction in the detector stopping power. Detectors produced from high density semiconductors such as cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) have improved efficiency at higher X-ray energies and are capable of room temperature operation. Single element systems, and more recently pixelated imaging detectors such as the high energy X-ray imaging technology (HEXITEC) system, are capable of achieving energy resolutions of the order of 1% at 100 keV.
In recent years, a different type of EDS detector, based upon a superconducting microcalorimeter, has also become commercially available. This new technology combines the simultaneous detection capabilities of EDS with the high spectral resolution of WDS. The EDS microcalorimeter consists of two components: an absorber, and a superconducting transition-edge sensor (TES) thermometer. The former absorbs X-rays emitted from the sample and converts this energy into heat; the latter measures the subsequent change in temperature due to the influx of heat. The EDS microcalorimeter has historically suffered from a number of drawbacks, including low count rates and small detector areas. The count rate is hampered by its reliance on the time constant of the calorimeter's electrical circuit. The detector area must be small in order to keep the heat capacity small and maximize thermal sensitivity (resolution). However, the count rate and detector area have been improved by the implementation of arrays of hundreds of superconducting EDS microcalorimeters, and the importance of this technology is growing.
See also
Elemental mapping
Scanning electron microscopy
Transmission electron microscopy
X-ray microtomography
References
External links
MICROANALYST.NET – Information portal with X-ray microanalysis and EDX contents
Learn how to do EDS in an SEM – an interactive learning environment provided by Microscopy Australia
Scientific techniques
Measuring instruments
X-ray spectroscopy
X-rays | Energy-dispersive X-ray spectroscopy | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,978 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Measuring instruments",
"X-ray spectroscopy",
"Spectroscopy"
] |
83,909 | https://en.wikipedia.org/wiki/Graham%27s%20law | Graham's law of effusion (also called Graham's law of diffusion) was formulated by Scottish physical chemist Thomas Graham in 1848. Graham found experimentally that the rate of effusion of a gas is inversely proportional to the square root of the molar mass of its particles. This formula is stated as:
$$\frac{\text{Rate}_1}{\text{Rate}_2} = \sqrt{\frac{M_2}{M_1}},$$
where:
Rate1 is the rate of effusion for the first gas. (volume or number of moles per unit time).
Rate2 is the rate of effusion for the second gas.
M1 is the molar mass of gas 1.
M2 is the molar mass of gas 2.
Graham's law states that the rate of diffusion or of effusion of a gas is inversely proportional to the square root of its molecular weight. Thus, if the molecular weight of one gas is four times that of another, it would diffuse through a porous plug or escape through a small pinhole in a vessel at half the rate of the other (heavier gases diffuse more slowly). A complete theoretical explanation of Graham's law was provided years later by the kinetic theory of gases. Graham's law provides a basis for separating isotopes by diffusion—a method that came to play a crucial role in the development of the atomic bomb.
Graham's law is most accurate for molecular effusion which involves the movement of one gas at a time through a hole. It is only approximate for diffusion of one gas in another or in air, as these processes involve the movement of more than one gas.
In the same conditions of temperature and pressure, the molar mass is proportional to the mass density. Therefore, the rates of diffusion of different gases are inversely proportional to the square roots of their mass densities:
$$\frac{\text{Rate}_1}{\text{Rate}_2} = \sqrt{\frac{\rho_2}{\rho_1}},$$
where ρ is the mass density.
Examples
First Example: Let gas 1 be H2 and gas 2 be O2. (This example is solving for the ratio between the rates of the two gases.)
$$\frac{\text{Rate}_{\text{H}_2}}{\text{Rate}_{\text{O}_2}} = \sqrt{\frac{32}{2}} = \sqrt{16} = 4.$$
Therefore, hydrogen molecules effuse four times faster than those of oxygen.
Graham's law can also be used to find the approximate molecular weight of a gas if one gas is a known species, and if there is a specific ratio between the rates of two gases (such as in the previous example). The equation can be solved for the unknown molecular weight.
Graham's law was the basis for separating uranium-235 from uranium-238 found in natural uraninite (uranium ore) during the Manhattan Project to build the first atomic bomb. The United States government built a gaseous diffusion plant at the Clinton Engineer Works in Oak Ridge, Tennessee, at the cost of $479 million. In this plant, uranium from uranium ore was first converted to uranium hexafluoride and then forced repeatedly to diffuse through porous barriers, each time becoming a little more enriched in the slightly lighter uranium-235 isotope.
Second Example: An unknown gas diffuses 0.25 times as fast as He. What is the molar mass of the unknown gas?
Using the formula of gaseous diffusion, we can set up this equation:
$$\frac{\text{Rate}_{\text{unknown}}}{\text{Rate}_{\text{He}}} = \sqrt{\frac{M_{\text{He}}}{M_{\text{unknown}}}},$$
which, because the problem states that the rate of diffusion of the unknown gas relative to the helium gas is 0.25, is the same as
$$0.25 = \sqrt{\frac{4.003}{M_{\text{unknown}}}}.$$
Rearranging the equation results in
$$M_{\text{unknown}} = \frac{4.003}{0.25^2} \approx 64 \text{ g/mol}.$$
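Both worked examples can be reproduced with a few lines of Python; the molar masses are assumed standard values:

```python
import math

def rate_ratio(m1, m2):
    """Graham's law: Rate1/Rate2 = sqrt(M2/M1), molar masses in g/mol."""
    return math.sqrt(m2 / m1)

def unknown_molar_mass(m_known, ratio_unknown_to_known):
    """Solve Graham's law for the unknown molar mass given the rate ratio."""
    return m_known / ratio_unknown_to_known ** 2

# First example: H2 (2.016 g/mol) vs O2 (32.00 g/mol)
print(rate_ratio(2.016, 32.00))          # ~3.98, i.e. about 4x faster

# Second example: a gas diffusing 0.25x as fast as He (4.003 g/mol)
print(unknown_molar_mass(4.003, 0.25))   # ~64 g/mol
```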
History
Graham's research on the diffusion of gases was triggered by his reading about the observations of German chemist Johann Döbereiner that hydrogen gas diffused out of a small crack in a glass bottle faster than the surrounding air diffused in to replace it. Graham measured the rate of diffusion of gases through plaster plugs, through very fine tubes, and through small orifices. In this way he slowed down the process so that it could be studied quantitatively. He first stated in 1831 that the rate of effusion of a gas is inversely proportional to the square root of its density, and later in 1848 showed that this rate is inversely proportional to the square root of the molar mass. Graham went on to study the diffusion of substances in solution and in the process made the discovery that some apparent solutions actually are suspensions of particles too large to pass through a parchment filter. He termed these materials colloids, a term that has come to denote an important class of finely divided materials.
Around the time Graham did his work, the concept of molecular weight was being established largely through the measurements of gases. Daniel Bernoulli suggested in 1738 in his book Hydrodynamica that heat increases in proportion to the velocity, and thus kinetic energy, of gas particles. Italian physicist Amedeo Avogadro also suggested in 1811 that equal volumes of different gases contain equal numbers of molecules. Thus, the relative molecular weights of two gases are equal to the ratio of weights of equal volumes of the gases. Avogadro's insight together with other studies of gas behaviour provided a basis for later theoretical work by Scottish physicist James Clerk Maxwell to explain the properties of gases as collections of small particles moving through largely empty space.
Perhaps the greatest success of the kinetic theory of gases, as it came to be called, was the discovery that for gases, the temperature as measured on the Kelvin (absolute) temperature scale is directly proportional to the average kinetic energy of the gas molecules. Graham's law for diffusion could thus be understood as a consequence of the molecular kinetic energies being equal at the same temperature.
The rationale of the above can be summed up as follows:
Kinetic energy of each type of particle (in this example, hydrogen and oxygen, as above) within the system is equal, as defined by thermodynamic temperature:
$$\frac{1}{2}m_{\text{H}_2} v_{\text{H}_2}^2 = \frac{1}{2}m_{\text{O}_2} v_{\text{O}_2}^2,$$
which can be simplified and rearranged to:
$$\frac{v_{\text{H}_2}}{v_{\text{O}_2}} = \sqrt{\frac{m_{\text{O}_2}}{m_{\text{H}_2}}}.$$
Ergo, when constraining the system to the passage of particles through an area, Graham's law appears as written at the start of this article.
See also
Sieverts' law
Henry's law
Gas laws
Scientific laws named after people
Viscosity
Drag (physics)
Vapour Density
References
Eponymous laws of physics
Gas laws | Graham's law | [
"Chemistry"
] | 1,221 | [
"Gas laws"
] |
84,026 | https://en.wikipedia.org/wiki/Metre%20%28music%29 | In music, metre (British spelling) or meter (American spelling) refers to regularly recurring patterns and accents such as bars and beats. Unlike rhythm, metric onsets are not necessarily sounded, but are nevertheless implied by the performer (or performers) and expected by the listener.
A variety of systems exist throughout the world for organising and playing metrical music, such as the Indian system of tala and similar systems in Arabic and African music.
Western music inherited the concept of metre from poetry, where it denotes the number of lines in a verse, the number of syllables in each line, and the arrangement of those syllables as long or short, accented or unaccented. The first coherent system of rhythmic notation in modern Western music was based on rhythmic modes derived from the basic types of metrical unit in the quantitative metre of classical ancient Greek and Latin poetry.
Later music for dances such as the pavane and galliard consisted of musical phrases to accompany a fixed sequence of basic steps with a defined tempo and time signature. The English word "measure", originally an exact or just amount of time, came to denote either a poetic rhythm, a bar of music, or else an entire melodic verse or dance involving sequences of notes, words, or movements that may last four, eight or sixteen bars.
Metre is related to and distinguished from pulse, rhythm (grouping), and beats:
Metric structure
The term metre is not very precisely defined. Stewart MacPherson preferred to speak of "time" and "rhythmic shape", while Imogen Holst preferred "measured rhythm". However, Justin London has written a book about musical metre, which "involves our initial perception as well as subsequent anticipation of a series of beats that we abstract from the rhythm surface of the music as it unfolds in time". This "perception" and "abstraction" of rhythmic bar is the foundation of human instinctive musical participation, as when we divide a series of identical clock-ticks into "tick–tock–tick–tock". "Rhythms of recurrence" arise from the interaction of two levels of motion, the faster providing the pulse and the slower organizing the beats into repetitive groups. In his book The Rhythms of Tonal Music, Joel Lester notes that, "[o]nce a metric hierarchy has been established, we, as listeners, will maintain that organization as long as minimal evidence is present".
"Meter may be defined as a regular, recurring pattern of strong and weak beats. This recurring pattern of durations is identified at the beginning of a composition by a meter signature (time signature). ... Although meter is generally indicated by time signatures, it is important to realize that meter is not simply a matter of notation". A definition of musical metre requires the possibility of identifying a repeating pattern of accented pulses – a "pulse-group" – which corresponds to the foot in poetry. Frequently a pulse-group can be identified by taking the accented beat as the first pulse in the group and counting the pulses until the next accent.
Frequently metres can be subdivided into a pattern of duples and triples.
For example, a 3/4 metre consists of three units of a 1/4 pulse group, and a 6/8 metre consists of two units of a 3/8 pulse group. In turn, metric bars may comprise 'metric groups' - for example, a musical phrase or melody might consist of two bars of 3/4.
The level of musical organisation implied by musical metre includes the most elementary levels of musical form. Metrical rhythm, measured rhythm, and free rhythm are general classes of rhythm and may be distinguished in all aspects of temporality:
Metrical rhythm, by far the most common class in Western music, is where each time value is a multiple or fraction of a fixed unit (beat, see paragraph below), and normal accents reoccur regularly, providing systematic grouping (bars, divisive rhythm).
Measured rhythm is where each time value is a multiple or fraction of a specified time unit but there are not regularly recurring accents (additive rhythm).
Free rhythm is where the time values are not based on any fixed unit; since the time values lack a fixed unit, regularly recurring accents are no longer a possibility.
Some music, including chant, has freer rhythm, like the rhythm of prose compared to that of verse. Some music, such as some graphically scored works since the 1950s and non-European music such as Honkyoku repertoire for shakuhachi, may be considered ametric. The music term senza misura is Italian for "without metre", meaning to play without a beat, using time (e.g. seconds elapsed on an ordinary clock) if necessary to determine how long it will take to play the bar.
Metric structure includes metre, tempo, and all rhythmic aspects that produce temporal regularity or structure, against which the foreground details or durational patterns of any piece of music are projected. Metric levels may be distinguished: the beat level is the metric level at which pulses are heard as the basic time unit of the piece. Faster levels are division levels, and slower levels are multiple levels. A rhythmic unit is a durational pattern which occupies a period of time equivalent to a pulse or pulses on an underlying metric level.
Frequently encountered types of metre
Metres classified by the number of beats per measure
Duple and quadruple metre
In duple metre, each measure is divided into two beats, or a multiple thereof (quadruple metre).
For example, in the time signature 2/4, each bar contains two (2) quarter-note (4) beats. In the time signature 6/8, each bar contains two dotted-quarter-note beats.
Corresponding quadruple metres are 4/4, which has four quarter-note beats per measure, and 12/8, which has four dotted-quarter-note beats per bar.
Triple metre
Triple metre is a metre in which each bar is divided into three beats, or a multiple thereof. For example, in the time signature 3/4, each bar contains three (3) quarter-note (4) beats, and with a time signature of 9/8, each bar contains three dotted-quarter beats.
More than four beats
Metres with more than four beats are called quintuple metres (5), sextuple metres (6), septuple metres (7), etc.
In classical music theory it is presumed that only divisions of two or three are perceptually valid, so a metre not divisible by 2 or 3, such as quintuple metre, say 5/4, is assumed to be equivalent either to a measure of 2/4 followed by a measure of 3/4, or the opposite: 3/4 then 2/4. Higher metres which are divisible by 2 or 3 are considered equivalent to groupings of duple or triple metre measures; thus, 6/4, for example, is rarely used because it is considered equivalent to two measures of 3/4. See: hypermetre and additive rhythm and divisive rhythm.
Higher metres are used more commonly in analysis, if not performance, of cross-rhythms, as the lowest number possible which may be used to count a polyrhythm is the lowest common denominator (LCD) of the two or more metric divisions. For example, much African music is recorded in Western notation as being in 12/8, the LCD of 4 and 3.
Metres classified by the subdivisions of a beat
Simple metre and compound metre are distinguished by the way the beats are subdivided.
Simple metre
Simple metre (or simple time) is a metre in which each beat of the bar divides naturally into two (as opposed to three) equal parts. The top number in the time signature will be 2, 3, 4, 5, etc.
For example, in the time signature 3/4, each bar contains three quarter-note beats, and each of those beats divides into two eighth notes, making it a simple metre. More specifically, it is a simple triple metre because there are three beats in each measure; simple duple (two beats) or simple quadruple (four) are also common metres.
Compound metre
Compound metre (or compound time), is a metre in which each beat of the bar divides naturally into three equal parts. That is, each beat contains a triple pulse. The top number in the time signature will be 6, 9, 12, 15, 18, 24, etc.
Compound metres are written with a time signature that shows the number of divisions of beats in each bar as opposed to the number of beats. For example, compound duple (two beats, each divided into three) is written as a time signature with a numerator of six, for example, 6/8. Contrast this with the time signature 3/4, which also assigns six eighth notes to each measure, but by convention connotes a simple triple time: 3 quarter-note beats.
Examples of compound metre include 6/8 (compound duple metre), 9/8 (compound triple metre), and 12/8 (compound quadruple metre).
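A minimal sketch of the numerator conventions described above (it deliberately ignores ambiguous cases, e.g. a fast 3/8 bar felt as one compound beat):

```python
def classify_metre(numerator):
    """Classify a time-signature numerator per the conventions above.

    Simple metres have numerators 2, 3, 4, ... (each beat divides in two);
    compound metres have numerators 6, 9, 12, ... (each notated beat is a
    dotted value dividing in three, so beats = numerator / 3).
    """
    if numerator >= 6 and numerator % 3 == 0:
        beats, kind = numerator // 3, "compound"
    else:
        beats, kind = numerator, "simple"
    names = {2: "duple", 3: "triple", 4: "quadruple"}
    return f"{kind} {names.get(beats, f'{beats}-beat')}"

for sig in (2, 3, 4, 6, 9, 12):
    print(sig, "->", classify_metre(sig))
```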
Although 3/4 and 6/8 are not to be confused, they use bars of the same length, so it is easy to "slip" between them just by shifting the location of the accents. This interpretational switch has been exploited, for example, by Leonard Bernstein, in the song "America":
Compound metre divided into three parts could theoretically be transcribed into musically equivalent simple metre using triplets. Likewise, simple metre can be shown in compound through duples. In practice, however, this is rarely done because it disrupts conducting patterns when the tempo changes. When conducting in 6/8, conductors typically provide two beats per bar; however, all six beats may be performed when the tempo is very slow.
Compound time is associated with "lilting" and dancelike qualities. Folk dances often use compound time. Many Baroque dances are often in compound time: some gigues, the courante, and sometimes the passepied and the siciliana.
Metre in song
The concept of metre in music derives in large part from the poetic metre of song and includes not only the basic rhythm of the foot, pulse-group or figure used but also the rhythmic or formal arrangement of such figures into musical phrases (lines, couplets) and of such phrases into melodies, passages or sections (stanzas, verses) to give what one writer calls "the time pattern of any song".
Traditional and popular songs may draw heavily upon a limited range of metres, leading to interchangeability of melodies. Early hymnals commonly did not include musical notation but simply texts that could be sung to any tune known by the singers that had a matching metre. For example, The Blind Boys of Alabama rendered the hymn "Amazing Grace" to the setting of The Animals' version of the folk song "The House of the Rising Sun". This is possible because the texts share a popular basic four-line (quatrain) verse-form called ballad metre or, in hymnals, common metre, the four lines having a syllable-count of 8–6–8–6 (Hymns Ancient and Modern Revised), the rhyme-scheme usually following suit: ABAB. There is generally a pause in the melody in a cadence at the end of the shorter lines so that the underlying musical metre is 8–8–8–8 beats, the cadences dividing this musically into two symmetrical "normal" phrases of four bars each.
In some regional music, for example Balkan music (like Bulgarian music, and the Macedonian metre), a wealth of irregular or compound metres are used. Other terms for this are "additive metre" and "imperfect time".
Metre in dance music
Metre is often essential to any style of dance music, such as the waltz or tango, that has instantly recognizable patterns of beats built upon a characteristic tempo and bar. The Imperial Society of Teachers of Dancing defines the tango, for example, as to be danced in 2/4 time at approximately 66 beats per minute. The basic slow step forwards or backwards, lasting for one beat, is called a "slow", so that a full "right–left" step is equal to one bar.
But step-figures such as turns, the corte and walk-ins also require "quick" steps of half the duration, each entire figure requiring 3–6 "slow" beats. Such figures may then be "amalgamated" to create a series of movements that may synchronise to an entire musical section or piece. This can be thought of as an equivalent of prosody (see also: prosody (music)).
Metre in classical music
In music of the common practice period (about 1600–1900), there are four different families of time signature in common use:
Simple duple: two or four beats to a bar, each divided by two, the top number being "2" or "4" (2/4, 2/2, ... , 4/4, 4/2, ...). When there are four beats to a bar, it is alternatively referred to as "quadruple" time.
Simple triple: three beats to a bar, each divided by two, the top number being "3" (3/4, 3/8, 3/2, ...)
Compound duple: two beats to a bar, each divided by three, the top number being "6" (6/8, 6/4, ...) Similarly compound quadruple, four beats to a bar, each divided by three, the top number being "12" (12/8, 12/16, ...)
Compound triple: three beats to a bar, each divided by three, the top number being "9" (9/8, 9/4, 9/16)
If the beat is divided into two the metre is simple, if divided into three it is compound. If each bar is divided into two it is duple and if into three it is triple. Some people also label quadruple as its own category, while some consider it as two duples. Any other division is considered additively, as a bar of five beats may be broken into duple+triple (12123) or triple+duple (12312) depending on accent. However, in some music, especially at faster tempos, it may be treated as one unit of five.
Changing metre
In 20th-century concert music, it became more common to switch metre - the end of Igor Stravinsky's The Rite of Spring is an example. This practice is sometimes called mixed metres.
A metric modulation is a modulation from one metric unit or metre to another.
The use of asymmetrical rhythms - sometimes called aksak rhythm (the Turkish word for "limping") - also became more common in the 20th century: such metres include quintuple as well as more complex additive metres along the lines of 2+2+3 time, where each bar has two 2-beat units and a 3-beat unit with a stress at the beginning of each unit. Similar metres are often used in Bulgarian folk dances and Indian classical music.
Hypermetre
Hypermetre is large-scale metre (as opposed to smaller-scale metre). Hypermeasures consist of hyperbeats. "Hypermeter is metre, with all its inherent characteristics, at the level where bars act as beats". For example, the four-bar hypermeasures are the prototypical structure for country music, in and against which country songs work. In some styles, two- and four-bar hypermetres are common.
The term was coined, together with "hypermeasures", by Edward T. Cone, who regarded it as applying to a relatively small scale, conceiving of a still larger kind of gestural "rhythm" imparting a sense of "an extended upbeat followed by its downbeat". Justin London, by contrast, contends that in terms of multiple and simultaneous levels of metrical "entrainment" (evenly spaced temporal events "that we internalize and come to expect", p. 9), there is no in-principle distinction between metre and hypermetre; instead, they are the same phenomenon occurring at different levels.
Middleton, among others, has described musical metre in terms of deep structure, using generative concepts to show how different metres (4/4, 3/4, etc.) generate many different surface rhythms. For example, the first phrase of The Beatles' "A Hard Day's Night", excluding the syncopation on "night", may be generated from its metre of 4/4:
[Table omitted: a generative diagram that subdivides the 4/4 bar level by level until each syllable of "It's been a hard day's night..." falls on its own subdivision.]
The syncopation may then be added, moving "night" forward one eighth note, and the first phrase is generated.
Polymetre
With polymetre, the bar sizes differ, but the beat remains constant. Since the beat is the same, the various metres eventually agree. (Four bars of 7/4 = seven bars of 4/4.) An example is the second movement, titled "Scherzo polimetrico", of Edmund Rubbra's Second String Quartet (1951), in which a constant triplet texture holds together overlapping bars of several different metres, and barlines rarely coincide in all four instruments.
With polyrhythm, the number of beats varies within a fixed bar length. For example, in a 4:3 polyrhythm, one part plays 4/4 while the other plays 3/4, but the beats are stretched so that three beats of 3/4 are played in the same time as four beats of 4/4. More generally, sometimes rhythms are combined in a way that is neither tactus nor bar preserving - the beat differs and the bar size also differs. See Polytempi.
Research into the perception of polymetre shows that listeners often either extract a composite pattern that is fitted to a metric framework, or focus on one rhythmic stream while treating others as "noise". This is consistent with the Gestalt psychology tenet that "the figure–ground dichotomy is fundamental to all perception". In the music, the two metres will meet each other after a specific number of beats. For example, a 3/4 metre and 4/4 metre will meet after 12 beats.
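The realignment point is just the least common multiple of the two bar lengths, as a short sketch shows (the 4-against-7 case matches the four-bars-equals-seven-bars example above):

```python
from math import lcm  # Python 3.9+

def beats_until_aligned(beats_a, beats_b):
    """Number of (shared, constant) beats after which two bar lengths
    coincide again, e.g. 3-beat and 4-beat bars realign after 12 beats."""
    return lcm(beats_a, beats_b)

print(beats_until_aligned(3, 4))  # 12, as in the example above
print(beats_until_aligned(4, 7))  # 28: four 7-beat bars = seven 4-beat bars
```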
In "Toads of the Short Forest" (from the album Weasels Ripped My Flesh), composer Frank Zappa explains: "At this very moment on stage we have drummer A playing in , drummer B playing in , the bass playing in , the organ playing in , the tambourine playing in , and the alto sax blowing his nose". "Touch And Go", a hit single by The Cars, has polymetric verses, with the drums and bass playing in , while the guitar, synthesizer, and vocals are in (the choruses are entirely in ). Magma uses extensively on (e.g. Mëkanïk Dëstruktïẁ Kömmandöh) and some other combinations. King Crimson's albums of the eighties have several songs that use polymetre of various combinations.
Polymetres are a defining characteristic of the music of Meshuggah, whose compositions often feature unconventionally timed rhythm figures cycling over a 4/4 base.
See also
Metre (hymn)
Metre (poetry)
Hymn tune
List of musical works in unusual time signatures
References
Sources
, chapters "Metre" and "Rhythm"
Further reading
Anon. (1999). "Polymeter." Baker's Student Encyclopedia of Music, 3 vols., ed. Laura Kuhn. New York: Schirmer-Thomson Gale; London: Simon & Schuster. . Online version 2006:
Anon. [2001]. "Polyrhythm". Grove Music Online. (Accessed 4 April 2009)
Hindemith, Paul (1974). Elementary Training for Musicians, second edition (rev. 1949). Mainz, London, and New York: Schott. .
Honing, Henkjan (2002). "Structure and Interpretation of Rhythm and Timing." Tijdschrift voor Muziektheorie 7(3):227–232. (pdf)
Larson, Steve (2006). "Rhythmic Displacement in the Music of Bill Evans". In Structure and Meaning in Tonal Music: Festschrift in Honor of Carl Schachter, edited by L. Poundie Burstein and David Gagné, 103–122. Harmonologia Series, no. 12. Hillsdale, New York: Pendragon Press. .
Waters, Keith (1996). "Blurring the Barline: Metric Displacement in the Piano Solos of Herbie Hancock". Annual Review of Jazz Studies 8:19–37.
Articles containing video clips
Patterns
Cognitive musicology | Metre (music) | [
"Physics"
] | 4,454 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
84,130 | https://en.wikipedia.org/wiki/Centrifugal%20governor | A centrifugal governor is a specific type of governor with a feedback system that controls the speed of an engine by regulating the flow of fuel or working fluid, so as to maintain a near-constant speed. It uses the principle of proportional control.
Centrifugal governors, also known as "centrifugal regulators" and "fly-ball governors", were invented by Christiaan Huygens and used to regulate the distance and pressure between millstones in windmills in the 17th century. In 1788, James Watt adapted one to control his steam engine where it regulates the admission of steam into the cylinder(s), a development that proved so important he is sometimes called the inventor. Centrifugal governors' widest use was on steam engines during the Steam Age in the 19th century. They are also found on stationary internal combustion engines and variously fueled turbines, and in some modern striking clocks.
A simple governor does not maintain an exact speed but a speed range, since under increasing load the governor opens the throttle as the speed (RPM) decreases.
Operation
The devices shown are on steam engines. Power is supplied to the governor from the engine's output shaft by a belt or chain connected to the lower belt wheel. The governor is connected to a throttle valve that regulates the flow of working fluid (steam) supplying the prime mover. As the speed of the prime mover increases, the central spindle of the governor rotates at a faster rate, and the kinetic energy of the balls increases. This allows the two masses on lever arms to move outwards and upwards against gravity. If the motion goes far enough, this motion causes the lever arms to pull down on a thrust bearing, which moves a beam linkage, which reduces the aperture of a throttle valve. The rate of working-fluid entering the cylinder is thus reduced and the speed of the prime mover is controlled, preventing over-speeding.
Mechanical stops may be used to limit the range of throttle motion, as seen near the masses in the image at right.
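A heavily idealized sketch of this behaviour as proportional ("droop") control; every constant here is invented for illustration, and no attempt is made to model real flyweight dynamics:

```python
def simulate_governor(set_point=100.0, gain=0.05, load=0.3,
                      torque_per_throttle=10.0, drag=0.1,
                      steps=400, dt=0.1):
    """Crude Euler integration of an engine speed under droop control."""
    speed, throttle = 0.0, 1.0
    for _ in range(steps):
        # Governor: flyweights rise with speed, closing the throttle
        # linearly once speed exceeds the set point (proportional control).
        throttle = min(1.0, max(0.0, 1.0 - gain * (speed - set_point)))
        # Engine: torque from the throttle, minus a load-dependent drag.
        accel = torque_per_throttle * throttle - load * drag * speed
        speed += accel * dt
    return speed, throttle

# Different loads settle at different speeds: a simple proportional
# governor holds a speed range, not an exact speed.
for load in (0.3, 0.6):
    v, t = simulate_governor(load=load)
    print(f"load={load}: steady speed ~ {v:.1f}, throttle ~ {t:.2f}")
```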
Non-gravitational regulation
A limitation of the two-arm, two-ball governor is its reliance on gravity, and that the governor must stay upright relative to the surface of the Earth for gravity to retract the balls when the governor slows down.
Governors can be built that do not use gravitational force, by using a single straight arm with weights on both ends, a center pivot attached to a spinning axle, and a spring that tries to force the weights towards the center of the spinning axle. The two weights on opposite ends of the pivot arm counterbalance any gravitational effects, but both weights use centrifugal force to work against the spring and attempt to rotate the pivot arm towards a perpendicular axis relative to the spinning axle.
Spring-retracted non-gravitational governors are commonly used in single-phase alternating current (AC) induction motors to turn off the starting field coil when the motor's rotational speed is high enough.
They are also commonly used in snowmobile and all-terrain vehicle (ATV) continuously variable transmissions (CVT), both to engage/disengage vehicle motion and to vary the transmission's pulley diameter ratio in relation to the engine revolutions per minute.
History
Centrifugal governors were invented by Christiaan Huygens and used to regulate the distance and pressure between millstones in windmills in the 17th century.
James Watt designed his first governor in 1788 following a suggestion from his business partner Matthew Boulton. It was a conical pendulum governor and one of the final series of innovations Watt had employed for steam engines. A giant statue of Watt's governor stands at Smethwick in the English West Midlands.
Uses
Centrifugal governors' widest use was on steam engines during the Steam Age in the 19th century. They are also found on stationary internal combustion engines and variously fueled turbines, and in some modern striking clocks.
Centrifugal governors are used in many modern repeating watches to limit the speed of the striking train, so the repeater does not run too quickly.
Another kind of centrifugal governor consists of a pair of masses on a spindle inside a cylinder, the masses or the cylinder being coated with pads, somewhat like a centrifugal clutch or a drum brake. This is used in a spring-loaded record player and a spring-loaded telephone dial to limit the speed.
Dynamic systems
The centrifugal governor is often used in the cognitive sciences as an example of a dynamic system, in which the representation of information cannot be clearly separated from the operations being applied to the representation. And, because the governor is a servomechanism, its analysis in a dynamic system is not trivial. In 1868, James Clerk Maxwell wrote a famous paper "On Governors" that is widely considered a classic in feedback control theory. Maxwell distinguishes moderators (a centrifugal brake) and governors which control motive power input. He considers devices by James Watt, Professor James Thomson, Fleeming Jenkin, William Thomson, Léon Foucault and Carl Wilhelm Siemens (a liquid governor).
Natural selection
In his famous 1858 paper to the Linnean Society, which led Darwin to publish On the Origin of Species, Alfred Russel Wallace used governors as a metaphor for the evolutionary principle:
The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.
The cybernetician and anthropologist Gregory Bateson thought highly of Wallace's analogy and discussed the topic in his 1979 book Mind and Nature: A Necessary Unity, and other scholars have continued to explore the connection between natural selection and systems theory.
Culture
A centrifugal governor is part of the city seal of Manchester, New Hampshire in the US and is also used on the city flag. A 2017 effort to change the design was rejected by voters.
A stylized centrifugal governor is also part of the coat of arms of the Swedish Work Environment Authority.
See also
Cataract (beam engine)
Centrifugal switch
Hit and miss engine
References
External links
British inventions
Control devices
Cybernetics
Inventions by Christiaan Huygens
Mechanical power control
Mechanisms (engineering)
Rotating machines
Scottish inventions
Steam engine governors | Centrifugal governor | [
"Physics",
"Technology",
"Engineering"
] | 1,314 | [
"Machines",
"Control devices",
"Physical systems",
"Rotating machines",
"Control engineering",
"Mechanics",
"Mechanical engineering",
"Mechanical power control",
"Mechanisms (engineering)"
] |
84,139 | https://en.wikipedia.org/wiki/Ketosis | Ketosis is a metabolic state characterized by elevated levels of ketone bodies in the blood or urine. Physiological ketosis is a normal response to low glucose availability. In physiological ketosis, ketones in the blood are elevated above baseline levels, but the body's acid–base homeostasis is maintained. This contrasts with ketoacidosis, an uncontrolled production of ketones that occurs in pathologic states and causes a metabolic acidosis, which is a medical emergency. Ketoacidosis is most commonly the result of complete insulin deficiency in type 1 diabetes or late-stage type 2 diabetes. Ketone levels can be measured in blood, urine or breath and are generally between 0.5 and 3.0 millimolar (mM) in physiological ketosis, while ketoacidosis may cause blood concentrations greater than 10 mM.
Trace levels of ketones are always present in the blood and increase when blood glucose reserves are low and the liver shifts from primarily metabolizing carbohydrates to metabolizing fatty acids. This occurs during states of increased fatty acid oxidation such as fasting, starvation, carbohydrate restriction, or prolonged exercise. When the liver rapidly metabolizes fatty acids into acetyl-CoA, some acetyl-CoA molecules can then be converted into the ketone bodies acetoacetate, beta-hydroxybutyrate, and acetone. These ketone bodies can function as an energy source as well as signalling molecules. The liver itself cannot utilize these molecules for energy, so the ketone bodies are released into the blood for use by peripheral tissues including the brain.
When ketosis is induced by carbohydrate restriction, it is sometimes referred to as nutritional ketosis. A low-carbohydrate, moderate protein diet that can lead to ketosis is called a ketogenic diet. Ketosis is well-established as a treatment for epilepsy and is also effective in treating type 2 diabetes.
Definitions
Normal serum levels of ketone bodies are less than 0.5 mM. Hyperketonemia is conventionally defined as levels in excess of 1 mM.
Physiological ketosis
Physiological ketosis is the non-pathological (normal functioning) elevation of ketone bodies that can result from any state of increased fatty acid oxidation including fasting, prolonged exercise, or very low-carbohydrate diets such as the ketogenic diet. In physiological ketosis, serum ketone levels generally remain below 3 mM.
Ketoacidosis
Ketoacidosis is a pathological state of uncontrolled production of ketones that results in a metabolic acidosis, with serum ketone levels typically in excess of 3 mM. Ketoacidosis is most commonly caused by a deficiency of insulin in type 1 diabetes or late stage type 2 diabetes but can also be the result of chronic heavy alcohol use, salicylate poisoning, or isopropyl alcohol ingestion. Ketoacidosis causes significant metabolic derangements and is a life-threatening medical emergency. Ketoacidosis is distinct from physiological ketosis as it requires failure of the normal regulation of ketone body production.
Causes
Elevated blood ketone levels are most often caused by accelerated ketone production but may also be caused by consumption of exogenous ketones or precursors.
When glycogen and blood glucose reserves are low, a metabolic shift occurs in order to save glucose for the brain which is unable to use fatty acids for energy. This shift involves increasing fatty acid oxidation and production of ketones in the liver as an alternate energy source for the brain as well as the skeletal muscles, heart, and kidney. Low levels of ketones are always present in the blood and increase under circumstances of low glucose availability. For example, after an overnight fast, 2–6% of energy comes from ketones and this increases to 30–40% after a 3-day fast.
The amount of carbohydrate restriction required to induce a state of ketosis is variable and depends on activity level, insulin sensitivity, genetics, age and other factors, but ketosis will usually occur when consuming less than 50 grams of carbohydrates per day for at least three days.
Neonates, pregnant women and lactating women are populations that develop physiological ketosis especially rapidly in response to energetic challenges such as fasting or illness. This can progress to ketoacidosis in the setting of illness, although it occurs rarely. Propensity for ketone production in neonates is caused by their high-fat breast milk diet, disproportionately large central nervous system and limited liver glycogen.
Biochemistry
The precursors of ketone bodies include fatty acids from adipose tissue or the diet and ketogenic amino acids. The formation of ketone bodies occurs via ketogenesis in the mitochondrial matrix of liver cells.
Fatty acids can be released from adipose tissue by adipokine signaling of high glucagon and epinephrine levels and low insulin levels. High glucagon and low insulin correspond to times of low glucose availability such as fasting. Fatty acids bound to coenzyme A allow penetration into mitochondria. Once inside the mitochondrion, the bound fatty acids are used as fuel in cells predominantly through beta oxidation, which cleaves two carbons from the acyl-CoA molecule in every cycle to form acetyl-CoA. Acetyl-CoA enters the citric acid cycle, where it undergoes an aldol condensation with oxaloacetate to form citric acid, which then proceeds around the tricarboxylic acid (TCA) cycle, harvesting a very high energy yield per carbon in the original fatty acid.
Acetyl-CoA can be metabolized through the TCA cycle in any cell, but it can also undergo ketogenesis in the mitochondria of liver cells. When glucose availability is low, oxaloacetate is diverted away from the TCA cycle and is instead used to produce glucose via gluconeogenesis. This utilization of oxaloacetate in gluconeogenesis can make it unavailable to condense with acetyl-CoA, preventing entrance into the TCA cycle. In this scenario, energy can be harvested from acetyl-CoA through ketone production.
In ketogenesis, two acetyl-CoA molecules condense to form acetoacetyl-CoA via thiolase. Acetoacetyl-CoA briefly combines with another acetyl-CoA via HMG-CoA synthase to form hydroxy-β-methylglutaryl-CoA, which forms the ketone body acetoacetate via HMG-CoA lyase. Acetoacetate can then reversibly convert to another ketone body—D-β-hydroxybutyrate—via D-β-hydroxybutyrate dehydrogenase. Alternatively, acetoacetate can spontaneously decarboxylate to a third ketone body (acetone) and carbon dioxide, although the pathway generates much greater concentrations of acetoacetate and D-β-hydroxybutyrate than of acetone. The resulting ketone bodies cannot be used for energy by the liver, so they are exported from the liver to supply energy to the brain and peripheral tissues.
In addition to fatty acids, deaminated ketogenic amino acids can also be converted into intermediates in the citric acid cycle and produce ketone bodies.
Measurement
Ketone levels can be measured by testing urine, blood or breath. There are limitations in directly comparing these methods as they measure different ketone bodies.
Urine testing
Urine testing is the most common method of testing for ketones. Urine test strips utilize a nitroprusside reaction with acetoacetate to give a semi-quantitative measure based on color change of the strip. Although beta-hydroxybutyrate is the predominant circulating ketone, urine test strips only measure acetoacetate. Urinary ketones often correlate poorly with serum levels because of variability in excretion of ketones by the kidney, influence of hydration status, and renal function.
Serum testing
Finger-stick ketone meters allow instant testing of beta-hydroxybutyrate levels in the blood, similar to glucometers. Beta-hydroxybutyrate levels in blood can also be measured in a laboratory.
Medical uses
Epilepsy
Ketosis induced by a ketogenic diet is a long-accepted treatment for refractory epilepsy.
Obesity and metabolic syndrome
Ketosis can improve markers of metabolic syndrome through reduction in serum triglycerides, elevation in high-density lipoprotein (HDL) as well as increased size and volume of low-density lipoprotein (LDL) particles. These changes are consistent with an improved lipid profile despite potential increases in total cholesterol level.
Safety
The safety of ketosis from low-carbohydrate diets is often called into question by clinicians, researchers and the media. A common safety concern stems from the misunderstanding of the difference between physiological ketosis and pathologic ketoacidosis. There is also continued debate whether chronic ketosis is a healthy state or a stressor to be avoided. Some argue that humans evolved to avoid ketosis and should not be in ketosis long-term. The counter-argument is that there is no physiological requirement for dietary carbohydrate as adequate energy can be made via gluconeogenesis and ketogenesis indefinitely. Alternatively, the switching between a ketotic and fed state has been proposed to have beneficial effects on metabolic and neurologic health. The effects of sustaining ketosis for up to two years are known from studies of people following a strict ketogenic diet for epilepsy or type 2 diabetes; these studies document short-term adverse effects and raise the possibility of long-term ones. However, literature on longer term effects of intermittent ketosis is lacking.
Medication considerations
Some medications require attention when in a state of ketosis, especially several classes of diabetes medication. SGLT2 inhibitor medications have been associated with cases of euglycemic ketoacidosis – a rare state of high ketones causing a metabolic acidosis with normal blood glucose levels. This usually occurs with missed insulin doses, illness, dehydration or adherence to a low-carbohydrate diet while taking the medication. Additionally, medications used to directly lower blood glucose including insulin and sulfonylureas may cause hypoglycemia if they are not titrated prior to starting a diet that results in ketosis.
Adverse effects
There may be side effects when changing over from glucose metabolism to fat metabolism. These may include headache, fatigue, dizziness, insomnia, difficulty in exercise tolerance, constipation, and nausea, especially in the first days and weeks after starting a ketogenic diet. Breath may develop a sweet, fruity odor via production of acetone, which is exhaled because of its high volatility.
Most adverse effects of long-term ketosis reported are in children because of its longstanding acceptance as a treatment for pediatric epilepsy. These include compromised bone health, stunted growth, hyperlipidemia, and kidney stones.
Contraindications
Ketosis induced by a ketogenic diet should not be pursued by people with pancreatitis because of the high dietary fat content. Ketosis is also contraindicated in pyruvate carboxylase deficiency, porphyria, and other rare genetic disorders of fat metabolism.
Veterinary medicine
In dairy cattle, ketosis commonly occurs during the first weeks after giving birth to a calf and is sometimes referred to as acetonemia. This is the result of an energy deficit when intake is inadequate to compensate for the increased metabolic demand of lactating. The elevated β-hydroxybutyrate concentrations can depress gluconeogenesis, feed intake and the immune system, as well as have an impact on milk composition. Point of care diagnostic tests can be useful to screen for ketosis in cattle.
In sheep, ketosis, evidenced by hyperketonemia with beta-hydroxybutyrate in blood over 0.7 mmol/L, is referred to as pregnancy toxemia. This may develop in late pregnancy in ewes bearing multiple fetuses and is associated with the considerable metabolic demands of the pregnancy. In ruminants, because most glucose in the digestive tract is metabolized by rumen organisms, glucose must be supplied by gluconeogenesis. Pregnancy toxemia is most likely to occur in late pregnancy due to metabolic demand from rapid fetal growth and may be triggered by insufficient feed energy intake due to weather conditions, stress or other causes. Prompt recovery may occur with natural parturition, Caesarean section or induced abortion. Prevention through appropriate feeding and other management is more effective than treatment of advanced stages of pregnancy toxemia.
See also
Bioenergetics
Ketonuria
Ketogenic diet
Very-low-calorie diet
Inuit cuisine
References
Further reading
External links
NHS Direct: Ketosis
The Merck Manual —
Diabetic Ketoacidosis
Alcoholic Ketoacidosis
Metabolism | Ketosis | [
"Chemistry",
"Biology"
] | 2,733 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
84,397 | https://en.wikipedia.org/wiki/Deborah%20number | The Deborah number (De) is a dimensionless number, often used in rheology to characterize the fluidity of materials under specific flow conditions. It quantifies the observation that given enough time even a solid-like material might flow, or a fluid-like material can act solid when it is deformed rapidly enough. Materials that have low relaxation times flow easily and as such show relatively rapid stress decay.
Definition
The Deborah number is the ratio of two fundamentally different characteristic times. It is defined as the ratio of the time it takes for a material to adjust to applied stresses or deformations to the characteristic time scale of an experiment (or a computer simulation) probing the response of the material:

$$\mathrm{De} = \frac{t_\mathrm{c}}{t_\mathrm{p}}$$

where $t_\mathrm{c}$ stands for the relaxation time and $t_\mathrm{p}$ for the "time of observation", typically taken to be the time scale of the process.
The numerator, relaxation time, is the time needed for a reference amount of deformation to occur under a suddenly applied reference load (a more fluid-like material will therefore require less time to flow, giving a lower Deborah number relative to a solid subjected to the same loading rate).
The denominator, material time, is the amount of time required to reach a given reference strain (a faster loading rate will therefore reach the reference strain sooner, giving a higher Deborah number).
Equivalently, the relaxation time is the time required for the stress induced, by a suddenly applied reference strain, to reduce by a certain reference amount. The relaxation time is actually based on the rate of relaxation that exists at the moment of the suddenly applied load.
This incorporates both the elasticity and viscosity of the material. At lower Deborah numbers, the material behaves in a more fluidlike manner, with an associated Newtonian viscous flow. At higher Deborah numbers, the material behavior enters the non-Newtonian regime, increasingly dominated by elasticity and demonstrating solidlike behavior.
For example, for a Hookean elastic solid the relaxation time $t_\mathrm{c}$ will be infinite, and it will vanish for a Newtonian viscous fluid. For liquid water, $t_\mathrm{c}$ is typically $10^{-12}$ s; for lubricating oils passing through gear teeth at high pressure it is of the order of $10^{-6}$ s, and for polymers undergoing plastics processing the relaxation time will be of the order of a few seconds. Therefore, depending on the situation, these liquids may exhibit elastic properties, departing from purely viscous behavior.
While $\mathrm{De}$ is similar to the Weissenberg number and is often confused with it in technical literature, they have different physical interpretations. The Weissenberg number indicates the degree of anisotropy or orientation generated by the deformation, and is appropriate to describe flows with a constant stretch history, such as simple shear. In contrast, the Deborah number should be used to describe flows with a non-constant stretch history, and physically represents the rate at which elastic energy is stored or released.
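As a concrete illustration of the definition, the short sketch below evaluates $\mathrm{De} = t_\mathrm{c}/t_\mathrm{p}$ for the order-of-magnitude relaxation times quoted above, against an assumed observation time of one second (the process time is an assumption made only for this example):

```python
def deborah_number(t_relaxation: float, t_process: float) -> float:
    """De = t_c / t_p (dimensionless)."""
    return t_relaxation / t_process

T_PROCESS = 1.0  # assumed time of observation, in seconds
for material, t_c in [("liquid water", 1e-12),
                      ("lubricating oil in gear teeth", 1e-6),
                      ("polymer melt in processing", 3.0)]:
    print(f"{material}: De = {deborah_number(t_c, T_PROCESS):.3g}")
```

On this time scale water and oil give $\mathrm{De} \ll 1$ and behave as fluids, while the polymer melt gives $\mathrm{De}$ of order one or more, where elastic, solid-like behavior appears.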
History
The Deborah number was originally proposed by Markus Reiner, a professor at Technion in Israel, who chose the name inspired by a verse in the Bible, "The mountains flowed before the Lord" (הָרִ֥ים נָזְל֖וּ מִפְּנֵ֣י יְהוָ֑ה, hā-rîm nāzəlū mippənê Yahweh), from the song of the prophetess Deborah in the Book of Judges. In his 1964 paper (a reproduction of his after-dinner speech to the Fourth International Congress on Rheology in 1962), Reiner further elucidated the name's origin: “Deborah knew two things. First, that the mountains flow, as everything flows. But, secondly, that they flowed before the Lord, and not before man, for the simple reason that man in his short lifetime cannot see them flowing, while the time of observation of God is infinite. We may therefore well define a nondimensional number the Deborah number D = time of relaxation/time of observation.”
Time-temperature superposition
The Deborah number is particularly useful in conceptualizing the time–temperature superposition principle. Time–temperature superposition uses a reference temperature to shift experimental time scales, allowing temperature-dependent mechanical properties of polymers to be extrapolated. A material at low temperature with a long experimental or relaxation time behaves like the same material at high temperature and short experimental or relaxation time if the Deborah number remains the same. This can be particularly useful when working with materials which relax on a long time scale under a certain temperature. The practical application of this idea arises in the Williams–Landel–Ferry equation, sketched below. Time–temperature superposition avoids the inefficiency of measuring a polymer's behavior over long periods of time at a specified temperature by utilizing the Deborah number.
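A minimal sketch of the Williams–Landel–Ferry shift factor follows. The constants $C_1 \approx 17.44$ and $C_2 \approx 51.6\,\mathrm{K}$ are the frequently quoted "universal" values referenced to the glass transition temperature; they are an assumption here, and real materials require fitted values:

```python
def wlf_log_shift(T: float, T_ref: float,
                  C1: float = 17.44, C2: float = 51.6) -> float:
    """log10 of the WLF time-temperature shift factor a_T.

    Defaults are the often-quoted 'universal' constants referenced to
    the glass transition temperature; fits vary from polymer to polymer.
    """
    dT = T - T_ref
    return -C1 * dT / (C2 + dT)

# Probing a polymer 50 K above the reference temperature:
print(wlf_log_shift(T=423.15, T_ref=373.15))  # about -8.6
```

A shift factor of $10^{-8.6}$ means relaxation is nearly nine orders of magnitude faster at the higher temperature, so a short hot experiment can stand in for an impractically long cold one at the same Deborah number.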
References
Further reading
J.S. Vrentas, C.M. Jarzebski, J.L. Duda (1975) "A Deborah number for diffusion in polymer-solvent systems", AIChE Journal 21(5):894–901, weblink to Wiley Online Library.
Dimensionless numbers of fluid mechanics
Fluid dynamics
Rheology | Deborah number | [
"Chemistry",
"Engineering"
] | 1,026 | [
"Piping",
"Chemical engineering",
"Rheology",
"Fluid dynamics"
] |
84,400 | https://en.wikipedia.org/wiki/Zero-point%20energy | Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity.
The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. One proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel out each other. This idea would be true if supersymmetry were an exact symmetry of nature; however, the Large Hadron Collider at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature".
Etymology and terminology
The term zero-point energy (ZPE) is a translation from the German Nullpunktsenergie. Sometimes used interchangeably with it are the terms zero-point radiation and ground state energy. The term zero-point field (ZPF) can be used when referring to a specific vacuum field, for instance the QED vacuum which specifically deals with quantum electrodynamics (e.g., electromagnetic interactions between photons, electrons and the vacuum) or the QCD vacuum which deals with quantum chromodynamics (e.g., color charge interactions between quarks, gluons and the vacuum). A vacuum can be viewed not as empty space but as the combination of all zero-point fields. In quantum field theory this combination of fields is called the vacuum state, its associated zero-point energy is called the vacuum energy and the average energy value is called the vacuum expectation value (VEV) also called its condensate.
Overview
In classical mechanics all particles can be thought of as having some energy made up of their potential energy and kinetic energy. Temperature, for example, arises from the intensity of random particle motion caused by kinetic energy (known as Brownian motion). As temperature is reduced to absolute zero, it might be thought that all motion ceases and particles come completely to rest. In fact, however, kinetic energy is retained by particles even at the lowest possible temperature. The random motion corresponding to this zero-point energy never vanishes; it is a consequence of the uncertainty principle of quantum mechanics.
The uncertainty principle states that no object can ever have precise values of position and velocity simultaneously. The total energy of a quantum mechanical object (potential and kinetic) is described by its Hamiltonian, which also describes the system as a harmonic oscillator, or wave function, that fluctuates between various energy states (see wave-particle duality). All quantum mechanical systems undergo fluctuations even in their ground state, a consequence of their wave-like nature. The uncertainty principle requires every quantum mechanical system to have a fluctuating zero-point energy greater than the minimum of its classical potential well. This results in motion even at absolute zero. For example, liquid helium does not freeze under atmospheric pressure regardless of temperature due to its zero-point energy.
Given the equivalence of mass and energy expressed by Albert Einstein's $E = mc^2$, any point in space that contains energy can be thought of as having mass to create particles. Modern physics has developed quantum field theory (QFT) to understand the fundamental interactions between matter and forces; it treats every single point of space as a quantum harmonic oscillator. According to QFT the universe is made up of matter fields, whose quanta are fermions (i.e. leptons and quarks), and force fields, whose quanta are bosons (e.g. photons and gluons). All these fields have zero-point energy. Recent experiments support the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions of the zero-point field.
The idea that "empty" space can have an intrinsic energy associated with it, and that there is no such thing as a "true vacuum", is seemingly unintuitive. It is often argued that the entire universe is completely bathed in the zero-point radiation, and as such it can add only some constant amount to calculations. Physical measurements will therefore reveal only deviations from this value. For many practical calculations zero-point energy is dismissed by fiat in the mathematical model as a term that has no physical effect. Such treatment causes problems, however, as in Einstein's theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant. For decades most physicists assumed that there was some undiscovered fundamental principle that would remove the infinite zero-point energy and make it completely vanish. If the vacuum has no intrinsic, absolute value of energy it will not gravitate. It was believed that as the universe expands from the aftermath of the Big Bang, the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe; galaxies and all matter in the universe should begin to decelerate. This possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating, meaning empty space does indeed have some intrinsic energy. The discovery of dark energy is best explained by zero-point energy, though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problem.
Many physical effects attributed to zero-point energy have been experimentally verified, such as spontaneous emission, Casimir force, Lamb shift, magnetic moment of the electron and Delbrück scattering. These effects are usually called "radiative corrections". In more complex nonlinear theories (e.g. QCD) zero-point energy can give rise to a variety of complex phenomena such as multiple stable states, symmetry breaking, chaos and emergence. Active areas of research include the effects of virtual particles, quantum entanglement, the difference (if any) between inertial and gravitational mass, variation in the speed of light, a reason for the observed value of the cosmological constant and the nature of dark energy.
History
Early aether theories
Zero-point energy evolved from historical ideas about the vacuum. To Aristotle the vacuum was τὸ κενόν, "the empty"; i.e., space independent of body. He believed this concept violated basic physical principles and asserted that the elements of fire, air, earth, and water were not made of atoms, but were continuous. To the atomists the concept of emptiness had absolute character: it was the distinction between existence and nonexistence. Debate about the characteristics of the vacuum was largely confined to the realm of philosophy, and it was not until much later, with the beginning of the Renaissance, that Otto von Guericke invented the first vacuum pump and the first testable scientific ideas began to emerge. It was thought that a totally empty volume of space could be created by simply removing all gases. This was the first generally accepted concept of the vacuum.
Late in the 19th century, however, it became apparent that the evacuated region still contained thermal radiation. The existence of the aether as a substitute for a true void was the most prevalent theory of the time. According to the successful electromagnetic aether theory based upon Maxwell's electrodynamics, this all-encompassing aether was endowed with energy and hence very different from nothingness. The fact that electromagnetic and gravitational phenomena were transmitted in empty space was considered evidence that their associated aethers were part of the fabric of space itself. However, Maxwell himself noted that for the most part these aethers were invented ad hoc.
Moreover, the results of the Michelson–Morley experiment in 1887 were the first strong evidence that the then-prevalent aether theories were seriously flawed, and initiated a line of research that eventually led to special relativity, which ruled out the idea of a stationary aether altogether. To scientists of the period, it seemed that a true vacuum in space might be created by cooling and thus eliminating all radiation or energy. From this idea evolved the second concept of achieving a real vacuum: cool a region of space down to absolute zero temperature after evacuation. Absolute zero was technically impossible to achieve in the 19th century, so the debate remained unresolved.
Second quantum theory
In 1900, Max Planck derived the average energy $\varepsilon$ of a single energy radiator, e.g., a vibrating atomic unit, as a function of absolute temperature:

$$\varepsilon = \frac{h\nu}{e^{\frac{h\nu}{kT}} - 1}$$

where $h$ is the Planck constant, $\nu$ is the frequency, $k$ is the Boltzmann constant, and $T$ is the absolute temperature. The zero-point energy makes no contribution to Planck's original law, as its existence was unknown to Planck in 1900.
The concept of zero-point energy was developed by Max Planck in Germany in 1911 as a corrective term added to a zero-grounded formula developed in his original quantum theory in 1900.
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of $h\nu$. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one $\frac{h\nu}{2}$, as an additional term dependent on the frequency $\nu$, which was greater than zero (where $h$ is the Planck constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be:

$$\varepsilon = \frac{h\nu}{2} + \frac{h\nu}{e^{\frac{h\nu}{kT}} - 1}$$
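A quick numerical check of the limiting behaviour of these two expressions is sketched below; the frequency is an arbitrary optical-range value, assumed only for illustration:

```python
import math

H = 6.62607015e-34  # Planck constant, J*s
K = 1.380649e-23    # Boltzmann constant, J/K

def planck_1900(nu: float, T: float) -> float:
    """Planck's original (1900) average oscillator energy, no zero point."""
    return H * nu / math.expm1(H * nu / (K * T))

def planck_second_theory(nu: float, T: float) -> float:
    """The 1911-13 form: the same expression plus the h*nu/2 residual."""
    return 0.5 * H * nu + planck_1900(nu, T)

nu = 5e14  # Hz, an illustrative optical frequency
for T in (3000.0, 300.0, 100.0):
    print(T, planck_1900(nu, T), planck_second_theory(nu, T))
```

As the temperature is lowered the 1900 expression tends to zero, while the second-theory energy approaches the residual value $\frac{h\nu}{2}$, which is exactly the zero-point term discussed above.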
Soon, the idea of zero-point energy attracted the attention of Albert Einstein and his assistant Otto Stern. In 1913 they published a paper that attempted to prove the existence of zero-point energy by calculating the specific heat of hydrogen gas and comparing it with the experimental data. However, after assuming they had succeeded, they retracted support for the idea shortly after publication because they found Planck's second theory may not apply to their example. In a letter to Paul Ehrenfest of the same year Einstein declared zero-point energy "dead as a doornail". Zero-point energy was also invoked by Peter Debye, who noted that zero-point energy of the atoms of a crystal lattice would cause a reduction in the intensity of the diffracted radiation in X-ray diffraction even as the temperature approached absolute zero. In 1916 Walther Nernst proposed that empty space was filled with zero-point electromagnetic radiation. With the development of general relativity Einstein found the energy density of the vacuum to contribute towards a cosmological constant in order to obtain static solutions to his field equations; the idea that empty space, or the vacuum, could have some intrinsic energy associated with it had returned, with Einstein stating as much in 1920.
Kurt Bennewitz and Francis Simon (1923), who worked at Walther Nernst's laboratory in Berlin, studied the melting process of chemicals at low temperatures. Their calculations of the melting points of hydrogen, argon and mercury led them to conclude that the results provided evidence for a zero-point energy. Moreover, they suggested correctly, as was later verified by Simon (1934), that this quantity was responsible for the difficulty in solidifying helium even at absolute zero. In 1924 Robert Mulliken provided direct evidence for the zero-point energy of molecular vibrations by comparing the band spectrum of 10BO and 11BO: the isotopic difference in the transition frequencies between the ground vibrational states of two different electronic levels would vanish if there were no zero-point energy, in contrast to the observed spectra. Then just a year later in 1925, with the development of matrix mechanics in Werner Heisenberg's article "Quantum theoretical re-interpretation of kinematic and mechanical relations", the zero-point energy was derived from quantum mechanics.
In 1913 Niels Bohr had proposed what is now called the Bohr model of the atom, but despite this it remained a mystery as to why electrons do not fall into their nuclei. According to classical ideas, the fact that an accelerating charge loses energy by radiating implied that an electron should spiral into the nucleus and that atoms should not be stable. This problem of classical mechanics was nicely summarized by James Hopwood Jeans in 1915: "There would be a very real difficulty in supposing that the (force) law held down to the zero values of $r$. For the forces between two charges at zero distance would be infinite; we should have charges of opposite sign continually rushing together and, when once together, no force would be adequate to separate them; the matter in the universe would tend to shrink into nothing or to diminish indefinitely in size." The resolution to this puzzle came in 1926 when Erwin Schrödinger introduced the Schrödinger equation. This equation explained the new, non-classical fact that an electron confined to be close to a nucleus would necessarily have a large kinetic energy, so that the minimum total energy (kinetic plus potential) actually occurs at some positive separation rather than at zero separation; in other words, zero-point energy is essential for atomic stability.
Quantum field theory and beyond
In 1926, Pascual Jordan published the first attempt to quantize the electromagnetic field. In a joint paper with Max Born and Werner Heisenberg he considered the field inside a cavity as a superposition of quantum harmonic oscillators. In his calculation he found that in addition to the "thermal energy" of the oscillators there also had to exist an infinite zero-point energy term. He was able to obtain the same fluctuation formula that Einstein had obtained in 1909. However, Jordan did not think that his infinite zero-point energy term was "real", writing to Einstein that "it is just a quantity of the calculation having no direct physical meaning". Jordan found a way to get rid of the infinite term, publishing a joint work with Pauli in 1928, performing what has been called "the first infinite subtraction, or renormalisation, in quantum field theory".
Building on the work of Heisenberg and others, Paul Dirac's theory of emission and absorption (1927) was the first application of the quantum theory of radiation. Dirac's work was seen as crucially important to the emerging field of quantum mechanics; it dealt directly with the process in which "particles" are actually created: spontaneous emission. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. The theory showed that spontaneous emission depends upon the zero-point energy fluctuations of the electromagnetic field in order to get started. In a process in which a photon is annihilated (absorbed), the photon can be thought of as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state, a picture Dirac himself described.
Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. This view was popularized by Victor Weisskopf in 1935.
This view was also later supported by Theodore Welton (1948), who argued that spontaneous emission "can be thought of as forced emission taking place under the action of the fluctuating field". This new theory, which Dirac coined quantum electrodynamics (QED), predicted a fluctuating zero-point or "vacuum" field existing even in the absence of sources.
Throughout the 1940s improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and measurement of the magnetic moment of the electron. Discrepancies between these experiments and Dirac's theory led to the idea of incorporating renormalisation into QED to deal with zero-point infinities. Renormalization was originally developed by Hans Kramers and also Victor Weisskopf (1936), and first successfully applied to calculate a finite value for the Lamb shift by Hans Bethe (1947). As with spontaneous emission, these effects can in part be understood with interactions with the zero-point field. But in light of renormalisation being able to remove some zero-point infinities from calculations, not all physicists were comfortable attributing zero-point energy any physical meaning, viewing it instead as a mathematical artifact that might one day be eliminated. In Wolfgang Pauli's 1945 Nobel lecture he made clear his opposition to the idea of zero-point energy stating "It is clear that this zero-point energy has no physical reality".
In 1948 Hendrik Casimir showed that one consequence of the zero-point field is an attractive force between two uncharged, perfectly conducting parallel plates, the so-called Casimir effect. At the time, Casimir was studying the properties of colloidal solutions. These are viscous materials, such as paint and mayonnaise, that contain micron-sized particles in a liquid matrix. The properties of such solutions are determined by Van der Waals forces – short-range, attractive forces that exist between neutral atoms and molecules. One of Casimir's colleagues, Theo Overbeek, realized that the theory that was used at the time to explain Van der Waals forces, which had been developed by Fritz London in 1930, did not properly explain the experimental measurements on colloids. Overbeek therefore asked Casimir to investigate the problem. Working with Dirk Polder, Casimir discovered that the interaction between two neutral molecules could be correctly described only if the fact that light travels at a finite speed was taken into account. Soon afterwards, following a conversation with Bohr about zero-point energy, Casimir noticed that this result could be interpreted in terms of vacuum fluctuations. He then asked himself what would happen if there were two mirrors – rather than two molecules – facing each other in a vacuum. It was this work that led to his prediction of an attractive force between reflecting plates. The work by Casimir and Polder opened up the way to a unified theory of van der Waals and Casimir forces and a smooth continuum between the two phenomena. This was done by Lifshitz (1956) in the case of plane parallel dielectric plates. The generic name for both van der Waals and Casimir forces is dispersion forces, because both of them are caused by dispersions of the operator of the dipole moment. Relativistic effects become dominant at separations of the order of a hundred nanometers.
In 1951 Herbert Callen and Theodore Welton proved the quantum fluctuation-dissipation theorem (FDT) which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. FDT has been shown to be true experimentally under certain quantum, non-classical, conditions.
In 1963 the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave nonintuitive predictions, such as that an atom's spontaneous emission could be driven by a field of effectively constant frequency (Rabi frequency). In the 1970s experiments were being performed to test aspects of quantum optics and showed that the rate of spontaneous emission of an atom could be controlled using reflecting surfaces. These results were at first regarded with suspicion in some quarters: it was argued that no modification of a spontaneous emission rate would be possible; after all, how can the emission of a photon be affected by an atom's environment when the atom can only "see" its environment by emitting a photon in the first place? These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections. Spontaneous emission can be suppressed (or "inhibited") or amplified. Amplification was first predicted by Purcell in 1946 (the Purcell effect) and has been experimentally verified. This phenomenon can be understood, partly, in terms of the action of the vacuum field on the atom.
Uncertainty principle
Zero-point energy is fundamentally related to the Heisenberg uncertainty principle. Roughly speaking, the uncertainty principle states that complementary variables (such as a particle's position and momentum, or a field's value and derivative at a point in space) cannot simultaneously be specified precisely by any given quantum state. In particular, there cannot exist a state in which the system simply sits motionless at the bottom of its potential well, for then its position and momentum would both be completely determined to arbitrarily great precision. Therefore, the lowest-energy state (the ground state) of the system must have a distribution in position and momentum that satisfies the uncertainty principle, which implies its energy must be greater than the minimum of the potential well.
Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving its energy) can be approximated as a quantum harmonic oscillator,

$$\hat{H} = V_0 + \frac{1}{2} k \left(\hat{x} - x_0\right)^2 + \frac{\hat{p}^2}{2m}$$

where $V_0$ is the minimum of the classical potential well.

The uncertainty principle tells us that

$$\sqrt{\left\langle \left(\hat{x} - x_0\right)^2 \right\rangle}\, \sqrt{\left\langle \hat{p}^2 \right\rangle} \geq \frac{\hbar}{2}$$

making the expectation values of the kinetic and potential terms above satisfy

$$\left\langle \frac{\hat{p}^2}{2m} \right\rangle \left\langle \frac{1}{2} k \left(\hat{x} - x_0\right)^2 \right\rangle \geq \left(\frac{\hbar}{4}\right)^2 \frac{k}{m}$$

The expectation value of the energy must therefore be at least

$$\left\langle \hat{H} \right\rangle \geq V_0 + \frac{\hbar}{2}\sqrt{\frac{k}{m}} = V_0 + \frac{\hbar\omega}{2}$$

where $\omega = \sqrt{k/m}$ is the angular frequency at which the system oscillates.

A more thorough treatment, showing that the energy of the ground state actually saturates this bound and is exactly $E_0 = V_0 + \frac{\hbar\omega}{2}$, requires solving for the ground state of the system.
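A back-of-the-envelope evaluation of this bound is sketched below; the spring constant and mass are round numbers of roughly molecular scale, assumed purely for illustration:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

def zero_point_energy(k: float, m: float) -> float:
    """E0 = (hbar/2) * sqrt(k/m) for a harmonic well, taking V0 = 0."""
    return 0.5 * HBAR * math.sqrt(k / m)

# Assumed values, not data for any specific system: k ~ 500 N/m and
# m ~ 1e-27 kg are typical orders of magnitude for a light diatomic bond.
E0 = zero_point_energy(k=500.0, m=1.0e-27)
print(f"E0 = {E0:.3e} J = {E0 / EV:.2f} eV")  # roughly 0.23 eV
```

An energy of a few tenths of an electronvolt is comparable to thermal energies at several thousand kelvin, which is why molecular zero-point motion survives all the way down to absolute zero.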
Atomic physics
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, as denoted by $\nu$ above, using angular frequency, denoted with $\omega$ and defined by $\omega = 2\pi\nu$. This leads to a convention of writing the Planck constant with a bar through its top ($\hbar$) to denote the quantity $\frac{h}{2\pi}$. In these terms, an example of zero-point energy is the above $E = \frac{\hbar\omega}{2}$ associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator which acts non-trivially on a ground state and commutes with the Hamiltonian of the system.
According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.
The wave function of the ground state of a particle in a one-dimensional well is a half-period sine wave which goes to zero at the two edges of the well. The energy of the particle is given by:

$$E_n = \frac{h^2 n^2}{8 m L^2}$$

where $h$ is the Planck constant, $m$ is the mass of the particle, $n$ is the energy state ($n = 1$ corresponds to the ground-state energy), and $L$ is the width of the well.
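For a sense of scale, the snippet below evaluates this formula for an electron in a well one nanometre wide; the particle and the width are illustrative assumptions, not values from the text:

```python
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def box_energy(n: int, L: float, m: float = M_E) -> float:
    """E_n = h^2 n^2 / (8 m L^2) for a particle in a 1-D infinite well."""
    return H ** 2 * n ** 2 / (8.0 * m * L ** 2)

# An electron confined to a 1 nm well can never have less than ~0.38 eV:
print(f"n=1: {box_energy(1, 1e-9) / EV:.3f} eV")
```

The $n = 1$ value is the zero-point energy of the confined particle: it cannot be removed by cooling, only by widening the well.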
Quantum field theory
In quantum field theory (QFT), the fabric of "empty" space is visualized as consisting of fields, with the field at every point in space and time being a quantum harmonic oscillator, with neighboring oscillators interacting with each other. According to QFT the universe is made up of matter fields whose quanta are fermions (e.g. electrons and quarks), force fields whose quanta are bosons (i.e. photons and gluons) and a Higgs field whose quantum is the Higgs boson. The matter and force fields have zero-point energy. A related term is zero-point field (ZPF), which is the lowest energy state of a particular field. The vacuum can be viewed not as empty space, but as the combination of all zero-point fields.
In QFT the zero-point energy of the vacuum state is called the vacuum energy and the average expectation value of the Hamiltonian is called the vacuum expectation value (also called condensate or simply VEV). The QED vacuum is a part of the vacuum state which specifically deals with quantum electrodynamics (e.g. electromagnetic interactions between photons, electrons and the vacuum) and the QCD vacuum deals with quantum chromodynamics (e.g. color charge interactions between quarks, gluons and the vacuum). Recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum, and that all properties of matter are merely vacuum fluctuations arising from interactions with the zero-point field.
Each point in space makes a contribution of $E = \frac{\hbar\omega}{2}$, resulting in a calculation of infinite zero-point energy in any finite volume; this is one reason renormalization is needed to make sense of quantum field theories. In cosmology, the vacuum energy is one possible explanation for the cosmological constant and the source of dark energy.
Scientists are not in agreement about how much energy is contained in the vacuum. Quantum mechanics requires the energy to be large, as Paul Dirac claimed it is: like a sea of energy. Other scientists specializing in general relativity require the energy to be small enough for the curvature of space to agree with observed astronomy. The Heisenberg uncertainty principle allows the energy to be as large as needed to promote quantum actions for a brief moment of time, even if the average energy is small enough to satisfy relativity and flat space. To cope with disagreements, the vacuum energy is described as a virtual energy potential of positive and negative energy.
In quantum perturbation theory, it is sometimes said that the contribution of one-loop and multi-loop Feynman diagrams to elementary particle propagators are the contribution of vacuum fluctuations, or the zero-point energy to the particle masses.
Quantum electrodynamic vacuum
The oldest and best known quantized force field is the electromagnetic field. Maxwell's equations have been superseded by quantum electrodynamics (QED). By considering the zero-point energy that arises from QED it is possible to gain a characteristic understanding of zero-point energy that arises not just through electromagnetic interactions but in all quantum field theories.
Redefining the zero of energy
In the quantum theory of the electromagnetic field, classical wave amplitudes $\alpha$ and $\alpha^*$ are replaced by operators $a$ and $a^\dagger$ that satisfy:

$$\left[a, a^\dagger\right] = 1$$
The classical quantity $|\alpha|^2$ appearing in the classical expression for the energy of a field mode is replaced in quantum theory by the photon number operator $a^\dagger a$. The fact that:

$$\left[a^\dagger a, a\right] \neq 0$$

implies that quantum theory does not allow states of the radiation field for which the photon number and a field amplitude can be precisely defined, i.e., we cannot have simultaneous eigenstates for $a^\dagger a$ and $a$. The reconciliation of wave and particle attributes of the field is accomplished via the association of a probability amplitude with a classical mode pattern. The calculation of field modes is an entirely classical problem, while the quantum properties of the field are carried by the mode "amplitudes" $a$ and $a^\dagger$ associated with these classical modes.
The zero-point energy of the field arises formally from the non-commutativity of $a$ and $a^\dagger$. This is true for any harmonic oscillator: the zero-point energy $\frac{\hbar\omega}{2}$ appears when we write the Hamiltonian:

$$H = \frac{\hbar\omega}{2}\left(a a^\dagger + a^\dagger a\right) = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right)$$
It is often argued that the entire universe is completely bathed in the zero-point electromagnetic field, and as such it can add only some constant amount to expectation values. Physical measurements will therefore reveal only deviations from the vacuum state. Thus the zero-point energy can be dropped from the Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion. Thus we can choose to declare by fiat that the ground state has zero energy, and a field Hamiltonian, for example, can be replaced by:

$$H_F = \sum_k \hbar\omega_k\, a^\dagger_k a_k$$

without affecting any physical predictions of the theory. The new Hamiltonian is said to be normally ordered (or Wick ordered) and is denoted by a double-dot symbol. The normally ordered Hamiltonian is denoted $:H_F:$, i.e.:

$$:H_F: \;=\; \sum_k \hbar\omega_k\, a^\dagger_k a_k$$

In other words, within the normal ordering symbol we can commute $a$ and $a^\dagger$. Since zero-point energy is intimately connected to the non-commutativity of $a$ and $a^\dagger$, the normal ordering procedure eliminates any contribution from the zero-point field. This is especially reasonable in the case of the field Hamiltonian, since the zero-point term merely adds a constant energy which can be eliminated by a simple redefinition for the zero of energy. Moreover, this constant energy in the Hamiltonian obviously commutes with $a$ and $a^\dagger$ and so cannot have any effect on the quantum dynamics described by the Heisenberg equations of motion.
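The effect of normal ordering on the vacuum expectation value can be checked numerically on a truncated Fock space; the sketch below sets $\hbar\omega = 1$ and is an illustration of the algebra above, not of any particular physical mode:

```python
import numpy as np

N = 20  # dimension of the truncated Fock space
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
ad = a.conj().T                             # creation operator

H = 0.5 * (a @ ad + ad @ a)  # symmetric form, contains the 1/2 term
H_normal = ad @ a            # normally ordered form, zero point removed

vacuum = np.zeros(N)
vacuum[0] = 1.0
print(vacuum @ H @ vacuum)         # 0.5 -> the zero-point energy hbar*omega/2
print(vacuum @ H_normal @ vacuum)  # 0.0 -> removed by normal ordering
```

Both operators generate the same dynamics for $a$ and $a^\dagger$, since they differ only by a multiple of the identity; only the constant offset in the energy changes.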
However, things are not quite that simple. The zero-point energy cannot be eliminated by dropping its energy from the Hamiltonian: When we do this and solve the Heisenberg equation for a field operator, we must include the vacuum field, which is the homogeneous part of the solution for the field operator. In fact we can show that the vacuum field is essential for the preservation of the commutators and the formal consistency of QED. When we calculate the field energy we obtain not only a contribution from particles and forces that may be present but also a contribution from the vacuum field itself i.e. the zero-point field energy. In other words, the zero-point energy reappears even though we may have deleted it from the Hamiltonian.
Electromagnetic field in free space
From Maxwell's equations, the electromagnetic energy of a "free" field, i.e. one with no sources, is described by:

$$H_F = \frac{1}{8\pi}\int d^3r \left(\mathbf{E}^2 + \mathbf{B}^2\right)$$
We introduce the "mode function" $\mathbf{A}_0(\mathbf{r})$ that satisfies the Helmholtz equation:

$$\nabla^2 \mathbf{A}_0 + k^2 \mathbf{A}_0 = 0$$

where $k = \frac{\omega}{c}$, and assume it is normalized such that:

$$\int d^3r\, \left|\mathbf{A}_0(\mathbf{r})\right|^2 = 1$$

We wish to "quantize" the electromagnetic energy of free space for a multimode field. The field intensity of free space should be independent of position such that $\left|\mathbf{A}_0(\mathbf{r})\right|^2$ should be independent of $\mathbf{r}$ for each mode of the field. The mode function satisfying these conditions is:

$$\mathbf{A}_0(\mathbf{r}) = \mathbf{e}\, e^{i\mathbf{k}\cdot\mathbf{r}}$$

where $\mathbf{k}\cdot\mathbf{e} = 0$ in order to have the transversality condition $\nabla\cdot\mathbf{A} = 0$ satisfied for the Coulomb gauge in which we are working.
To achieve the desired normalization we pretend space is divided into cubes of volume $V = L^3$ and impose on the field the periodic boundary condition:

$$\mathbf{A}(x + L,\, y + L,\, z + L,\, t) = \mathbf{A}(x, y, z, t)$$

or equivalently

$$\mathbf{k} = \frac{2\pi}{L}\left(n_x, n_y, n_z\right)$$

where $n_x$, $n_y$, $n_z$ can assume any integer value. This allows us to consider the field in any one of the imaginary cubes and to define the mode function:

$$\mathbf{A}_{\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{V}}\, \mathbf{e}_{\mathbf{k}}\, e^{i\mathbf{k}\cdot\mathbf{r}}$$

which satisfies the Helmholtz equation, transversality, and the "box normalization":

$$\int_V d^3r\, \left|\mathbf{A}_{\mathbf{k}}(\mathbf{r})\right|^2 = 1$$

where $\mathbf{e}_{\mathbf{k}}$ is chosen to be a unit vector which specifies the polarization of the field mode. The condition $\mathbf{k}\cdot\mathbf{e}_{\mathbf{k}} = 0$ means that there are two independent choices of $\mathbf{e}_{\mathbf{k}}$, which we call $\mathbf{e}_{\mathbf{k}1}$ and $\mathbf{e}_{\mathbf{k}2}$ where $\mathbf{e}_{\mathbf{k}1}\cdot\mathbf{e}_{\mathbf{k}2} = 0$ and $\mathbf{e}_{\mathbf{k}1}^2 = \mathbf{e}_{\mathbf{k}2}^2 = 1$. Thus we define the mode functions:

$$\mathbf{A}_{\mathbf{k}\lambda}(\mathbf{r}) = \frac{1}{\sqrt{V}}\, \mathbf{e}_{\mathbf{k}\lambda}\, e^{i\mathbf{k}\cdot\mathbf{r}}, \qquad \lambda = 1, 2$$
in terms of which the vector potential of a single mode becomes:

$$\mathbf{A}_{\mathbf{k}\lambda}(\mathbf{r}, t) = \left(\frac{2\pi\hbar c^2}{\omega_k V}\right)^{1/2} \left[a_{\mathbf{k}\lambda}(t)\, e^{i\mathbf{k}\cdot\mathbf{r}} + a^\dagger_{\mathbf{k}\lambda}(t)\, e^{-i\mathbf{k}\cdot\mathbf{r}}\right] \mathbf{e}_{\mathbf{k}\lambda}$$

or:

$$\mathbf{A}_{\mathbf{k}\lambda}(\mathbf{r}, t) = \left(\frac{2\pi\hbar c^2}{\omega_k V}\right)^{1/2} \left[a_{\mathbf{k}\lambda}\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega_k t)} + a^\dagger_{\mathbf{k}\lambda}\, e^{-i(\mathbf{k}\cdot\mathbf{r} - \omega_k t)}\right] \mathbf{e}_{\mathbf{k}\lambda}$$

where $\omega_k = kc$, and $a_{\mathbf{k}\lambda}$ and $a^\dagger_{\mathbf{k}\lambda}$ are photon annihilation and creation operators for the mode with wave vector $\mathbf{k}$ and polarization $\lambda$. This gives the vector potential for a plane wave mode of the field. The condition on $\mathbf{k}$ shows that there are infinitely many such modes. The linearity of Maxwell's equations allows us to write:

$$\mathbf{A}(\mathbf{r}, t) = \sum_{\mathbf{k}\lambda} \left(\frac{2\pi\hbar c^2}{\omega_k V}\right)^{1/2} \left[a_{\mathbf{k}\lambda}\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega_k t)} + a^\dagger_{\mathbf{k}\lambda}\, e^{-i(\mathbf{k}\cdot\mathbf{r} - \omega_k t)}\right] \mathbf{e}_{\mathbf{k}\lambda}$$

for the total vector potential in free space.
for the total vector potential in free space. Using the fact that:
we find the field Hamiltonian is:
This is the Hamiltonian for an infinite number of uncoupled harmonic oscillators. Thus different modes of the field are independent and satisfy the commutation relations:
Clearly the least eigenvalue for $H_F$ is:

$$\sum_{\mathbf{k}\lambda} \frac{\hbar\omega_k}{2}$$

This state describes the zero-point energy of the vacuum. It appears that this sum is divergent – in fact highly divergent, as putting in the density factor

$$\rho_{\mathbf{k}}\, dk = \frac{V k^2\, dk}{\pi^2}$$

shows. The summation becomes approximately the integral:

$$\frac{\hbar V}{2\pi^2 c^3}\int \omega^3\, d\omega$$

for high values of $\omega$. It diverges proportional to $\omega^4$ for large $\omega$.
There are two separate questions to consider. First, is the divergence a real one such that the zero-point energy really is infinite? If we consider a volume contained by perfectly conducting walls, very high frequencies can only be contained by taking more and more perfect conduction. No actual method of containing the high frequencies is possible. Such modes will not be stationary in our box and thus not countable in the stationary energy content. So from this physical point of view the above sum should only extend to those frequencies which are countable; a cut-off energy is thus eminently reasonable. However, on the scale of a "universe", questions of general relativity must be included. Suppose even the boxes could be reproduced, fit together and closed nicely by curving spacetime. Then exact conditions for running waves may be possible. However the very high frequency quanta will still not be contained. As per John Wheeler's "geons", these will leak out of the system. So again a cut-off is permissible, almost necessary. The question here becomes one of consistency, since the very high energy quanta will act as a mass source and start curving the geometry.
This leads to the second question. Divergent or not, finite or infinite, is the zero-point energy of any physical significance? Ignoring the whole zero-point energy is often encouraged for all practical calculations. The reason for this is that energies are not typically defined by an arbitrary data point, but rather changes in data points, so adding or subtracting a constant (even if infinite) should be allowed. However this is not the whole story: in reality energy is not so arbitrarily defined. In general relativity the seat of the curvature of spacetime is the energy content, and there the absolute amount of energy has real physical meaning. There is no such thing as an arbitrary additive constant with density of field energy. Energy density curves space, and an increase in energy density produces an increase of curvature. Furthermore, the zero-point energy density has other physical consequences, e.g. the Casimir effect, contribution to the Lamb shift, or the anomalous magnetic moment of the electron; it is clear it is not just a mathematical constant or artifact that can be cancelled out.
Necessity of the vacuum field in QED
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state in which $n_{\mathbf{k}\lambda} = 0$ for all modes $\{\mathbf{k}, \lambda\}$. The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum state energy described by $\sum_{\mathbf{k}\lambda} \frac{\hbar\omega_k}{2}$ is infinite. We can make the replacement:

$$\sum_{\mathbf{k}\lambda} \to 2\, \frac{V}{(2\pi)^3}\int d^3k$$

the zero-point energy density is:

$$\frac{1}{V}\sum_{\mathbf{k}\lambda} \frac{\hbar\omega_k}{2} = \int_0^\infty \frac{\hbar\omega^3}{2\pi^2 c^3}\, d\omega$$

or in other words the spectral energy density of the vacuum field:

$$\rho_0(\omega) = \frac{\hbar\omega^3}{2\pi^2 c^3}$$

The zero-point energy density in the frequency range from $\omega_1$ to $\omega_2$ is therefore:

$$\int_{\omega_1}^{\omega_2} \rho_0(\omega)\, d\omega = \frac{\hbar}{8\pi^2 c^3}\left(\omega_2^4 - \omega_1^4\right)$$
This can be large even in relatively narrow "low frequency" regions of the spectrum. In the optical region from 400 to 700 nm, for instance, the above equation yields around 220 erg/cm³.
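As a quick numerical check of the quoted figure, the sketch below evaluates $\hbar(\omega_2^4 - \omega_1^4)/8\pi^2c^3$ in CGS units for the 400–700 nm window (the constants and the function name are this example's own, not from the original text):

```python
import math

hbar = 1.0546e-27  # erg*s (CGS units)
c = 2.9979e10      # cm/s

def zero_point_density(wavelength_min_nm, wavelength_max_nm):
    """Zero-point energy density (erg/cm^3) between two wavelengths."""
    w_hi = 2 * math.pi * c / (wavelength_min_nm * 1e-7)  # 1 nm = 1e-7 cm
    w_lo = 2 * math.pi * c / (wavelength_max_nm * 1e-7)
    return hbar * (w_hi**4 - w_lo**4) / (8 * math.pi**2 * c**3)

print(zero_point_density(400, 700))  # ~220 erg/cm^3, as stated above
```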
We showed in the above section that the zero-point energy can be eliminated from the Hamiltonian by the normal ordering prescription. However, this elimination does not mean that the vacuum field has been rendered unimportant or without physical consequences. To illustrate this point we consider a linear dipole oscillator in the vacuum. The Hamiltonian for the oscillator plus the field with which it interacts is (in Gaussian units, with $\mathbf{A}$ the vector potential at the dipole and $H_F$ the Hamiltonian of the free field):

$$H = \frac{1}{2m}\left[\mathbf{p} - \frac{e}{c}\mathbf{A}\right]^2 + \frac{1}{2}m\omega_0^2 x^2 + H_F$$
This has the same form as the corresponding classical Hamiltonian, and the Heisenberg equations of motion for the oscillator and the field are formally the same as their classical counterparts. For instance, the Heisenberg equations for the coordinate $x$ and the canonical momentum $p$ of the oscillator are:

$$\dot{x} = \frac{1}{i\hbar}[x, H] = \frac{1}{m}\left(p - \frac{e}{c}A\right), \qquad \dot{p} = \frac{1}{i\hbar}[p, H] = -m\omega_0^2 x$$

or:

$$m\ddot{x} = -m\omega_0^2 x - \frac{e}{c}\frac{dA}{dt}$$

since the rate of change of the vector potential in the frame of the moving charge is given by the convective derivative

$$\frac{dA}{dt} = \frac{\partial A}{\partial t} + (\dot{\mathbf{x}}\cdot\nabla)A$$

For nonrelativistic motion we may neglect the magnetic force and replace the expression for $m\ddot{x}$ by:

$$m\ddot{x} = -m\omega_0^2 x + eE(0, t)$$
Above we have made the electric dipole approximation, in which the spatial dependence of the field is neglected. The Heisenberg equation for the field annihilation operators $a_k$ is found similarly from the Hamiltonian to be:
in the electric dipole approximation.
In deriving these equations for $x$, $p$, and $a_k$ we have used the fact that equal-time particle and field operators commute. This follows from the assumption that particle and field operators commute at some time (say, $t = 0$) when the matter-field interaction is presumed to begin, together with the fact that a Heisenberg-picture operator $A(t)$ evolves in time as $A(t) = U^\dagger(t)\,A(0)\,U(t)$, where $U(t)$ is the time evolution operator satisfying

$$i\hbar\frac{\partial U}{\partial t} = H\,U, \qquad U(0) = 1$$
Alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the Hamiltonian, just as the corresponding Poisson brackets in classical theory must vanish in order to generate the correct Hamilton equations. The formal solution of the field equation is:

$$a_k(t) = a_k(0)e^{-i\omega_k t} + \int_0^t dt'\,g_k(t')\,e^{-i\omega_k(t - t')}$$

where $g_k(t)$ stands for the dipole source term driving the mode.
and therefore the equation for $m\ddot{x}$ may be written:

$$m\ddot{x} = -m\omega_0^2 x + e\left[E_0(t) + E_{RR}(t)\right]$$

where $E_0(t)$ is the part of the electric field arising from the free-field operators $a_k(0)e^{-i\omega_k t}$, and $E_{RR}(t)$ is the part arising from the source term of $a_k(t)$. It can be shown that for the radiation reaction field, if the mass $m$ is regarded as the "observed" mass, then we can take

$$eE_{RR}(t) = \frac{2e^2}{3c^3}\dddot{x}$$
The total field acting on the dipole has two parts, $E_0(t)$ and $E_{RR}(t)$. $E_0(t)$ is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution, at the position of the dipole, of the wave equation

$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{E} = 0$$

satisfied by the field in the (source free) vacuum. For this reason $E_0(t)$ is often referred to as the "vacuum field", although it is of course a Heisenberg-picture operator acting on whatever state of the field happens to be appropriate at $t = 0$. $E_{RR}(t)$ is the source field, the field generated by the dipole and acting on the dipole.
Using the above equation for $eE_{RR}(t)$ we obtain an equation for the Heisenberg-picture operator $x(t)$ that is formally the same as the classical equation for a linear dipole oscillator:

$$\ddot{x} + \omega_0^2 x - \tau\dddot{x} = \frac{e}{m}E_0(t)$$

where $\tau = 2e^2/3mc^3$. In this instance we have considered a dipole in the vacuum, without any "external" field acting on it. The role of the external field in the above equation is played by the vacuum electric field acting on the dipole.
Classically, a dipole in the vacuum is not acted upon by any "external" field: if there are no sources other than the dipole itself, then the only field acting on the dipole is its own radiation reaction field. In quantum theory, however, there is always an "external" field, namely the source-free or vacuum field $E_0(t)$.
According to our earlier equation for $a_k(t)$, the free field is the only field in existence at $t = 0$, the time at which the interaction between the dipole and the field is "switched on". The state vector of the dipole-field system at $t = 0$ is therefore of the form

$$|\Psi\rangle = |\mathrm{vac}\rangle|\psi_D\rangle$$

where $|\mathrm{vac}\rangle$ is the vacuum state of the field and $|\psi_D\rangle$ is the initial state of the dipole oscillator. The expectation value of the free field is therefore at all times equal to zero:

$$\langle E_0(t)\rangle = \langle\Psi|E_0(t)|\Psi\rangle = 0$$

since $a_k(0)|\mathrm{vac}\rangle = 0$. However, the energy density associated with the free field is infinite:

$$\int_0^\infty \rho_0(\omega)\,d\omega = \frac{\hbar}{2\pi^2 c^3}\int_0^\infty \omega^3\,d\omega \to \infty$$
The important point of this is that the zero-point field energy $\sum_k \tfrac{1}{2}\hbar\omega_k$ does not affect the Heisenberg equation for $a_k$, since it is a c-number or constant (i.e. an ordinary number rather than an operator) and commutes with $a_k$. We can therefore drop the zero-point field energy from the Hamiltonian, as is usually done. But the zero-point field re-emerges as the homogeneous solution of the field equation. A charged particle in the vacuum will therefore always see a zero-point field of infinite density. This is the origin of one of the infinities of quantum electrodynamics, and it cannot be eliminated by the trivial expedient of dropping the term $\sum_k \tfrac{1}{2}\hbar\omega_k$ from the field Hamiltonian.
The free field is in fact necessary for the formal consistency of the theory. In particular, it is necessary for the preservation of the commutation relations, which is required by the unitarity of time evolution in quantum theory:

$$[x(t), p(t)] = i\hbar$$
We can calculate $[x(t), p(t)]$ from the formal solution of the operator equation of motion

$$\ddot{x} + \omega_0^2 x - \tau\dddot{x} = \frac{e}{m}E_0(t)$$

Using the fact that

$$[a_k(0), a_{k'}^\dagger(0)] = \delta_{kk'}$$
and that equal-time particle and field operators commute, we obtain:
For the dipole oscillator under consideration it can be assumed that the radiative damping rate is small compared with the natural oscillation frequency, i.e., $\tau\omega_0 \ll 1$. Then the integrand above is sharply peaked at $\omega = \omega_0$ and:

$$[x(t), p(t)] \cong i\hbar$$
The necessity of the vacuum field can also be appreciated by making the small damping approximation, $\dddot{x} \approx -\omega_0^2\dot{x}$, in

$$\ddot{x} + \omega_0^2 x - \tau\dddot{x} = \frac{e}{m}E_0(t)$$

which then becomes

$$\ddot{x} + \gamma\dot{x} + \omega_0^2 x = \frac{e}{m}E_0(t), \qquad \gamma \equiv \tau\omega_0^2$$
Without the free field $E_0(t)$ in this equation the operator $x(t)$ would be exponentially damped, and commutators like $[x(t), p(t)]$ would approach zero for $t \gg 1/\gamma$. With the vacuum field included, however, the commutator is $i\hbar$ at all times, as required by unitarity, and as we have just shown. A similar result is easily worked out for the case of a free particle instead of a dipole oscillator.
What we have here is an example of a "fluctuation-dissipation relation". Generally speaking, if a system is coupled to a bath that can take energy from the system in an effectively irreversible way, then the bath must also cause fluctuations. The fluctuations and the dissipation go hand in hand; we cannot have one without the other. In the current example the coupling of a dipole oscillator to the electromagnetic field has a dissipative component, in the form of the radiation reaction field, and a fluctuation component, in the form of the zero-point (vacuum) field; given the existence of radiation reaction, the vacuum field must also exist in order to preserve the canonical commutation rule and all it entails.
The spectral density of the vacuum field is fixed by the form of the radiation reaction field, or vice versa: because the radiation reaction field varies with the third derivative of $x$, the spectral energy density of the vacuum field must be proportional to the third power of $\omega$ in order for $[x(t), p(t)] = i\hbar$ to hold. In the case of a dissipative force proportional to $\dot{x}$, by contrast, the fluctuation force must be proportional to $\omega$ in order to maintain the canonical commutation relation. This relation between the form of the dissipation and the spectral density of the fluctuation is the essence of the fluctuation-dissipation theorem.
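Schematically, the correspondence described here may be summarized as follows (a hedged restatement, with $S_F(\omega)$ denoting the spectral density of the fluctuating force; the notation is introduced here for illustration):

$$F_{\mathrm{diss}} \propto \dddot{x} \;\Longleftrightarrow\; \rho_0(\omega) \propto \omega^3, \qquad F_{\mathrm{diss}} \propto \dot{x} \;\Longleftrightarrow\; S_F(\omega) \propto \omega$$

In each case the frequency dependence of the fluctuation spectrum is fixed by the form of the dissipation, which is the content of the fluctuation-dissipation theorem.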
The fact that the canonical commutation relation for a harmonic oscillator coupled to the vacuum field is preserved implies that the zero-point energy of the oscillator is preserved. It is easy to show that after a few damping times the zero-point motion of the oscillator is in fact sustained by the driving zero-point field.
Quantum chromodynamic vacuum
The QCD vacuum is the vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates, such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. In technical terms, gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics), as it deals with nonlinear equations to characterize such interactions.
Higgs field
The Standard Model hypothesises a field called the Higgs field (symbol: $\phi$), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential, whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry, which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of $\phi$ in the ground state (the vacuum expectation value or VEV) is then $\langle\phi\rangle = v$, where $v$ is determined by the parameters of the Higgs potential. The measured value of this parameter is approximately 246 GeV/$c^2$. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number.
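The quoted VEV can be recovered from the Fermi coupling constant through the standard relation $v = (\sqrt{2}\,G_F)^{-1/2}$; the short check below assumes the CODATA value of $G_F$ (an illustrative computation, not part of the original text):

```python
G_F = 1.1663787e-5           # Fermi constant, GeV^-2
v = (2**0.5 * G_F) ** -0.5   # vacuum expectation value
print(f"v = {v:.1f} GeV")    # ~246.2 GeV
```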
The Higgs mechanism is a type of superconductivity which occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged and thus the field has a nonzero vacuum expectation value. Interaction with the vacuum energy filling the space prevents certain forces from propagating over long distances (as it does in a superconducting medium; e.g., in the Ginzburg–Landau theory).
Experimental observations
Zero-point energy has many observed physical consequences. It is important to note that zero-point energy is not merely an artifact of mathematical formalism that can, for instance, be dropped from a Hamiltonian by redefining the zero of energy, or by arguing that it is a constant and therefore has no effect on Heisenberg equations of motion without further consequence. Indeed, such treatment could create a problem at a deeper, as yet undiscovered, theory. For instance, in general relativity the zero of energy (i.e. the energy density of the vacuum) contributes to a cosmological constant of the type introduced by Einstein in order to obtain static solutions to his field equations. The zero-point energy density of the vacuum, due to all quantum fields, is extremely large, even when we cut off the largest allowable frequencies based on plausible physical arguments. It implies a cosmological constant larger than the limits imposed by observation by about 120 orders of magnitude. This "cosmological constant problem" remains one of the greatest unsolved mysteries of physics.
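The size of the discrepancy can be illustrated by comparing a Planck-cutoff estimate of the vacuum energy density, $\rho_P \sim c^7/\hbar G^2$, with the observed dark-energy density (rough order-of-magnitude values assumed here; the exact exponent depends on the cutoff chosen):

```python
import math

hbar = 1.0546e-34  # J*s
c = 2.9979e8       # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

rho_planck = c**7 / (hbar * G**2)  # ~5e113 J/m^3
rho_lambda = 5.4e-10               # J/m^3, observed (approximate)

print(f"ratio ~ 10^{math.log10(rho_planck / rho_lambda):.0f}")  # ~10^123
```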
Casimir effect
A phenomenon that is commonly presented as evidence for the existence of zero-point energy in vacuum is the Casimir effect, proposed in 1948 by Dutch physicist Hendrik Casimir, who considered the quantized electromagnetic field between a pair of grounded, neutral metal plates. The vacuum energy contains contributions from all wavelengths, except those excluded by the spacing between plates. As the plates draw together, more wavelengths are excluded and the vacuum energy decreases. The decrease in energy means there must be a force doing work on the plates as they move.
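To give a sense of scale, the ideal result for perfectly conducting parallel plates is an attractive pressure $P = \pi^2\hbar c/240d^4$; the sketch below evaluates it at an assumed 1 μm separation (the formula is the standard ideal-plate result, and the separation is an illustrative choice):

```python
import math

hbar = 1.0546e-34  # J*s
c = 2.9979e8       # m/s

def casimir_pressure(d):
    """Ideal Casimir pressure (Pa) between plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

print(casimir_pressure(1e-6))  # ~1.3e-3 Pa at a 1 micrometre gap
```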
Early experimental tests from the 1950s onwards gave positive results showing the force was real, but other external factors could not be ruled out as the primary cause, with the range of experimental error sometimes being nearly 100%. That changed in 1997, when Lamoreaux conclusively showed that the Casimir force was real. Results have been repeatedly replicated since then.
In 2009, Munday et al. published experimental proof that (as predicted in 1961) the Casimir force could also be repulsive as well as being attractive. Repulsive Casimir forces could allow quantum levitation of objects in a fluid and lead to a new class of switchable nanoscale devices with ultra-low static friction.
An interesting hypothetical side effect of the Casimir effect is the Scharnhorst effect, a hypothetical phenomenon in which light signals travel slightly faster than $c$ between two closely spaced conducting plates.
Lamb shift
The quantum fluctuations of the electromagnetic field have important physical consequences. In addition to the Casimir effect, they also lead to a splitting between the two energy levels $2S_{1/2}$ and $2P_{1/2}$ (in term symbol notation) of the hydrogen atom, which was not predicted by the Dirac equation, according to which these states should have the same energy. Charged particles can interact with the fluctuations of the quantized vacuum field, leading to slight shifts in energy; this effect is called the Lamb shift. The shift of about $4.38\times10^{-6}$ eV is roughly $10^{-7}$ of the difference between the energies of the 1s and 2s levels, and amounts to 1,058 MHz in frequency units. A small part of this shift (27 MHz ≈ 3%) arises not from fluctuations of the electromagnetic field, but from fluctuations of the electron–positron field. The creation of (virtual) electron–positron pairs has the effect of screening the Coulomb field and acts as a vacuum dielectric constant. This effect is much more important in muonic atoms.
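A short unit-conversion check of the quoted numbers (standard constants assumed; the ~10.2 eV 1s–2s splitting is a textbook value used here for illustration):

```python
h = 6.62607015e-34               # J*s
eV = 1.602176634e-19             # J per eV

shift_eV = h * 1.058e9 / eV      # 1,058 MHz expressed in eV
print(f"{shift_eV:.2e} eV")      # ~4.38e-06 eV
print(f"{shift_eV / 10.2:.1e}")  # ~4e-7 of the 1s-2s energy difference
```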
Fine-structure constant
Taking $\hbar$ (the Planck constant divided by $2\pi$), $c$ (the speed of light), and $e^2 = \frac{q_e^2}{4\pi\varepsilon_0}$ (the electromagnetic coupling constant, i.e. a measure of the strength of the electromagnetic force, where $q_e$ is the absolute value of the electronic charge and $\varepsilon_0$ is the vacuum permittivity), we can form a dimensionless quantity called the fine-structure constant:

$$\alpha = \frac{e^2}{\hbar c} = \frac{q_e^2}{4\pi\varepsilon_0\hbar c} \approx \frac{1}{137}$$
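The numerical value follows directly from the defining constants (a sketch using CODATA values; not part of the original text):

```python
import math

q_e = 1.602176634e-19    # C, elementary charge
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

alpha = q_e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ~ 1/{1 / alpha:.3f}")  # ~1/137.036
```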
The fine-structure constant is the coupling constant of quantum electrodynamics (QED) determining the strength of the interaction between electrons and photons. It turns out that the fine-structure constant is not really a constant at all owing to the zero-point energy fluctuations of the electron-positron field. The quantum fluctuations caused by zero-point energy have the effect of screening electric charges: owing to (virtual) electron-positron pair production, the charge of the particle measured far from the particle is far smaller than the charge measured when close to it.
The Heisenberg inequality, where $\sigma_x$ and $\sigma_p$ are the standard deviations of position and momentum, states that:

$$\sigma_x\,\sigma_p \geq \frac{\hbar}{2}$$
It means that a short distance implies large momentum and therefore high energy, i.e. particles of high energy must be used to explore short distances. QED concludes that the fine-structure constant is an increasing function of energy. It has been shown that at energies of the order of the Z⁰ boson rest energy, about 90 GeV:

$$\alpha \approx \frac{1}{129}$$
rather than the low-energy value $\alpha \approx 1/137$. The renormalization procedure of eliminating zero-point energy infinities allows the choice of an arbitrary energy (or distance) scale for defining $\alpha$. All in all, $\alpha$ depends on the energy scale characteristic of the process under study, and also on details of the renormalization procedure. The energy dependence of $\alpha$ has been observed for several years now in precision experiments in high-energy physics.
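The distance–energy trade-off behind this running can be made concrete with the estimate $E \sim \hbar c/d$ (an order-of-magnitude sketch; the probe distances are assumptions chosen for illustration):

```python
hbar_c = 197.3269804  # MeV*fm

for d_fm in (1.0, 0.01, 0.002):  # probe distance in femtometres
    print(f"{d_fm} fm -> {hbar_c / d_fm:.0f} MeV")
# ~0.002 fm corresponds to ~10^5 MeV = ~100 GeV, the Z-boson scale above
```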
Vacuum birefringence
In the presence of strong electrostatic fields it is predicted that virtual particles become separated from the vacuum state and form real matter. The fact that electromagnetic radiation can be transformed into matter and vice versa leads to fundamentally new features in quantum electrodynamics. One of the most important consequences is that, even in the vacuum, the Maxwell equations have to be replaced by more complicated formulas. In general, it will not be possible to separate processes in the vacuum from the processes involving matter, since electromagnetic fields can create matter if the field fluctuations are strong enough. This leads to highly complex nonlinear interaction: gravity will have an effect on the light at the same time the light has an effect on gravity. These effects were first predicted by Werner Heisenberg and Hans Heinrich Euler in 1936, and independently the same year by Victor Weisskopf, who stated: "The physical properties of the vacuum originate in the "zero-point energy" of matter, which also depends on absent particles through the external field strengths and therefore contributes an additional term to the purely Maxwellian field energy". Thus strong magnetic fields vary the energy contained in the vacuum. The scale above which the electromagnetic field is expected to become nonlinear is known as the Schwinger limit. At this point the vacuum has all the properties of a birefringent medium; thus in principle a rotation of the polarization frame (the Faraday effect) can be observed in empty space.
Both Einstein's theories of special and general relativity state that light should pass freely through a vacuum without being altered, a principle known as Lorentz invariance. Yet, in theory, large nonlinear self-interaction of light due to quantum fluctuations should lead to this principle being measurably violated if the interactions are strong enough. Nearly all theories of quantum gravity predict that Lorentz invariance is not an exact symmetry of nature. It is predicted that the speed at which light travels through the vacuum depends on its direction, polarization and the local strength of the magnetic field. There have been a number of inconclusive results which claim to show evidence of a Lorentz violation by finding a rotation of the polarization plane of light coming from distant galaxies. The first concrete evidence for vacuum birefringence was published in 2017, when a team of astronomers looked at the light coming from the star RX J1856.5-3754, the closest discovered neutron star to Earth.
Roberto Mignani at the National Institute for Astrophysics in Milan who led the team of astronomers has commented that "When Einstein came up with the theory of general relativity 100 years ago, he had no idea that it would be used for navigational systems. The consequences of this discovery probably will also have to be realised on a longer timescale." The team found that visible light from the star had undergone linear polarisation of around 16%. If the birefringence had been caused by light passing through interstellar gas or plasma, the effect should have been no more than 1%. Definitive proof would require repeating the observation at other wavelengths and on other neutron stars. At X-ray wavelengths the polarization from the quantum fluctuations should be near 100%. Although no telescope currently exists that can make such measurements, there are several proposed X-ray telescopes that may soon be able to verify the result conclusively such as China's Hard X-ray Modulation Telescope (HXMT) and NASA's Imaging X-ray Polarimetry Explorer (IXPE).
Speculated involvement in other phenomena
Dark energy
In the late 1990s it was discovered that very distant supernovae were dimmer than expected, suggesting that the universe's expansion was accelerating rather than slowing down. This revived discussion that Einstein's cosmological constant, long disregarded by physicists as being equal to zero, was in fact some small positive value, indicating that empty space exerts some form of negative pressure or energy.
There is no natural candidate for what might cause what has been called dark energy, but the current best guess is that it is the zero-point energy of the vacuum; however, this guess is known to be off by 120 orders of magnitude.
The European Space Agency's Euclid telescope, launched on 1 July 2023, will map galaxies up to 10 billion light years away. By seeing how dark energy influences their arrangement and shape, the mission will allow scientists to see if the strength of dark energy has changed. If dark energy is found to vary throughout time it would indicate it is due to quintessence, where observed acceleration is due to the energy of a scalar field, rather than the cosmological constant. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses again due to zero-point energy.
Cosmic inflation
Cosmic inflation is a phase of accelerated cosmic expansion just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed that quantum vacuum fluctuations caused by zero-point energy, arising in the microscopic inflationary period, were later magnified to cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
The mechanism for inflation is unclear; it is similar in effect to dark energy, but is a far more energetic and short-lived process. As with dark energy, the best explanation is some form of vacuum energy arising from quantum fluctuations. It may be that inflation caused baryogenesis, the hypothetical physical processes that produced an asymmetry (imbalance) between baryons and antibaryons in the very early universe, but this is far from certain.
Cosmology
Paul S. Wesson examined the cosmological implications of assuming that zero-point energy is real. Among numerous difficulties, general relativity requires that such energy not gravitate, so it cannot be similar to electromagnetic radiation.
Alternative theories
There has been a long debate over the question of whether zero-point fluctuations of quantized vacuum fields are "real", i.e. do they have physical effects that cannot be interpreted by an equally valid alternative theory? Schwinger, in particular, attempted to formulate QED without reference to zero-point fluctuations via his "source theory". From such an approach it is possible to derive the Casimir effect without reference to a fluctuating field. Such a derivation was first given by Schwinger (1975) for a scalar field, and then generalized to the electromagnetic case by Schwinger, DeRaad, and Milton (1978), in which they state "the vacuum is regarded as truly a state with all physical properties equal to zero". Jaffe (2005) has highlighted a similar approach in deriving the Casimir effect, stating "the concept of zero-point fluctuations is a heuristic and calculational aid in the description of the Casimir effect, but not a necessity in QED."
Milonni has shown the necessity of the vacuum field for the formal consistency of QED. Modern physics does not know any better way to construct gauge-invariant, renormalizable theories than with zero-point energy and they would seem to be a necessity for any attempt at a unified theory.
Nevertheless, as pointed out by Jaffe, "no known phenomenon, including the Casimir effect, demonstrates that zero point energies are 'real'".
Chaotic and emergent phenomena
The mathematical models used in classical electromagnetism, quantum electrodynamics (QED) and the Standard Model all view the electromagnetic vacuum as a linear system with no overall observable consequence. For example, in the case of the Casimir effect, Lamb shift, and so on, these phenomena can be explained by mechanisms other than the action of the vacuum, via arbitrary changes to the normal ordering of field operators (see the alternative theories section above). This is a consequence of viewing electromagnetism as a U(1) gauge theory, which topologically does not allow the complex interaction of a field with and on itself. In higher symmetry groups, and in reality, the vacuum is not a calm, randomly fluctuating, largely immaterial and passive substance, but at times can be viewed as a turbulent virtual plasma that can have complex vortices (i.e. solitons vis-à-vis particles), entangled states and a rich nonlinear structure. There are many observed nonlinear physical electromagnetic phenomena, such as the Aharonov–Bohm (AB) and Altshuler–Aronov–Spivak (AAS) effects; the Berry, Aharonov–Anandan, Pancharatnam and Chiao–Wu phase rotation effects; the Josephson effect; the quantum Hall effect; the De Haas–Van Alphen effect; and the Sagnac effect. These would indicate that the electromagnetic potential field has real physical meaning rather than being a mathematical artifact, and therefore an all-encompassing theory would not confine electromagnetism as a local force, as is currently done, but as an SU(2) gauge theory or higher geometry. Higher symmetries allow for nonlinear, aperiodic behaviour which manifests as a variety of complex non-equilibrium phenomena that do not arise in the linearised U(1) theory, such as multiple stable states, symmetry breaking, chaos and emergence.
What are called Maxwell's equations today are in fact a simplified version of the original equations, reformulated by Heaviside, FitzGerald, Lodge and Hertz. The original equations used Hamilton's more expressive quaternion notation, a kind of Clifford algebra, which fully subsumes the standard Maxwell vectorial equations largely used today. In the late 1880s there was a debate over the relative merits of vector analysis and quaternions. According to Heaviside the electromagnetic potential field was purely metaphysical, an arbitrary mathematical fiction, that needed to be "murdered". It was concluded that there was no need for the greater physical insights provided by the quaternions if the theory was purely local in nature. Local vector analysis has been the dominant way of using Maxwell's equations ever since. However, this strictly vectorial approach has led to a restrictive topological understanding in some areas of electromagnetism; for example, a full understanding of the energy transfer dynamics in Tesla's oscillator-shuttle-circuit can only be achieved in quaternionic algebra or higher SU(2) symmetries. It has often been argued that quaternions are not compatible with special relativity, but multiple papers have shown ways of incorporating relativity.
A good example of nonlinear electromagnetics is in high energy dense plasmas, where vortical phenomena occur which seemingly violate the second law of thermodynamics by increasing the energy gradient within the electromagnetic field, and violate Maxwell's laws by creating ion currents which capture and concentrate their own and surrounding magnetic fields. In particular the Lorentz force law, which elaborates Maxwell's equations, is violated by these force-free vortices. These apparent violations are due to the fact that the traditional conservation laws in classical and quantum electrodynamics (QED) only display linear U(1) symmetry (in particular, by the extended Noether theorem, conservation laws such as the laws of thermodynamics need not always apply to dissipative systems, which are expressed in gauges of higher symmetry). The second law of thermodynamics states that in a closed linear system entropy flow can only be positive (or exactly zero at the end of a cycle). However, negative entropy (i.e. increased order, structure or self-organisation) can spontaneously appear in an open nonlinear thermodynamic system that is far from equilibrium, so long as this emergent order accelerates the overall flow of entropy in the total system. The 1977 Nobel Prize in Chemistry was awarded to thermodynamicist Ilya Prigogine for his theory of dissipative systems that described this notion. Prigogine described the principle as "order through fluctuations" or "order out of chaos". It has been argued by some that all emergent order in the universe, from galaxies, solar systems, planets, weather, complex chemistry and evolutionary biology to even consciousness, technology and civilizations, are themselves examples of thermodynamic dissipative systems; nature having naturally selected these structures to accelerate entropy flow within the universe to an ever-increasing degree. For example, it has been estimated that the human body is 10,000 times more effective at dissipating energy per unit of mass than the sun.
One may query what this has to do with zero-point energy. Given the complex and adaptive behaviour that arises from nonlinear systems, considerable attention in recent years has gone into studying a new class of phase transitions which occur at absolute zero temperature. These are quantum phase transitions which are driven by EM field fluctuations as a consequence of zero-point energy. A good example of a spontaneous phase transition that is attributed to zero-point fluctuations can be found in superconductors. Superconductivity is one of the best known empirically quantified macroscopic electromagnetic phenomena whose basis is recognised to be quantum mechanical in origin. The behaviour of the electric and magnetic fields under superconductivity is governed by the London equations. However, it has been questioned in a series of journal articles whether the quantum mechanically canonised London equations can be given a purely classical derivation. Bostick, for instance, has claimed to show that the London equations do indeed have a classical origin that applies to superconductors and to some collisionless plasmas as well. In particular, it has been asserted that the Beltrami vortices in the plasma focus display the same paired flux-tube morphology as Type II superconductors. Others have also pointed out this connection: Fröhlich has shown that the hydrodynamic equations of compressible fluids, together with the London equations, lead to a macroscopic parameter (equal to the electric charge density divided by the mass density) without involving either quantum phase factors or the Planck constant. In essence, it has been asserted that Beltrami plasma vortex structures are able to at least simulate the morphology of Type I and Type II superconductors. This occurs because the "organised" dissipative energy of the vortex configuration comprising the ions and electrons far exceeds the "disorganised" dissipative random thermal energy. The transition from disorganised fluctuations to organised helical structures is a phase transition involving a change in the condensate's energy (i.e. the ground state or zero-point energy) but without any associated rise in temperature. This is an example of zero-point energy having multiple stable states (see quantum phase transition, quantum critical point, topological degeneracy, topological order), where the overall system structure is independent of a reductionist or deterministic view, and where "classical" macroscopic order can also causally affect quantum phenomena. Furthermore, the pair production of Beltrami vortices has been compared to the morphology of pair production of virtual particles in the vacuum.
The idea that the vacuum energy can have multiple stable energy states is a leading hypothesis for the cause of cosmic inflation. In fact, it has been argued that these early vacuum fluctuations led to the expansion of the universe, and in turn have guaranteed the non-equilibrium conditions necessary to drive order from chaos, as without such expansion the universe would have reached thermal equilibrium and no complexity could have existed. With the continued accelerated expansion of the universe, the cosmos generates an energy gradient that increases the "free energy" (i.e. the available, usable or potential energy for useful work) which the universe is able to use to create ever more complex forms of order. The only reason Earth's environment does not decay into an equilibrium state is that it receives a daily dose of sunshine, and that, in turn, is due to the sun "polluting" interstellar space with entropy. The sun's fusion power is only possible due to the gravitational disequilibrium of matter that arose from cosmic expansion. In this sense, the vacuum energy can be viewed as the key cause of the structure throughout the universe. That humanity might alter the morphology of the vacuum energy to create an energy gradient for useful work is the subject of much controversy.
Purported applications
Physicists overwhelmingly reject any possibility that the zero-point energy field can be exploited to obtain useful energy (work) or uncompensated momentum; such efforts are seen as tantamount to perpetual motion machines.
Nevertheless, the allure of free energy has motivated such research, usually falling in the category of fringe science. As long ago as 1889 (before quantum theory or the discovery of zero-point energy) Nikola Tesla proposed that useful energy could be obtained from free space, or what was assumed at that time to be an all-pervasive aether. Others have since claimed to exploit zero-point or vacuum energy, with a large amount of pseudoscientific literature causing ridicule around the subject. Despite rejection by the scientific community, harnessing zero-point energy remains an interest of research, particularly in the US, where it has attracted the attention of major aerospace/defence contractors and the U.S. Department of Defense, as well as in China, Germany, Russia and Brazil.
Casimir batteries and engines
A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed; the battery can be recharged by making the electrical forces slightly stronger than the Casimir force to reexpand the plates.
In 1999, Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his thought experiment (Gedankenexperiment) for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved."
Garret Moddel at the University of Colorado has argued that such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g. the analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore, even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system.
In 2008, DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force.
A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However it has not been conclusively shown to be from zero-point energy and the theory requires further investigation.
Single heat baths
In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum can be treated as a heat bath coupled to a dissipative force, and as such energy can, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so the FDT cannot be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and the FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem, i.e. a mathematical artifact producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, the FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties.
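The Nyquist result referred to above gives the classical Johnson-noise energy scale $k_BT$; in the quantum FDT this is replaced by $\frac{\hbar\omega}{2}\coth\left(\frac{\hbar\omega}{2k_BT}\right)$, which tends to the zero-point value $\hbar\omega/2$ as $T \to 0$. The sketch below compares the two regimes (the frequency and temperatures are illustrative assumptions):

```python
import math

k_B = 1.380649e-23      # J/K
hbar = 1.054571817e-34  # J*s

def effective_energy(omega, T):
    """Quantum FDT replacement for k_B*T (includes the zero-point term)."""
    x = hbar * omega / (2 * k_B * T)
    return (hbar * omega / 2) / math.tanh(x)

omega = 2 * math.pi * 1e9        # 1 GHz
for T in (300.0, 0.01):          # room temperature vs 10 mK
    print(T, "K:", effective_energy(omega, T) / (k_B * T))
# ~1 at 300 K (classical limit); >1 at 10 mK, where the zero-point
# term dominates the noise.
```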
There have been a growing number of papers showing that in some instances the classical laws of thermodynamics, such as limits on the Carnot efficiency, can be violated by exploiting negative entropy of quantum fluctuations.
Despite efforts to reconcile quantum mechanics and thermodynamics over the years, their compatibility is still an open fundamental problem. The full extent to which quantum properties can alter classical thermodynamic bounds is unknown.
Space travel and gravitational shielding
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutron stars for example, it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak.
In 1963 Robert Forward, a physicist and aerospace engineer at Hughes Research Laboratories, published a paper showing how within the framework of general relativity "anti-gravitational" effects might be achieved. Since all atoms have spin, gravitational permeability may be able to differ from material to material. A strong toroidal gravitational field that acts against the force of gravity could be generated by materials that have nonlinear properties that enhance time-varying gravitational fields. Such an effect would be analogous to the nonlinear electromagnetic permeability of iron, making it an effective core (i.e. the doughnut of iron) in a transformer, whose properties are dependent on magnetic permeability. In 1966 Dewitt was first to identify the significance of gravitational effects in superconductors. Dewitt demonstrated that a magnetic-type gravitational field must result in the presence of fluxoid quantization. In 1983, Dewitt's work was substantially expanded by Ross.
From 1971 to 1974 Henry William Wallace, a scientist at GE Aerospace was issued with three patents. Wallace used Dewitt's theory to develop an experimental apparatus for generating and detecting a secondary gravitational field, which he named the kinemassic field (now better known as the gravitomagnetic field). In his three patents, Wallace describes three different methods used for detection of the gravitomagnetic field – change in the motion of a body on a pivot, detection of a transverse voltage in a semiconductor crystal, and a change in the specific heat of a crystal material having spin-aligned nuclei. There are no publicly available independent tests verifying Wallace's devices. Such an effect if any would be small. Referring to Wallace's patents, a New Scientist article in 1980 stated "Although the Wallace patents were initially ignored as cranky, observers believe that his invention is now under serious but secret investigation by the military authorities in the USA. The military may now regret that the patents have already been granted and so are available for anyone to read." A further reference to Wallace's patents occur in an electric propulsion study prepared for the Astronautics Laboratory at Edwards Air Force Base which states: "The patents are written in a very believable style which include part numbers, sources for some components, and diagrams of data. Attempts were made to contact Wallace using patent addresses and other sources but he was not located nor is there a trace of what became of his work. The concept can be somewhat justified on general relativistic grounds since rotating frames of time varying fields are expected to emit gravitational waves."
In 1986 the U.S. Air Force's then Rocket Propulsion Laboratory (RPL) at Edwards Air Force Base solicited "Non Conventional Propulsion Concepts" under a small business research and innovation program. One of the six areas of interest was "Esoteric energy sources for propulsion, including the quantum dynamic energy of vacuum space..." In the same year BAE Systems launched "Project Greenglow" to provide a "focus for research into novel propulsion systems and the means to power them".
In 1988 Kip Thorne et al. published work showing how traversable wormholes can exist in spacetime only if they are threaded by quantum fields generated by some form of exotic matter that has negative energy. In 1993 Scharnhorst and Barton showed that the speed of a photon will be increased if it travels between two Casimir plates, an example of negative energy. In the most general sense, the exotic matter needed to create wormholes would share the repulsive properties of the inflationary energy, dark energy or zero-point radiation of the vacuum. Building on the work of Thorne, in 1994 Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand (see Alcubierre drive). The ship would then ride this wave inside a region of flat space, known as a warp bubble and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive.
In 1992 Evgeny Podkletnov published a heavily debated journal article claiming a specific type of rotating superconductor could shield gravitational force. Independently of this, from 1991 to 1993 Ning Li and Douglas Torr published a number of articles about gravitational effects in superconductors. One finding they derived is the source of gravitomagnetic flux in a type II superconductor material is due to spin alignment of the lattice ions. Quoting from their third paper: "It is shown that the coherent alignment of lattice ion spins will generate a detectable gravitomagnetic field, and in the presence of a time-dependent applied magnetic vector potential field, a detectable gravitoelectric field." The claimed size of the generated force has been disputed by some but defended by others. In 1997 Li published a paper attempting to replicate Podkletnov's results and showed the effect was very small, if it existed at all. Li is reported to have left the University of Alabama in 1999 to found the company AC Gravity LLC. AC Gravity was awarded a U.S. Department of Defense grant for $448,970 in 2001 to continue anti-gravity research. The grant period ended in 2002 but no results from this research were made public.
In 2002 Phantom Works, Boeing's advanced research and development facility in Seattle, approached Evgeny Podkletnov directly, but Phantom Works was blocked by Russian technology transfer controls. At this time Lieutenant General George Muellner, the outgoing head of the Boeing Phantom Works, confirmed that attempts by Boeing to work with Podkletnov had been blocked by the Russian government, also commenting that "The physical principles – and Podkletnov's device is not the only one – appear to be valid... There is basic science there. They're not breaking the laws of physics. The issue is whether the science can be engineered into something workable".
Froning and Roach (2002) put forward a paper that builds on the work of Puthoff, Haisch and Alcubierre. They used fluid dynamic simulations to model the interaction of a vehicle (like that proposed by Alcubierre) with the zero-point field. Vacuum field perturbations are simulated by fluid field perturbations, and the aerodynamic resistance of viscous drag exerted on the interior of the vehicle is compared to the Lorentz force exerted by the zero-point field (a Casimir-like force is exerted on the exterior by unbalanced zero-point radiation pressures). They find that the negative energy required for an Alcubierre drive is minimized when the craft is a saucer-shaped vehicle with toroidal electromagnetic fields. The EM fields distort the vacuum field perturbations surrounding the craft sufficiently to affect the permeability and permittivity of space.
In 2009, Giorgio Fontana and Bernd Binder presented a new method to potentially extract the zero-point energy of the electromagnetic field and nuclear forces in the form of gravitational waves. In the spheron model of the nucleus, proposed by the two-time Nobel laureate Linus Pauling, dineutrons are among the components of this structure. Similarly to a dumbbell put in a suitable rotational state, but with nuclear mass density, dineutrons are nearly ideal sources of gravitational waves at X-ray and gamma-ray frequencies. The dynamical interplay, mediated by nuclear forces, between the electrically neutral dineutrons and the electrically charged core nucleus is the fundamental mechanism by which nuclear vibrations can be converted to a rotational state of dineutrons with emission of gravitational waves. Gravity and gravitational waves are well described by general relativity, which is not a quantum theory; this implies that there is no zero-point energy for gravity in this theory, and therefore dineutrons will emit gravitational waves like any other known source of gravitational waves. In Fontana and Binder's paper, nuclear species with dynamical instabilities, related to the zero-point energy of the electromagnetic field and nuclear forces, and possessing dineutrons, will emit gravitational waves. In experimental physics this approach is still unexplored.
In 2014 NASA's Eagleworks Laboratories announced that they had successfully validated the use of a Quantum Vacuum Plasma Thruster which makes use of the Casimir effect for propulsion. In 2016 a scientific paper by the team of NASA scientists passed peer review for the first time. The paper suggests that the zero-point field acts as pilot-wave and that the thrust may be due to particles pushing off the quantum vacuum. While peer review doesn't guarantee that a finding or observation is valid, it does indicate that independent scientists looked over the experimental setup, results, and interpretation and that they could not find any obvious errors in the methodology and that they found the results reasonable. In the paper, the authors identify and discuss nine potential sources of experimental errors, including rogue air currents, leaky electromagnetic radiation, and magnetic interactions. Not all of them could be completely ruled out, and further peer-reviewed experimentation is needed in order to rule these potential errors out.
Zero-point energy in fiction
The concept of zero-point energy as an energy source has been a recurring element in science fiction and related media.
See also
Casimir effect
Ground state
Lamb shift
QED vacuum
QCD vacuum
Quantum fluctuation
Quantum foam
Scalar field
Time crystal
Topological order
Unruh effect
Vacuum energy
Vacuum expectation value
Vacuum state
Virtual particle
References
Notes
Articles in the press
Via Calphysics Institute.
Bibliography
Further reading
Press articles
Journal articles
Books
External links
Nima Arkani-Hamed on the issue of vacuum energy and dark energy.
Steven Weinberg on the cosmological constant problem.
Energy (physics)
Quantum field theory
Quantum electrodynamics
Concepts in physics
Mathematical physics
Condensed matter physics
Materials science
Quantum phases
Non-equilibrium thermodynamics
Perpetual motion
Physical paradoxes
Thermodynamics | Zero-point energy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 17,794 | [
"Physical quantities",
"Phases of matter",
"Quantum mechanics",
"Thermodynamics",
"Dynamical systems",
"Quantum phases",
"Applied mathematics",
"Materials science",
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Theoretical physics",
"Condensed matte... |
84,539 | https://en.wikipedia.org/wiki/Toma%C5%BE%20Pisanski | Tomaž (Tomo) Pisanski (born 24 May 1949 in Ljubljana, Yugoslavia, which is now in Slovenia) is a Slovenian mathematician working mainly in discrete mathematics and graph theory. He is considered by many Slovenian mathematicians to be the "father of Slovenian discrete mathematics."
Biography
As a high school student, Pisanski competed in the 1966 and 1967 International Mathematical Olympiads as a member of the Yugoslav team, winning a bronze medal in 1967. He studied at the University of Ljubljana where he obtained a B.Sc, M.Sc and PhD in mathematics. His 1981 PhD thesis in topological graph theory was written under the guidance of Torrence Parsons. He also obtained an M.Sc. in computer science from Pennsylvania State University in 1979.
Currently, Pisanski is a professor of discrete and computational mathematics and Head of the Department of Information Sciences and Technology at University of Primorska in Koper. In addition, he is a professor at the University of Ljubljana Faculty of Mathematics and Physics (FMF). He has been a member of the Institute of Mathematics, Physics and Mechanics (IMFM) in Ljubljana since 1980, and the leader of several IMFM research projects. In 1991 he established the Department of Theoretical Computer Science at IMFM, of which he has served as both head and deputy head.
He has taught undergraduate and graduate courses in mathematics and computer science at the University of Ljubljana, University of Zagreb, University of Udine, University of Leoben, California State University, Chico, Simon Fraser University, University of Auckland and Colgate University. Pisanski has been an adviser for M.Sc and PhD students in both mathematics and computer science. Notable students include John Shawe-Taylor (B.Sc in Ljubljana), Vladimir Batagelj, Bojan Mohar, Sandi Klavžar, and Sandra Sattolo (M.Sc in Udine).
Research
Pisanski’s research interests span several areas of discrete and computational mathematics, including combinatorial configurations, abstract polytopes, maps on surfaces, chemical graph theory, and the history of mathematics and science. In 1980 he calculated the genus of the Cartesian product of any pair of connected, bipartite, d-valent graphs using a method that was later called the White–Pisanski method. In 1982 Vladimir Batagelj and Pisanski proved that the Cartesian product of a tree and a cycle is Hamiltonian if and only if no degree of the tree exceeds the length of the cycle. They also proposed a conjecture concerning cyclic Hamiltonicity of graphs. Their conjecture was proved in 2005. With Brigitte Servatius he is the co-author of the book Configurations from a Graphical Viewpoint (2013).
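The Batagelj–Pisanski criterion lends itself to a brute-force check on small cases. The sketch below is illustrative only: the graph choices and the backtracking search are assumptions of this example, not taken from the 1982 paper, and it relies on the networkx library.

```python
import networkx as nx

def has_hamiltonian_cycle(G):
    """Backtracking search for a Hamiltonian cycle (fine for small graphs)."""
    nodes = list(G.nodes)
    start = nodes[0]

    def extend(path, visited):
        if len(path) == len(nodes):
            return G.has_edge(path[-1], start)  # try to close the cycle
        for nb in G.neighbors(path[-1]):
            if nb not in visited:
                visited.add(nb)
                path.append(nb)
                if extend(path, visited):
                    return True
                path.pop()
                visited.remove(nb)
        return False

    return extend([start], {start})

# T = star with central degree d (a tree); C_k = cycle of length k.
# Theorem: the Cartesian product of T and C_k is Hamiltonian iff d <= k.
for d, k in [(3, 3), (4, 3)]:
    product = nx.cartesian_product(nx.star_graph(d), nx.cycle_graph(k))
    print(f"max degree {d}, cycle length {k}:", has_hamiltonian_cycle(product))
# Expected by the theorem: True for (3, 3), False for (4, 3).
```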
Selected publications
Pisanski, T. Genus of Cartesian products of regular bipartite graphs, Journal of Graph Theory 4 (1), 1980, 31-42. doi:10.1002/jgt.3190040105
Graovac, A., T. Pisanski. On the Wiener index of a graph, Journal of Mathematical Chemistry 8 (1),1991, 53-62. doi:10.1007/BF01166923
Boben, M., B. Grunbaum, T. Pisanski, A. Zitnik, Small triangle-free configurations of points and lines, Discrete & Computational Geometry 35 (3), 2006, 405-427. doi:10.1007/s00454-005-1224-9
Conder, M., I. Hubard, T. Pisanski. Constructions for chiral polytopes, Journal of the London Mathematical Society 77 (1), 2007, 115-129. doi:10.1112/jlms/jdm093
Pisanski, T. A classification of cubic bicirculants, Discrete Mathematics 307 (3-5), 2007, 567-578. doi:10.1016/j.disc.2005.09.053
Professional life
From 1998-1999, Pisanski was chairman of the Society of Mathematicians, Physicists and Astronomers of Slovenia (DMFA Slovenije); he was appointed an honorary member in 2015. He is a founding member of the International Academy of Mathematical Chemistry, serving as its vice president from 2007 to 2011. In 2008, together with Dragan Marušič, he founded Ars Mathematica Contemporanea, the first international mathematical journal to be published in Slovenia. In 2012 he was elected to the Academia Europaea. He is currently president of the Slovenian Discrete and Applied Mathematics Society (SDAMS), the first Eastern European mathematical society not wholly devoted to theoretical mathematics to be accepted as a full member of the European Mathematical Society (EMS).
Awards and honors
In 2005, Pisanski was decorated with the Order of Merit (Slovenia), and in 2015 he received the Zois award for exceptional contributions to discrete mathematics and its applications. In 2016, he received the Donald Michie and Alan Turing Prize for lifetime achievements in Information Science in Slovenia.
References
External links
Pisanski's CV
International Academy of Mathematical Chemistry - List of Members
Slovenian Academy of Engineering - List of Members
Images of Knowledge: Tomaž Pisanski - RTV radio interview
8th European Congress of Mathematics website
Maps ∩ Configurations ∩ Polytopes ∩ Molecules ⊆ Graphs: The mathematics of Tomaž Pisanski on the occasion of his 70th birthday
Ars Mathematica Contemporanea website
Slovenian Society for Discrete and Applied Mathematics (SDAMS) website
1949 births
20th-century Slovenian mathematicians
21st-century Slovenian mathematicians
Graph theorists
Living people
Pennsylvania State University alumni
Slovenian computer scientists
Scientists from Ljubljana
Mathematical chemistry
University of Ljubljana alumni
Members of Academia Europaea
Academic staff of the University of Ljubljana
Academic staff of the University of Primorska
Academic staff of the University of Zagreb
Academic staff of Montanuniversität Leoben
California State University, Chico faculty
Academic staff of Simon Fraser University
Academic staff of the University of Auckland
Colgate University faculty
International Mathematical Olympiad participants
Computational chemists
Yugoslav mathematicians | Tomaž Pisanski | [
"Chemistry",
"Mathematics"
] | 1,247 | [
"Drug discovery",
"Applied mathematics",
"Graph theory",
"Computational chemists",
"Molecular modelling",
"Mathematical chemistry",
"Computational chemistry",
"Theoretical chemists",
"Theoretical chemistry",
"Mathematical relations",
"Graph theorists"
] |
85,151 | https://en.wikipedia.org/wiki/Cloacina | Cloacina was a goddess who presided over the Cloaca Maxima ('Greatest Drain'), the main interceptor discharge outfall of the system of sewers in Rome.
Name
The theonym Cloācīna is a derivative of the noun cloāca ('sewer, underground drainage'; cf. cluere 'to purify'), itself from Proto-Italic *klowā-, ultimately from Proto-Indo-European *ḱleuH-o- ('clean'). A cult-title of Venus, Cloācīna may be interpreted as meaning 'The Purifier'.
In later English works, phrases such as "the temple of Cloacina" were sometimes used as euphemisms for the toilet.
Cult
The Cloaca Maxima was said to have been begun by Tarquinius Priscus, one of Rome's Etruscan kings, and finished by another, Tarquinius Superbus: Cloacina might have originally been an Etruscan deity. According to one of Rome's foundation myths, Titus Tatius, king of the Sabines, erected a statue to Cloacina at the place where Romans and Sabines met to confirm the end of their conflict, following the rape of the Sabine women. Tatius instituted lawful marriage between Sabines and Romans, uniting them as one people, ruled by himself and by Rome's founder, Romulus. The peace between Sabines and Romans was marked by a cleansing ritual using myrtle, at or very near an ancient Etruscan shrine to Cloacina, above a small stream that would later be enlarged as the main outlet for Rome's main sewer, the Cloaca Maxima. As myrtle was one of Venus' signs, and Venus was a goddess of union, peace and reconciliation, Cloacina was recognised as Venus Cloacina (Venus the Cleanser). She was also credited with the purification of sexual intercourse within marriage.
The small, circular shrine of Venus Cloacina was situated before the Basilica Aemilia on the Roman Forum and directly above the Cloaca Maxima. Some Roman coins had images of Cloacina's shrine. The clearest show two females, presumed to be deities, each with a bird perched on a pillar. One holds a small object, possibly a flower; birds and flowers are signs of Venus, among other deities. The figures may have represented the two aspects of the divinity, Cloacina-Venus.
References
Bibliography
Further reading
Information on Cloacina
See also
Toilet god
Mefitis
Love and lust goddesses
Roman goddesses
Toilet goddesses
Sewerage | Cloacina | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 540 | [
"Sewerage",
"Water pollution",
"Environmental engineering"
] |
85,331 | https://en.wikipedia.org/wiki/High%20pressure | In science and engineering the study of high pressure examines its effects on materials and the design and construction of devices, such as a diamond anvil cell, which can create high pressure. High pressure usually means pressures of thousands (kilobars) or millions (megabars) of times atmospheric pressure (about 1 bar or 100,000 Pa).
History and overview
Percy Williams Bridgman received a Nobel Prize in 1946 for advancing this area of physics by two magnitudes of pressure (400 MPa to 40 GPa). The list of founding fathers of this field also includes Harry George Drickamer, Tracy Hall, and Francis P. Bundy, among others.
It was by applying high pressure as well as high temperature to carbon that synthetic diamonds were first produced alongside many other interesting discoveries. Almost any material when subjected to high pressure will compact itself into a denser form, for example, quartz (also called silica or silicon dioxide) will first adopt a denser form known as coesite, then upon application of even higher pressure, form stishovite. These two forms of silica were first discovered by high-pressure experimenters, but then found in nature at the site of a meteor impact.
Chemical bonding is likely to change under high pressure, when the PV term in the free energy becomes comparable to the energies of typical chemical bonds – i.e. at around 100 GPa. Among the most striking changes are metallization of oxygen at 96 GPa (rendering oxygen a superconductor), and transition of sodium from a nearly-free-electron metal to a transparent insulator at ~200 GPa. At ultimately high compression, however, all materials will metallize.
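As a rough sanity check on that 100 GPa figure, a back-of-envelope sketch in Python; the molar volume of ~5 cm3/mol is an illustrative assumption for a compressed solid, not a measured value:

```python
# Order-of-magnitude check: compare the PV term against typical bond energies.
# The molar volume below is an assumed, illustrative figure (~5 cm^3/mol).
P = 100e9          # pressure, Pa (100 GPa)
V = 5e-6           # assumed molar volume, m^3/mol
pv = P * V         # J/mol
print(f"PV ~ {pv / 1000:.0f} kJ/mol")  # ~500 kJ/mol
# Typical covalent bond energies are a few hundred kJ/mol, so PV indeed
# becomes comparable to chemical bond energies at pressures of this order.
```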
High-pressure experimentation has led to the discovery of the types of minerals which are believed to exist in the deep mantle of the Earth, such as silicate perovskite, which is thought to make up half of the Earth's bulk, and post-perovskite, which occurs at the core-mantle boundary and explains many anomalies inferred for that region.
Pressure "landmarks": typical pressures reached by large-volume presses are up to 30–40 GPa, pressures that can be generated inside diamond anvil cells are ~1000 GPa, pressure in the center of the Earth is 364 GPa, and highest pressures ever achieved in shock waves are over 100,000 GPa.
See also
Synthetic diamond
D-DIA
References
Further reading
Materials science
Pressure | High pressure | [
"Physics",
"Materials_science",
"Engineering"
] | 513 | [
"Scalar physical quantities",
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Pressure",
"Materials science",
"nan",
"Wikipedia categories named after physical quantities"
] |
85,411 | https://en.wikipedia.org/wiki/PH%20indicator | A pH indicator is a halochromic chemical compound added in small amounts to a solution so the pH (acidity or basicity) of the solution can be determined visually or spectroscopically by changes in absorption and/or emission properties. Hence, a pH indicator is a chemical detector for hydronium ions (H3O+) or hydrogen ions (H+) in the Arrhenius model.
Normally, the indicator causes the color of the solution to change depending on the pH. Indicators can also show change in other physical properties; for example, olfactory indicators show change in their odor. The pH value of a neutral solution is 7.0 at 25°C (standard laboratory conditions). Solutions with a pH value below 7.0 are considered acidic and solutions with pH value above 7.0 are basic. Since most naturally occurring organic compounds are weak electrolytes, such as carboxylic acids and amines, pH indicators find many applications in biology and analytical chemistry. Moreover, pH indicators form one of the three main types of indicator compounds used in chemical analysis. For the quantitative analysis of metal cations, the use of complexometric indicators is preferred, whereas the third compound class, the redox indicators, are used in redox titrations (titrations involving one or more redox reactions as the basis of chemical analysis).
Theory
In and of themselves, pH indicators are usually weak acids or weak bases. The general reaction scheme of acidic pH indicators in aqueous solutions can be formulated as:
HInd(aq) + H2O(l) ⇌ H3O+(aq) + Ind−(aq)
where, "HInd" is the acidic form and "Ind−" is the conjugate base of the indicator.
Vice versa for basic pH indicators in aqueous solutions:
IndOH(aq) ⇌ Ind+(aq) + OH−(aq)
where "IndOH" stands for the basic form and "Ind+" for the conjugate acid of the indicator.
The ratio of concentration of conjugate acid/base to concentration of the acidic/basic indicator determines the pH (or pOH) of the solution and connects the color to the pH (or pOH) value. For pH indicators that are weak electrolytes, the Henderson–Hasselbalch equation can be written as:
pH = pKa + log10([Ind−] / [HInd])
pOH = pKb + log10([Ind+] / [IndOH])
These equations, derived from the acidity constant and basicity constant, state that when the pH equals the pKa or pKb value of the indicator, both species are present in a 1:1 ratio. If the pH is above the pKa or pKb value, the concentration of the conjugate base is greater than the concentration of the acid, and the color associated with the conjugate base dominates. If the pH is below the pKa or pKb value, the converse is true.
Usually, the color change is not instantaneous at the pKa or pKb value, but a pH range exists where a mixture of colors is present. This pH range varies between indicators, but as a rule of thumb, it falls between the pKa or pKb value plus or minus one. This assumes that solutions retain their color as long as at least 10% of the other species persists. For example, if the concentration of the conjugate base is 10 times greater than the concentration of the acid, their ratio is 10:1, and consequently the pH is pKa + 1 or pKb + 1. Conversely, if a 10-fold excess of the acid occurs with respect to the base, the ratio is 1:10 and the pH is pKa − 1 or pKb − 1.
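The rule of thumb above is easy to verify numerically. A minimal sketch, assuming an indicator with a phenolphthalein-like pKa of about 9.4 (an approximate value used purely for illustration):

```python
def base_fraction(pH: float, pKa: float) -> float:
    """Fraction of indicator present as the conjugate base Ind-,
    from the Henderson-Hasselbalch equation."""
    ratio = 10 ** (pH - pKa)          # [Ind-] / [HInd]
    return ratio / (1 + ratio)

# At pKa - 1 about 9% of the indicator is in the basic form, at pKa
# it is 50%, and at pKa + 1 about 91% -- the "pKa +/- 1" transition range.
for pH in (8.4, 9.4, 10.4):
    print(f"pH {pH}: {base_fraction(pH, pKa=9.4):.0%} conjugate base")
```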
For optimal accuracy, the color difference between the two species should be as clear as possible, and the narrower the pH range of the color change the better. In some indicators, such as phenolphthalein, one of the species is colorless, whereas in other indicators, such as methyl red, both species confer a color. While pH indicators work efficiently at their designated pH range, they are usually destroyed at the extreme ends of the pH scale due to undesired side reactions.
Application
pH indicators are frequently employed in titrations in analytical chemistry and biology to determine the extent of a chemical reaction. Because of the subjective choice (determination) of color, pH indicators are susceptible to imprecise readings. For applications requiring precise measurement of pH, a pH meter is frequently used. Sometimes, a blend of different indicators is used to achieve several smooth color changes over a wide range of pH values. These commercial indicators (e.g., universal indicator and Hydrion papers) are used when only rough knowledge of pH is necessary. For a titration, the difference between the true endpoint and the indicated endpoint is called the indicator error.
Tabulated below are several common laboratory pH indicators. Indicators usually exhibit intermediate colors at pH values inside the listed transition range. For example, phenol red exhibits an orange color between pH 6.8 and pH 8.4. The transition range may shift slightly depending on the concentration of the indicator in the solution and on the temperature at which it is used. The figure on the right shows indicators with their operation range and color changes.
Universal Indicator
Precise pH measurement
An indicator may be used to obtain quite precise measurements of pH by measuring absorbance quantitatively at two or more wavelengths. The principle can be illustrated by taking the indicator to be a simple acid, HA, which dissociates into H+ and A−.
HA ⇌ H+ + A−
The value of the acid dissociation constant, pKa, must be known. The molar absorbances, εHA and εA−, of the two species HA and A− at wavelengths λx and λy must also have been determined by previous experiment. Assuming Beer's law to be obeyed, the measured absorbances Ax and Ay at the two wavelengths are simply the sums of the absorbances due to each species (for unit path length):

Ax = εHA(λx)[HA] + εA−(λx)[A−]
Ay = εHA(λy)[HA] + εA−(λy)[A−]

These are two equations in the two concentrations [HA] and [A−]. Once solved, the pH is obtained as

pH = pKa + log10([A−] / [HA])
If measurements are made at more than two wavelengths, the concentrations [HA] and [A−] can be calculated by linear least squares. In fact, a whole spectrum may be used for this purpose. The process is illustrated for the indicator bromocresol green. The observed spectrum (green) is the sum of the spectra of HA (gold) and of A− (blue), weighted for the concentration of the two species.
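A minimal numerical sketch of this procedure. The molar absorbances and measured absorbances below are hypothetical values invented for illustration, and the pKa of 4.7 is only an approximate figure for a bromocresol-green-like indicator; with more than two wavelengths, numpy.linalg.lstsq would replace the exact solve:

```python
import numpy as np

# Hypothetical molar absorbances (L mol^-1 cm^-1) at two wavelengths;
# columns are the species HA and A-, rows the wavelengths x and y.
eps = np.array([[17000.0,  2000.0],
                [ 1500.0, 38000.0]])
A_measured = np.array([0.30, 0.55])   # absorbances at unit path length
pKa = 4.7                             # assumed indicator pKa (illustrative)

# Beer's law gives A = eps @ [ [HA], [A-] ]; solve for the concentrations.
c_HA, c_A = np.linalg.solve(eps, A_measured)
pH = pKa + np.log10(c_A / c_HA)
print(f"[HA] = {c_HA:.2e} M, [A-] = {c_A:.2e} M, pH = {pH:.2f}")
```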
When a single indicator is used, this method is limited to measurements in the pH range pKa ± 1, but this range can be extended by using mixtures of two or more indicators. Because indicators have intense absorption spectra, the indicator concentration is relatively low, and the indicator itself is assumed to have a negligible effect on pH.
Equivalence point
In acid-base titrations, an unsuitable pH indicator may induce a color change in the indicator-containing solution before or after the actual equivalence point. As a result, different equivalence points for a solution can be concluded based on the pH indicator used. This is because the slightest color change of the indicator-containing solution suggests the equivalence point has been reached. Therefore, the most suitable pH indicator has an effective pH range, where the change in color is apparent, that encompasses the pH of the equivalence point of the solution being titrated.
Naturally occurring pH indicators
Many plants or plant parts contain chemicals from the naturally colored anthocyanin family of compounds. They are red in acidic solutions and blue in basic. Anthocyanins can be extracted with water or other solvents from a multitude of colored plants and plant parts, including from leaves (red cabbage); flowers (geranium, poppy, or rose petals); berries (blueberries, blackcurrant); and stems (rhubarb). Extracting anthocyanins from household plants, especially red cabbage, to form a crude pH indicator is a popular introductory chemistry demonstration.
Litmus, used by alchemists in the Middle Ages and still readily available, is a naturally occurring pH indicator made from a mixture of lichen species, particularly Roccella tinctoria. The word litmus is literally from 'colored moss' in Old Norse (see Litr). The color changes between red in acid solutions and blue in alkalis. The term 'litmus test' has become a widely used metaphor for any test that purports to distinguish authoritatively between alternatives.
Hydrangea macrophylla flowers can change color depending on soil acidity. In acid soils, chemical reactions occur in the soil that make aluminium available to these plants, turning the flowers blue. In alkaline soils, these reactions cannot occur and therefore aluminium is not taken up by the plant. As a result, the flowers remain pink.
Another natural pH indicator is the spice turmeric. It turns yellow when exposed to acids and reddish brown in the presence of alkalis.
See also
Chromophore
Fecal pH test
Nitrazine
Universal indicator
References
External links
Long indicator list
Chemical indicators
Dyes
Equilibrium chemistry
Titration | PH indicator | [
"Chemistry",
"Materials_science"
] | 1,895 | [
"Titration",
"Instrumental analysis",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
85,425 | https://en.wikipedia.org/wiki/Metalloid | A metalloid is a chemical element which has a preponderance of properties in between, or that are a mixture of, those of metals and nonmetals. The word metalloid comes from the Latin metallum ("metal") and the Greek oeides ("resembling in form or appearance"). There is no standard definition of a metalloid and no complete agreement on which elements are metalloids. Despite the lack of specificity, the term remains in use in the literature.
The six commonly recognised metalloids are boron, silicon, germanium, arsenic, antimony and tellurium. Five elements are less frequently so classified: carbon, aluminium, selenium, polonium and astatine. On a standard periodic table, all eleven elements are in a diagonal region of the p-block extending from boron at the upper left to astatine at lower right. Some periodic tables include a dividing line between metals and nonmetals, and the metalloids may be found close to this line.
Typical metalloids have a metallic appearance, may be brittle and are only fair conductors of electricity. They can form alloys with metals, and many of their other physical properties and chemical properties are intermediate between those of metallic and nonmetallic elements. They and their compounds are used in alloys, biological agents, catalysts, flame retardants, glasses, optical storage and optoelectronics, pyrotechnics, semiconductors, and electronics.
The term metalloid originally referred to nonmetals. Its more recent meaning, as a category of elements with intermediate or hybrid properties, became widespread in 1940–1960. Metalloids are sometimes called semimetals, a practice that has been discouraged, as the term semimetal has a more common usage as a specific kind of electronic band structure of a substance. In this context, only arsenic and antimony are semimetals, and commonly recognised as metalloids.
Definitions
Judgment-based
A metalloid is an element that possesses a preponderance of properties in between, or that are a mixture of, those of metals and nonmetals, and which is therefore hard to classify as either a metal or a nonmetal. This is a generic definition that draws on metalloid attributes consistently cited in the literature. Difficulty of categorisation is a key attribute. Most elements have a mixture of metallic and nonmetallic properties, and can be classified according to which set of properties is more pronounced. Only the elements at or near the margins, lacking a sufficiently clear preponderance of either metallic or nonmetallic properties, are classified as metalloids.
Boron, silicon, germanium, arsenic, antimony, and tellurium are commonly recognised as metalloids. Depending on the author, one or more from selenium, polonium, or astatine are sometimes added to the list. Boron sometimes is excluded, by itself, or with silicon. Sometimes tellurium is not regarded as a metalloid. The inclusion of antimony, polonium, and astatine as metalloids has been questioned.
Other elements are occasionally classified as metalloids. These elements include hydrogen, beryllium, nitrogen, phosphorus, sulfur, zinc, gallium, tin, iodine, lead, bismuth, and radon. The term metalloid has also been used for elements that exhibit metallic lustre and electrical conductivity, and that are amphoteric, such as arsenic, antimony, vanadium, chromium, molybdenum, tungsten, tin, lead, and aluminium. The p-block metals, and nonmetals (such as carbon or nitrogen) that can form alloys with metals or modify their properties have also occasionally been considered as metalloids.
Criteria-based
No widely accepted definition of a metalloid exists, nor any division of the periodic table into metals, metalloids, and nonmetals; Hawkes questioned the feasibility of establishing a specific definition, noting that anomalies can be found in several attempted constructs. Classifying an element as a metalloid has been described by Sharp as "arbitrary".
The number and identities of metalloids depend on what classification criteria are used. Emsley recognised four metalloids (germanium, arsenic, antimony, and tellurium); James et al. listed twelve (Emsley's plus boron, carbon, silicon, selenium, bismuth, polonium, moscovium, and livermorium). On average, seven elements are included in such lists; individual classification arrangements tend to share common ground and vary in the ill-defined margins.
A single quantitative criterion such as electronegativity is commonly used, metalloids having electronegativity values from 1.8 or 1.9 to 2.2. Further examples include packing efficiency (the fraction of volume in a crystal structure occupied by atoms) and the Goldhammer–Herzfeld criterion ratio. The commonly recognised metalloids have packing efficiencies of between 34% and 41%. The Goldhammer–Herzfeld ratio, roughly equal to the cube of the atomic radius divided by the molar volume, is a simple measure of how metallic an element is, the recognised metalloids having ratios from around 0.85 to 1.1 and averaging 1.0.
Other authors have relied on, for example, atomic conductance or bulk coordination number.
Jones, writing on the role of classification in science, observed that "[classes] are usually defined by more than two attributes". Masterton and Slowinski used three criteria to describe the six elements commonly recognised as metalloids: metalloids have ionization energies around 200 kcal/mol (837 kJ/mol) and electronegativity values close to 2.0. They also said that metalloids are typically semiconductors, though antimony and arsenic (semimetals from a physics perspective) have electrical conductivities approaching those of metals. Selenium and polonium are suspected of not fitting this scheme, while astatine's status is uncertain.
In this context, Vernon proposed that a metalloid is a chemical element that, in its standard state, has (a) the electronic band structure of a semiconductor or a semimetal; and (b) an intermediate first ionization potential "(say 750−1,000 kJ/mol)"; and (c) an intermediate electronegativity (1.9–2.2).
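As an illustration of how such criteria operate, here is a toy filter phrased after Vernon's three conditions. The band-structure labels, ionization energies, and electronegativities below are rounded literature values entered as assumptions, not authoritative reference data:

```python
# Toy classifier after Vernon's scheme: a metalloid has (a) a semiconductor
# or semimetal band structure, (b) a first ionization energy of roughly
# 750-1000 kJ/mol, and (c) an electronegativity of roughly 1.9-2.2.
ELEMENTS = {
    # name: (band structure, 1st ionization energy kJ/mol, electronegativity)
    "silicon":   ("semiconductor", 786, 1.90),
    "arsenic":   ("semimetal",     947, 2.18),
    "tellurium": ("semiconductor", 869, 2.10),
    "tin":       ("metal",         709, 1.96),
    "sulfur":    ("insulator",    1000, 2.58),
}

def is_metalloid(band: str, ie_kj_mol: float, chi: float) -> bool:
    return (band in ("semiconductor", "semimetal")
            and 750 <= ie_kj_mol <= 1000
            and 1.9 <= chi <= 2.2)

for name, props in ELEMENTS.items():
    verdict = "metalloid" if is_metalloid(*props) else "not a metalloid"
    print(f"{name}: {verdict}")
```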
Periodic table territory
Location
Metalloids lie on either side of the dividing line between metals and nonmetals. This can be found, in varying configurations, on some periodic tables. Elements to the lower left of the line generally display increasing metallic behaviour; elements to the upper right display increasing nonmetallic behaviour. When presented as a regular stairstep, elements with the highest critical temperature for their groups (Li, Be, Al, Ge, Sb, Po) lie just below the line.
The diagonal positioning of the metalloids represents an exception to the observation that elements with similar properties tend to occur in vertical groups. A related effect can be seen in other diagonal similarities between some elements and their lower right neighbours, specifically lithium-magnesium, beryllium-aluminium, and boron-silicon. Rayner-Canham has argued that these similarities extend to carbon-phosphorus, nitrogen-sulfur, and into three d-block series.
This exception arises due to competing horizontal and vertical trends in the nuclear charge. Going along a period, the nuclear charge increases with atomic number as do the number of electrons. The additional pull on outer electrons as nuclear charge increases generally outweighs the screening effect of having more electrons. With some irregularities, atoms therefore become smaller, ionization energy increases, and there is a gradual change in character, across a period, from strongly metallic, to weakly metallic, to weakly nonmetallic, to strongly nonmetallic elements. Going down a main group, the effect of increasing nuclear charge is generally outweighed by the effect of additional electrons being further away from the nucleus. Atoms generally become larger, ionization energy falls, and metallic character increases. The net effect is that the location of the metal–nonmetal transition zone shifts to the right in going down a group, and analogous diagonal similarities are seen elsewhere in the periodic table, as noted.
Alternative treatments
Elements bordering the metal–nonmetal dividing line are not always classified as metalloids, since a binary classification can facilitate the establishment of rules for determining bond types between metals and nonmetals. In such cases, the authors concerned focus on one or more attributes of interest to make their classification decisions, rather than being concerned about the marginal nature of the elements in question. Their considerations may or may not be made explicit and may, at times, seem arbitrary. Metalloids may be grouped with metals; or regarded as nonmetals; or treated as a sub-category of nonmetals. Other authors have suggested that classifying some elements as metalloids "emphasizes that properties change gradually rather than abruptly as one moves across or down the periodic table". Some periodic tables distinguish elements that are metalloids and display no formal dividing line between metals and nonmetals. Metalloids are instead shown as occurring in a diagonal band or diffuse region. The key consideration is to explain the context for the taxonomy in use.
Properties
Metalloids usually look like metals but behave largely like nonmetals. Physically, they are shiny, brittle solids with intermediate to relatively good electrical conductivity and the electronic band structure of a semimetal or semiconductor. Chemically, they mostly behave as (weak) nonmetals, have intermediate ionization energies and electronegativity values, and amphoteric or weakly acidic oxides. Most of their other physical and chemical properties are intermediate in nature.
Compared to metals and nonmetals
Characteristic properties of metals, metalloids, and nonmetals are summarized in the table. Physical properties are listed in order of ease of determination; chemical properties run from general to specific, and then to descriptive.
The above table reflects the hybrid nature of metalloids. The properties of form, appearance, and behaviour when mixed with metals are more like metals. Elasticity and general chemical behaviour are more like nonmetals. Electrical conductivity, band structure, ionization energy, electronegativity, and oxides are intermediate between the two.
Common applications
The focus of this section is on the recognised metalloids. Elements less often recognised as metalloids are ordinarily classified as either metals or nonmetals; some of these are included here for comparative purposes.
Metalloids are too brittle to have any structural uses in their pure forms. They and their compounds are used in alloys, biological agents (toxicological, nutritional, and medicinal), catalysts, flame retardants, glasses (oxide and metallic), optical storage media and optoelectronics, pyrotechnics, semiconductors, and electronics.
Alloys
Writing early in the history of intermetallic compounds, the British metallurgist Cecil Desch observed that "certain non-metallic elements are capable of forming compounds of distinctly metallic character with metals, and these elements may therefore enter into the composition of alloys". He associated silicon, arsenic, and tellurium, in particular, with the alloy-forming elements. Phillips and Williams suggested that compounds of silicon, germanium, arsenic, and antimony with B metals, "are probably best classed as alloys".
Among the lighter metalloids, alloys with transition metals are well-represented. Boron can form intermetallic compounds and alloys with such metals of the composition MnB, if n > 2. Ferroboron (15% boron) is used to introduce boron into steel; nickel-boron alloys are ingredients in welding alloys and case hardening compositions for the engineering industry. Alloys of silicon with iron and with aluminium are widely used by the steel and automotive industries, respectively. Germanium forms many alloys, most importantly with the coinage metals.
The heavier metalloids continue the theme. Arsenic can form alloys with metals, including platinum and copper; it is also added to copper and its alloys to improve corrosion resistance and appears to confer the same benefit when added to magnesium. Antimony is well known as an alloy-former, including with the coinage metals. Its alloys include pewter (a tin alloy with up to 20% antimony) and type metal (a lead alloy with up to 25% antimony). Tellurium readily alloys with iron, as ferrotellurium (50–58% tellurium), and with copper, in the form of copper tellurium (40–50% tellurium). Ferrotellurium is used as a stabilizer for carbon in steel casting. Of the non-metallic elements less often recognised as metalloids, selenium – in the form of ferroselenium (50–58% selenium) – is used to improve the machinability of stainless steels.
Biological agents
All six of the elements commonly recognised as metalloids have toxic, dietary or medicinal properties. Arsenic and antimony compounds are especially toxic; boron, silicon, and possibly arsenic, are essential trace elements. Boron, silicon, arsenic, and antimony have medical applications, and germanium and tellurium are thought to have potential.
Boron is used in insecticides and herbicides. It is an essential trace element. As boric acid, it has antiseptic, antifungal, and antiviral properties.
Silicon is present in silatrane, a highly toxic rodenticide. Long-term inhalation of silica dust causes silicosis, a fatal disease of the lungs. Silicon is an essential trace element. Silicone gel can be applied to badly burned patients to reduce scarring.
Salts of germanium are potentially harmful to humans and animals if ingested on a prolonged basis. There is interest in the pharmacological actions of germanium compounds but no licensed medicine as yet.
Arsenic is notoriously poisonous and may also be an essential element in ultratrace amounts. During World War I, both sides used "arsenic-based sneezing and vomiting agents…to force enemy soldiers to remove their gas masks before firing mustard or phosgene at them in a second salvo." It has been used as a pharmaceutical agent since antiquity, including for the treatment of syphilis before the development of antibiotics. Arsenic is also a component of melarsoprol, a medicinal drug used in the treatment of human African trypanosomiasis or sleeping sickness. In 2003, arsenic trioxide (under the trade name Trisenox) was re-introduced for the treatment of acute promyelocytic leukaemia, a cancer of the blood and bone marrow. Arsenic in drinking water, which causes lung and bladder cancer, has been associated with a reduction in breast cancer mortality rates.
Metallic antimony is relatively non-toxic, but most antimony compounds are poisonous.
Two antimony compounds, sodium stibogluconate and stibophen, are used as antiparasitical drugs.
Elemental tellurium is not considered particularly toxic; two grams of sodium tellurate, if administered, can be lethal. People exposed to small amounts of airborne tellurium exude a foul and persistent garlic-like odour. Tellurium dioxide has been used to treat seborrhoeic dermatitis; other tellurium compounds were used as antimicrobial agents before the development of antibiotics. In the future, such compounds may need to be substituted for antibiotics that have become ineffective due to bacterial resistance.
Of the elements less often recognised as metalloids, beryllium and lead are noted for their toxicity; lead arsenate has been extensively used as an insecticide. Sulfur is one of the oldest of the fungicides and pesticides. Phosphorus, sulfur, zinc, selenium, and iodine are essential nutrients, and aluminium, tin, and lead may be. Sulfur, gallium, selenium, iodine, and bismuth have medicinal applications. Sulfur is a constituent of sulfonamide drugs, still widely used for conditions such as acne and urinary tract infections. Gallium nitrate is used to treat the side effects of cancer; gallium citrate, a radiopharmaceutical, facilitates imaging of inflamed body areas. Selenium sulfide is used in medicinal shampoos and to treat skin infections such as tinea versicolor. Iodine is used as a disinfectant in various forms. Bismuth is an ingredient in some antibacterials.
Catalysts
Boron trifluoride and trichloride are used as homogeneous catalysts in organic synthesis and electronics; the tribromide is used in the manufacture of diborane. Non-toxic boron ligands could replace toxic phosphorus ligands in some transition metal catalysts. Silica sulfuric acid (SiO2OSO3H) is used in organic reactions. Germanium dioxide is sometimes used as a catalyst in the production of PET plastic for containers; cheaper antimony compounds, such as the trioxide or triacetate, are more commonly employed for the same purpose despite concerns about antimony contamination of food and drinks. Arsenic trioxide has been used in the production of natural gas, to boost the removal of carbon dioxide, as have selenous acid and tellurous acid. Selenium acts as a catalyst in some microorganisms. Tellurium, its dioxide, and its tetrachloride are strong catalysts for air oxidation of carbon above 500 °C. Graphite oxide can be used as a catalyst in the synthesis of imines and their derivatives. Activated carbon and alumina have been used as catalysts for the removal of sulfur contaminants from natural gas. Titanium doped aluminium has been suggested as a substitute for noble metal catalysts used in the production of industrial chemicals.
Flame retardants
Compounds of boron, silicon, arsenic, and antimony have been used as flame retardants. Boron, in the form of borax, has been used as a textile flame retardant since at least the 18th century. Silicon compounds such as silicones, silanes, silsesquioxane, silica, and silicates, some of which were developed as alternatives to more toxic halogenated products, can considerably improve the flame retardancy of plastic materials.
Arsenic compounds such as sodium arsenite or sodium arsenate are effective flame retardants for wood but have been less frequently used due to their toxicity. Antimony trioxide is a flame retardant. Aluminium hydroxide has been used as a wood-fibre, rubber, plastic, and textile flame retardant since the 1890s. Apart from aluminium hydroxide, use of phosphorus based flame-retardants – in the form of, for example, organophosphates – now exceeds that of any of the other main retardant types. These employ boron, antimony, or halogenated hydrocarbon compounds.
Glass formation
The oxides B2O3, SiO2, GeO2, As2O3, and Sb2O3 readily form glasses. TeO2 forms a glass but this requires a "heroic quench rate" or the addition of an impurity; otherwise the crystalline form results. These compounds are used in chemical, domestic, and industrial glassware and optics. Boron trioxide is used as a glass fibre additive, and is also a component of borosilicate glass, widely used for laboratory glassware and domestic ovenware for its low thermal expansion. Most ordinary glassware is made from silicon dioxide. Germanium dioxide is used as a glass fibre additive, as well as in infrared optical systems. Arsenic trioxide is used in the glass industry as a decolourizing and fining agent (for the removal of bubbles), as is antimony trioxide. Tellurium dioxide finds application in laser and nonlinear optics.
Amorphous metallic glasses are generally most easily prepared if one of the components is a metalloid or "near metalloid" such as boron, carbon, silicon, phosphorus or germanium. Aside from thin films deposited at very low temperatures, the first known metallic glass was an alloy of composition Au75Si25 reported in 1960. A metallic glass having a strength and toughness not previously seen, of composition Pd82.5P6Si9.5Ge2, was reported in 2011.
Phosphorus, selenium, and lead, which are less often recognised as metalloids, are also used in glasses. Phosphate glass has a substrate of phosphorus pentoxide (P2O5), rather than the silica (SiO2) of conventional silicate glasses. It is used, for example, to make sodium lamps. Selenium compounds can be used both as decolourising agents and to add a red colour to glass. Decorative glassware made of traditional lead glass contains at least 30% lead(II) oxide (PbO); lead glass used for radiation shielding may have up to 65% PbO. Lead-based glasses have also been extensively used in electronic components, enamelling, sealing and glazing materials, and solar cells. Bismuth based oxide glasses have emerged as a less toxic replacement for lead in many of these applications.
Optical storage and optoelectronics
Varying compositions of GeSbTe ("GST alloys") and Ag- and In- doped Sb2Te ("AIST alloys"), being examples of phase-change materials, are widely used in rewritable optical discs and phase-change memory devices. By applying heat, they can be switched between amorphous (glassy) and crystalline states. The change in optical and electrical properties can be used for information storage purposes. Future applications for GeSbTe may include, "ultrafast, entirely solid-state displays with nanometre-scale pixels, semi-transparent 'smart' glasses, 'smart' contact lenses, and artificial retina devices."
Pyrotechnics
The recognised metalloids have either pyrotechnic applications or associated properties. Boron and silicon are commonly encountered; they act somewhat like metal fuels. Boron is used in pyrotechnic initiator compositions (for igniting other hard-to-start compositions), and in delay compositions that burn at a constant rate. Boron carbide has been identified as a possible replacement for more toxic barium or hexachloroethane mixtures in smoke munitions, signal flares, and fireworks. Silicon, like boron, is a component of initiator and delay mixtures. Doped germanium can act as a variable speed thermite fuel. Arsenic trisulfide As2S3 was used in old naval signal lights; in fireworks to make white stars; in yellow smoke screen mixtures; and in initiator compositions. Antimony trisulfide Sb2S3 is found in white-light fireworks and in flash and sound mixtures. Tellurium has been used in delay mixtures and in blasting cap initiator compositions.
Carbon, aluminium, phosphorus, and selenium continue the theme. Carbon, in black powder, is a constituent of fireworks rocket propellants, bursting charges, and effects mixtures, and military delay fuses and igniters. Aluminium is a common pyrotechnic ingredient, and is widely employed for its capacity to generate light and heat, including in thermite mixtures. Phosphorus can be found in smoke and incendiary munitions, paper caps used in toy guns, and party poppers. Selenium has been used in the same way as tellurium.
Semiconductors and electronics
All the elements commonly recognised as metalloids (or their compounds) have been used in the semiconductor or solid-state electronic industries.
Some properties of boron have limited its use as a semiconductor. It has a high melting point, single crystals are relatively hard to obtain, and introducing and retaining controlled impurities is difficult.
Silicon is the leading commercial semiconductor; it forms the basis of modern electronics (including standard solar cells) and information and communication technologies. This was despite the study of semiconductors, early in the 20th century, having been regarded as the "physics of dirt" and not deserving of close attention.
Germanium has largely been replaced by silicon in semiconducting devices, being cheaper, more resilient at higher operating temperatures, and easier to work during the microelectronic fabrication process. Germanium is still a constituent of semiconducting silicon-germanium "alloys" and these have been growing in use, particularly for wireless communication devices; such alloys exploit the higher carrier mobility of germanium. The synthesis of gram-scale quantities of semiconducting germanane was reported in 2013. This consists of one-atom thick sheets of hydrogen-terminated germanium atoms, analogous to graphane. It conducts electrons more than ten times faster than silicon and five times faster than germanium, and is thought to have potential for optoelectronic and sensing applications. The development of a germanium-wire based anode that more than doubles the capacity of lithium-ion batteries was reported in 2014. In the same year, Lee et al. reported that defect-free crystals of graphene large enough to have electronic uses could be grown on, and removed from, a germanium substrate.
Arsenic and antimony are not semiconductors in their standard states. Both form type III-V semiconductors (such as GaAs, AlSb or GaInAsSb) in which the average number of valence electrons per atom is the same as that of Group 14 elements, but they have direct band gaps. These compounds are preferred for optical applications. Antimony nanocrystals may enable lithium-ion batteries to be replaced by more powerful sodium ion batteries.
Tellurium, which is a semiconductor in its standard state, is used mainly as a component in type II/VI semiconducting-chalcogenides; these have applications in electro-optics and electronics. Cadmium telluride (CdTe) is used in solar modules for its high conversion efficiency, low manufacturing costs, and large band gap of 1.44 eV, letting it absorb a wide range of wavelengths. Bismuth telluride (Bi2Te3), alloyed with selenium and antimony, is a component of thermoelectric devices used for refrigeration or portable power generation.
Five metalloids – boron, silicon, germanium, arsenic, and antimony – can be found in cell phones (along with at least 39 other metals and nonmetals). Tellurium is expected to find such use. Of the less often recognised metalloids, phosphorus, gallium (in particular) and selenium have semiconductor applications. Phosphorus is used in trace amounts as a dopant for n-type semiconductors. The commercial use of gallium compounds is dominated by semiconductor applications – in integrated circuits, cell phones, laser diodes, light-emitting diodes, photodetectors, and solar cells. Selenium is used in the production of solar cells and in high-energy surge protectors.
Boron, silicon, germanium, antimony, and tellurium, as well as heavier metals and metalloids such as Sm, Hg, Tl, Pb, Bi, and Se, can be found in topological insulators. These are alloys or compounds which, at ultracold temperatures or room temperature (depending on their composition), are metallic conductors on their surfaces but insulators through their interiors. Cadmium arsenide Cd3As2, at about 1 K, is a Dirac-semimetal – a bulk electronic analogue of graphene – in which electrons travel effectively as massless particles. These two classes of material are thought to have potential quantum computing applications.
Nomenclature and history
Derivation and other names
Several names are sometimes used synonymously although some of these have other meanings that are not necessarily interchangeable: amphoteric element, boundary element, half-way element, near metal, meta-metal, semiconductor, semimetal and submetal. "Amphoteric element" is sometimes used more broadly to include transition metals capable of forming oxyanions, such as chromium and manganese. "Meta-metal" is sometimes used instead to refer to certain metals (Be, Zn, Cd, Hg, In, Tl, β-Sn, Pb) located just to the left of the metalloids on standard periodic tables. These metals tend to have distorted crystalline structures, electrical conductivity values at the lower end of those of metals, and amphoteric (weakly basic) oxides. The names amphoteric element and semiconductor are problematic as some elements referred to as metalloids do not show marked amphoteric behaviour (bismuth, for example) or semiconductivity (polonium) in their most stable forms.
Origin and usage
The origin and usage of the term metalloid is convoluted. The "Manual of Metalloids" published in 1864 divided all elements into either metals or metalloids. Earlier usage in mineralogy, to describe a mineral having a metallic appearance, can be sourced to as early as 1800. Since the mid-20th century it has been used to refer to intermediate or borderline chemical elements. The International Union of Pure and Applied Chemistry (IUPAC) previously recommended abandoning the term metalloid, and suggested using the term semimetal instead. Use of this latter term has more recently been discouraged by Atkins et al. as it has a more common meaning that refers to the electronic band structure of a substance rather than the overall classification of an element. The most recent IUPAC publications on nomenclature and terminology do not include any recommendations on the usage of the terms metalloid or semimetal.
Elements commonly recognised as metalloids
Properties noted in this section refer to the elements in their most thermodynamically stable forms under ambient conditions.
Boron
Pure boron is a shiny, silver-grey crystalline solid. It is less dense than aluminium (2.34 vs. 2.70 g/cm3), and is hard and brittle. It is barely reactive under normal conditions, except for attack by fluorine, and has a melting point of 2076 °C (cf. steel ~1370 °C). Boron is a semiconductor; its room temperature electrical conductivity is 1.5 × 10−6 S•cm−1 (about 200 times less than that of tap water) and it has a band gap of about 1.56 eV. Mendeleev commented that, "Boron appears in a free state in several forms which are intermediate between the metals and the nonmetals."
The structural chemistry of boron is dominated by its small atomic size, and relatively high ionization energy. With only three valence electrons per boron atom, simple covalent bonding cannot fulfil the octet rule. Metallic bonding is the usual result among the heavier congeners of boron but this generally requires low ionization energies. Instead, because of its small size and high ionization energies, the basic structural unit of boron (and nearly all of its allotropes) is the icosahedral B12 cluster. Of the 36 electrons associated with 12 boron atoms, 26 reside in 13 delocalized molecular orbitals; the other 10 electrons are used to form two- and three-centre covalent bonds between icosahedra. The same motif can be seen, as are deltahedral variants or fragments, in metal borides and hydride derivatives, and in some halides.
The bonding in boron has been described as being characteristic of behaviour intermediate between metals and nonmetallic covalent network solids (such as diamond). The energy required to transform B, C, N, Si, and P from nonmetallic to metallic states has been estimated as 30, 100, 240, 33, and 50 kJ/mol, respectively. This indicates the proximity of boron to the metal-nonmetal borderline.
Most of the chemistry of boron is nonmetallic in nature. Unlike its heavier congeners, it is not known to form a simple B3+ or hydrated [B(H2O)4]3+ cation. The small size of the boron atom enables the preparation of many interstitial alloy-type borides. Analogies between boron and transition metals have been noted in the formation of complexes, and adducts (for example, BH3 + CO →BH3CO and, similarly, Fe(CO)4 + CO →Fe(CO)5), as well as in the geometric and electronic structures of cluster species such as [B6H6]2− and [Ru6(CO)18]2−. The aqueous chemistry of boron is characterised by the formation of many different polyborate anions. Given its high charge-to-size ratio, boron bonds covalently in nearly all of its compounds; the exceptions are the borides as these include, depending on their composition, covalent, ionic, and metallic bonding components. Simple binary compounds, such as boron trichloride are Lewis acids as the formation of three covalent bonds leaves a hole in the octet which can be filled by an electron-pair donated by a Lewis base. Boron has a strong affinity for oxygen and a duly extensive borate chemistry. The oxide B2O3 is polymeric in structure, weakly acidic, and a glass former. Organometallic compounds of boron have been known since the 19th century (see organoboron chemistry).
Silicon
Silicon is a crystalline solid with a blue-grey metallic lustre. Like boron, it is less dense (at 2.33 g/cm3) than aluminium, and is hard and brittle. It is a relatively unreactive element. According to Rochow, the massive crystalline form (especially if pure) is "remarkably inert to all acids, including hydrofluoric". Less pure silicon, and the powdered form, are variously susceptible to attack by strong or heated acids, as well as by steam and fluorine. Silicon dissolves in hot aqueous alkalis with the evolution of hydrogen, as do metals such as beryllium, aluminium, zinc, gallium or indium. It melts at 1414 °C. Silicon is a semiconductor with an electrical conductivity of 10−4 S•cm−1 and a band gap of about 1.11 eV. When it melts, silicon becomes a reasonable metal with an electrical conductivity of 1.0–1.3 × 104 S•cm−1, similar to that of liquid mercury.
The chemistry of silicon is generally nonmetallic (covalent) in nature. It is not known to form a cation. Silicon can form alloys with metals such as iron and copper. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. The high strength of the silicon–oxygen bond dominates the chemical behaviour of silicon. Polymeric silicates, built up by tetrahedral SiO4 units sharing their oxygen atoms, are the most abundant and important compounds of silicon. The polymeric borates, comprising linked trigonal and tetrahedral BO3 or BO4 units, are built on similar structural principles. The oxide SiO2 is polymeric in structure, weakly acidic, and a glass former. Traditional organometallic chemistry includes the carbon compounds of silicon (see organosilicon).
Germanium
Germanium is a shiny grey-white solid. It has a density of 5.323 g/cm3 and is hard and brittle. It is mostly unreactive at room temperature but is slowly attacked by hot concentrated sulfuric or nitric acid. Germanium also reacts with molten caustic soda to yield sodium germanate Na2GeO3 and hydrogen gas. It melts at 938 °C. Germanium is a semiconductor with an electrical conductivity of around 2 × 10−2 S•cm−1 and a band gap of 0.67 eV. Liquid germanium is a metallic conductor, with an electrical conductivity similar to that of liquid mercury.
Most of the chemistry of germanium is characteristic of a nonmetal. Whether or not germanium forms a cation is unclear, aside from the reported existence of the Ge2+ ion in a few esoteric compounds. It can form alloys with metals such as aluminium and gold. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Germanium generally forms tetravalent (IV) compounds, and it can also form less stable divalent (II) compounds, in which it behaves more like a metal. Germanium analogues of all of the major types of silicates have been prepared. The metallic character of germanium is also suggested by the formation of various oxoacid salts. A phosphate [(HPO4)2Ge·H2O] and highly stable trifluoroacetate Ge(OCOCF3)4 have been described, as have Ge2(SO4)2, Ge(ClO4)4 and GeH2(C2O4)3. The oxide GeO2 is polymeric, amphoteric, and a glass former. The dioxide is soluble in acidic solutions (the monoxide GeO, is even more so), and this is sometimes used to classify germanium as a metal. Up to the 1930s germanium was considered to be a poorly conducting metal; it has occasionally been classified as a metal by later writers. As with all the elements commonly recognised as metalloids, germanium has an established organometallic chemistry (see Organogermanium chemistry).
Arsenic
Arsenic is a grey, metallic looking solid. It has a density of 5.727 g/cm3 and is brittle, and moderately hard (more than aluminium; less than iron). It is stable in dry air but develops a golden bronze patina in moist air, which blackens on further exposure. Arsenic is attacked by nitric acid and concentrated sulfuric acid. It reacts with fused caustic soda to give the arsenate Na3AsO3 and hydrogen gas. Arsenic sublimes at 615 °C. The vapour is lemon-yellow and smells like garlic. Arsenic only melts under a pressure of 38.6 atm, at 817 °C. It is a semimetal with an electrical conductivity of around 3.9 × 104 S•cm−1 and a band overlap of 0.5 eV. Liquid arsenic is a semiconductor with a band gap of 0.15 eV.
The chemistry of arsenic is predominately nonmetallic. Whether or not arsenic forms a cation is unclear. Its many metal alloys are mostly brittle. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Arsenic generally forms compounds in which it has an oxidation state of +3 or +5. The halides, and the oxides and their derivatives are illustrative examples. In the trivalent state, arsenic shows some incipient metallic properties. The halides are hydrolysed by water but these reactions, particularly those of the chloride, are reversible with the addition of a hydrohalic acid. The oxide is acidic but, as noted below, (weakly) amphoteric. The higher, less stable, pentavalent state has strongly acidic (nonmetallic) properties. Compared to phosphorus, the stronger metallic character of arsenic is indicated by the formation of oxoacid salts such as AsPO4, As2(SO4)3 and arsenic acetate As(CH3COO)3. The oxide As2O3 is polymeric, amphoteric, and a glass former. Arsenic has an extensive organometallic chemistry (see Organoarsenic chemistry).
Antimony
Antimony is a silver-white solid with a blue tint and a brilliant lustre. It has a density of 6.697 g/cm3 and is brittle, and moderately hard (more so than arsenic; less so than iron; about the same as copper). It is stable in air and moisture at room temperature. It is attacked by concentrated nitric acid, yielding the hydrated pentoxide Sb2O5. Aqua regia gives the pentachloride SbCl5 and hot concentrated sulfuric acid results in the sulfate Sb2(SO4)3. It is not affected by molten alkali. Antimony is capable of displacing hydrogen from water, when heated: 2 Sb + 3 H2O → Sb2O3 + 3 H2. It melts at 631 °C. Antimony is a semimetal with an electrical conductivity of around 3.1 × 104 S•cm−1 and a band overlap of 0.16 eV. Liquid antimony is a metallic conductor with an electrical conductivity of around 5.3 × 104 S•cm−1.
Most of the chemistry of antimony is characteristic of a nonmetal. Antimony has some definite cationic chemistry, SbO+ and Sb(OH)2+ being present in acidic aqueous solution; the compound Sb8(GaCl4)2, which contains the homopolycation, Sb82+, was prepared in 2004. It can form alloys with one or more metals such as aluminium, iron, nickel, copper, zinc, tin, lead, and bismuth. Antimony has fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Like arsenic, antimony generally forms compounds in which it has an oxidation state of +3 or +5. The halides, and the oxides and their derivatives are illustrative examples. The +5 state is less stable than the +3, but relatively easier to attain than with arsenic. This is explained by the poor shielding afforded the arsenic nucleus by its 3d10 electrons. In comparison, the tendency of antimony (being a heavier atom) to oxidize more easily partially offsets the effect of its 4d10 shell. Tripositive antimony is amphoteric; pentapositive antimony is (predominately) acidic. Consistent with an increase in metallic character down group 15, antimony forms salts including an acetate Sb(CH3CO2)3, phosphate SbPO4, sulfate Sb2(SO4)3 and perchlorate Sb(ClO4)3. The otherwise acidic pentoxide Sb2O5 shows some basic (metallic) behaviour in that it can be dissolved in very acidic solutions, with the formation of the oxycation SbO2+. The oxide Sb2O3 is polymeric, amphoteric, and a glass former. Antimony has an extensive organometallic chemistry (see Organoantimony chemistry).
Tellurium
Tellurium is a silvery-white shiny solid. It has a density of 6.24 g/cm3, is brittle, and is the softest of the commonly recognised metalloids, being marginally harder than sulfur. Large pieces of tellurium are stable in air. The finely powdered form is oxidized by air in the presence of moisture. Tellurium reacts with boiling water, or when freshly precipitated even at 50 °C, to give the dioxide and hydrogen: Te + 2 H2O → TeO2 + 2 H2. It reacts (to varying degrees) with nitric, sulfuric, and hydrochloric acids to give compounds such as the sulfoxide TeSO3 or tellurous acid H2TeO3, the basic nitrate (Te2O4H)+(NO3)−, or the oxide sulfate Te2O3(SO4). It dissolves in boiling alkalis, to give the tellurite and telluride: 3 Te + 6 KOH = K2TeO3 + 2 K2Te + 3 H2O, a reaction that proceeds with increasing temperature and reverses with decreasing temperature.
At higher temperatures tellurium is sufficiently plastic to extrude. It melts at 449.51 °C. Crystalline tellurium has a structure consisting of parallel infinite spiral chains. The bonding between adjacent atoms in a chain is covalent, but there is evidence of a weak metallic interaction between the neighbouring atoms of different chains. Tellurium is a semiconductor with an electrical conductivity of around 1.0 S•cm−1 and a band gap of 0.32 to 0.38 eV. Liquid tellurium is a semiconductor, with an electrical conductivity, on melting, of around 1.9 × 103 S•cm−1. Superheated liquid tellurium is a metallic conductor.
Most of the chemistry of tellurium is characteristic of a nonmetal.
It shows some cationic behaviour. The dioxide dissolves in acid to yield the trihydroxotellurium(IV) Te(OH)3+ ion; the red Te42+ and yellow-orange Te62+ ions form when tellurium is oxidized in fluorosulfuric acid (HSO3F), or liquid sulfur dioxide (SO2), respectively. It can form alloys with aluminium, silver, and tin. Tellurium shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Tellurium generally forms compounds in which it has an oxidation state of −2, +4 or +6. The +4 state is the most stable. Tellurides of composition XxTey are easily formed with most other elements and represent the most common tellurium minerals. Nonstoichiometry is pervasive, especially with transition metals. Many tellurides can be regarded as metallic alloys. The increase in metallic character evident in tellurium, as compared to the lighter chalcogens, is further reflected in the reported formation of various other oxyacid salts, such as a basic selenate 2TeO2·SeO3 and an analogous perchlorate and periodate 2TeO2·HXO4. Tellurium forms a polymeric, amphoteric, glass-forming oxide TeO2. It is a "conditional" glass-forming oxide – it forms a glass with a very small amount of additive. Tellurium has an extensive organometallic chemistry (see Organotellurium chemistry).
Elements less commonly recognised as metalloids
Carbon
Carbon is ordinarily classified as a nonmetal but has some metallic properties and is occasionally classified as a metalloid. Hexagonal graphitic carbon (graphite) is the most thermodynamically stable allotrope of carbon under ambient conditions. It has a lustrous appearance and is a fairly good electrical conductor. Graphite has a layered structure. Each layer consists of carbon atoms bonded to three other carbon atoms in a hexagonal lattice arrangement. The layers are stacked together and held loosely by van der Waals forces and delocalized valence electrons.
As in a metal, the conductivity of graphite in the direction of its planes decreases as the temperature is raised; graphite has the electronic band structure of a semimetal. The allotropes of carbon, including graphite, can accept foreign atoms or compounds into their structures via substitution, intercalation, or doping. The resulting materials are sometimes referred to as "carbon alloys". Carbon can form ionic salts, including a hydrogen sulfate, perchlorate, and nitrate (C24+X−·2HX, where X = HSO4 or ClO4; and C24+NO3−·3HNO3). In organic chemistry, carbon can form complex cations, termed carbocations, in which the positive charge is on the carbon atom; examples are CH3+ and C(C6H5)3+, and their derivatives.
Graphite is an established solid lubricant and behaves as a semiconductor in the direction perpendicular to its planes. Most of its chemistry is nonmetallic; it has a relatively high ionization energy and, compared to most metals, a relatively high electronegativity. Carbon can form anions such as C4− (methanide), C22− (acetylide), and C34− (sesquicarbide or allylenide) in compounds with metals of main groups 1–3, and with the lanthanides and actinides. Its oxide CO2 forms carbonic acid H2CO3 in water.
Aluminium
Aluminium is ordinarily classified as a metal. It is lustrous, malleable and ductile, and has high electrical and thermal conductivity. Like most metals it has a close-packed crystalline structure, and forms a cation in aqueous solution.
It has some properties that are unusual for a metal; taken together, these are sometimes used as a basis to classify aluminium as a metalloid. Its crystalline structure shows some evidence of directional bonding. Aluminium bonds covalently in most compounds. The oxide Al2O3 is amphoteric and a conditional glass-former. Aluminium can form anionic aluminates, such behaviour being considered nonmetallic in character.
Classifying aluminium as a metalloid has been disputed given its many metallic properties. It is therefore, arguably, an exception to the mnemonic that elements adjacent to the metal–nonmetal dividing line are metalloids.
Stott labels aluminium as a weak metal. It has the physical properties of a metal but some of the chemical properties of a nonmetal. Steele notes the paradoxical chemical behaviour of aluminium: "It resembles a weak metal in its amphoteric oxide and in the covalent character of many of its compounds ... Yet it is a highly electropositive metal ... [with] a high negative electrode potential". Moody says that, "aluminium is on the 'diagonal borderland' between metals and non-metals in the chemical sense."
Selenium
Selenium shows borderline metalloid or nonmetal behaviour.
Its most stable form, the grey trigonal allotrope, is sometimes called "metallic" selenium because its electrical conductivity is several orders of magnitude greater than that of the red monoclinic form. The metallic character of selenium is further shown by its lustre and its crystalline structure, which is thought to include weakly "metallic" interchain bonding. Selenium can be drawn into thin threads when molten and viscous. It shows reluctance to acquire "the high positive oxidation numbers characteristic of nonmetals". It can form cyclic polycations (such as Se82+) when dissolved in oleums (an attribute it shares with sulfur and tellurium), and a hydrolysed cationic salt in the form of trihydroxoselenium(IV) perchlorate [Se(OH)3]+·ClO4−.
The nonmetallic character of selenium is shown by its brittleness and the low electrical conductivity (~10−9 to 10−12 S·cm−1) of its highly purified form. This is comparable to or less than that of bromine (7.95 × 10−12 S·cm−1), a nonmetal. Selenium has the electronic band structure of a semiconductor and retains its semiconducting properties in liquid form. It has a relatively high electronegativity (2.55, revised Pauling scale). Its reaction chemistry is mainly that of its nonmetallic anionic forms Se2−, SeO32− and SeO42−.
Selenium is commonly described as a metalloid in the environmental chemistry literature. It moves through the aquatic environment similarly to arsenic and antimony; its water-soluble salts, in higher concentrations, have a similar toxicological profile to that of arsenic.
Polonium
Polonium is "distinctly metallic" in some ways. Both of its allotropic forms are metallic conductors. It is soluble in acids, forming the rose-coloured Po2+ cation and displacing hydrogen: Po + 2 H+ → Po2+ + H2. Many polonium salts are known. The oxide PoO2 is predominantly basic in nature. Polonium is a reluctant oxidizing agent, unlike its lightest congener oxygen: highly reducing conditions are required for the formation of the Po2− anion in aqueous solution.
Whether polonium is ductile or brittle is unclear. It is predicted to be ductile based on its calculated elastic constants. It has a simple cubic crystalline structure. Such a structure has few slip systems and "leads to very low ductility and hence low fracture resistance".
Polonium shows nonmetallic character in its halides, and by the existence of polonides. The halides have properties generally characteristic of nonmetal halides (being volatile, easily hydrolyzed, and soluble in organic solvents). Many metal polonides, containing the Po2− anion and obtained by heating the elements together at 500–1,000 °C, are also known.
Astatine
As a halogen, astatine tends to be classified as a nonmetal. It has some marked metallic properties and is sometimes instead classified as either a metalloid or (less often) as a metal. Immediately following its production in 1940, early investigators considered it a metal. In 1949 it was called the most noble (difficult to reduce) nonmetal as well as being a relatively noble (difficult to oxidize) metal. In 1950 astatine was described as a halogen and (therefore) a reactive nonmetal. In 2013, on the basis of relativistic modelling, astatine was predicted to be a monatomic metal, with a face-centred cubic crystalline structure.
Several authors have commented on the metallic nature of some of the properties of astatine. Since iodine is a semiconductor in the direction of its planes, and since the halogens become more metallic with increasing atomic number, it has been presumed that astatine would be a metal if it could form a condensed phase. Astatine may be metallic in the liquid state, on the basis that elements with an enthalpy of vaporization (∆Hvap) greater than ~42 kJ/mol are metallic when liquid. Such elements include boron, silicon, germanium, antimony, selenium, and tellurium. Estimated values for ∆Hvap of diatomic astatine are 50 kJ/mol or higher; diatomic iodine, with a ∆Hvap of 41.71 kJ/mol, falls just short of the threshold figure.
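To make the threshold comparison concrete, the following minimal sketch (illustrative only, not drawn from the cited sources) applies the ~42 kJ/mol rule of thumb to the two figures quoted above, taking the astatine estimate at its low end:

```python
# Rule of thumb quoted above: an element whose enthalpy of vaporization
# exceeds ~42 kJ/mol is expected to be metallic in the liquid state.
THRESHOLD_KJ_PER_MOL = 42.0

# Values from the text; the astatine figure is the low end of the estimates.
enthalpy_of_vaporization = {
    "iodine (I2)": 41.71,
    "astatine (At2, estimated)": 50.0,
}

for species, h_vap in enthalpy_of_vaporization.items():
    verdict = "metallic" if h_vap > THRESHOLD_KJ_PER_MOL else "non-metallic"
    print(f"{species}: {h_vap} kJ/mol -> predicted {verdict} when liquid")
```

On these numbers iodine falls just below the cut-off and astatine just above it, matching the prose.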
"Like typical metals, it [astatine] is precipitated by hydrogen sulfide even from strongly acid solutions and is displaced in a free form from sulfate solutions; it is deposited on the cathode on electrolysis." Further indications of a tendency for astatine to behave like a (heavy) metal are: "... the formation of pseudohalide compounds ... complexes of astatine cations ... complex anions of trivalent astatine ... as well as complexes with a variety of organic solvents". It has also been argued that astatine demonstrates cationic behaviour, by way of stable At+ and AtO+ forms, in strongly acidic aqueous solutions.
Some of astatine's reported properties are nonmetallic. It has been extrapolated to have the narrow liquid range ordinarily associated with nonmetals (mp 302 °C; bp 337 °C), although experimental indications suggest a lower boiling point of about 230±3 °C. Batsanov gives a calculated band gap energy for astatine of 0.7 eV; this is consistent with nonmetals (in physics) having separated valence and conduction bands and thereby being either semiconductors or insulators. The chemistry of astatine in aqueous solution is mainly characterised by the formation of various anionic species. Most of its known compounds resemble those of iodine, which is a halogen and a nonmetal. Such compounds include astatides (XAt), astatates (XAtO3), and monovalent interhalogen compounds.
On the basis of detailed comparative studies of the known and interpolated properties of 72 elements, Restrepo et al. reported that astatine appeared to be more polonium-like than halogen-like.
Related concepts
Near metalloids
In the periodic table, some of the elements adjacent to the commonly recognised metalloids, although usually classified as either metals or nonmetals, are occasionally referred to as near-metalloids or noted for their metalloidal character. To the left of the metal–nonmetal dividing line, such elements include gallium, tin and bismuth. They show unusual packing structures, marked covalent chemistry (molecular or polymeric), and amphoterism. To the right of the dividing line are carbon, phosphorus, selenium and iodine. They exhibit metallic lustre, semiconducting properties and bonding or valence bands with delocalized character. This applies to their most thermodynamically stable forms under ambient conditions: carbon as graphite; phosphorus as black phosphorus; and selenium as grey selenium.
Allotropes
Different crystalline forms of an element are called allotropes. Some allotropes, particularly those of elements located (in periodic table terms) alongside or near the notional dividing line between metals and nonmetals, exhibit more pronounced metallic, metalloidal or nonmetallic behaviour than others. The existence of such allotropes can complicate the classification of the elements involved.
Tin, for example, has two allotropes: tetragonal "white" β-tin and cubic "grey" α-tin. White tin is a very shiny, ductile and malleable metal. It is the stable form at or above room temperature and has an electrical conductivity of 9.17 × 104 S·cm−1 (~1/6th that of copper). Grey tin usually has the appearance of a grey micro-crystalline powder, and can also be prepared in brittle semi-lustrous crystalline or polycrystalline forms. It is the stable form below 13.2 °C and has an electrical conductivity of (2–5) × 102 S·cm−1 (~1/250th that of white tin). Grey tin has the same crystalline structure as diamond. It behaves as a semiconductor (as if it had a band gap of 0.08 eV), but has the electronic band structure of a semimetal. It has been referred to as either a very poor metal, a metalloid, a nonmetal or a near metalloid.
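As a check on the bracketed comparisons, the quoted conductivities can be worked through directly. The copper figure (about 6 × 105 S·cm−1) is a standard handbook value assumed here rather than given in the text, and the grey-tin range is represented by its midpoint:

$$\frac{\sigma_{\beta\text{-Sn}}}{\sigma_{\text{Cu}}} \approx \frac{9.17 \times 10^{4}}{6 \times 10^{5}} \approx \frac{1}{6.5}, \qquad \frac{\sigma_{\alpha\text{-Sn}}}{\sigma_{\beta\text{-Sn}}} \approx \frac{3.5 \times 10^{2}}{9.17 \times 10^{4}} \approx \frac{1}{260},$$

consistent with the ~1/6th and ~1/250th fractions given above.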
The diamond allotrope of carbon is clearly nonmetallic, being translucent and having a low electrical conductivity of 10−14 to 10−16 S·cm−1. Graphite has an electrical conductivity of 3 × 104 S·cm−1, a figure more characteristic of a metal. Phosphorus, sulfur, arsenic, selenium, antimony, and bismuth also have less stable allotropes that display different behaviours.
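Put as a ratio of the two quoted figures, the gap between the carbon allotropes spans roughly eighteen to twenty orders of magnitude:

$$\frac{\sigma_{\text{graphite}}}{\sigma_{\text{diamond}}} \approx \frac{3 \times 10^{4}}{10^{-14} \text{ to } 10^{-16}} \approx 3 \times 10^{18} \text{ to } 3 \times 10^{20}.$$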
Abundance, extraction, and cost
Abundance
The table gives crustal abundances of the elements commonly to rarely recognised as metalloids. Some other elements are included for comparison: oxygen and xenon (the most and least abundant elements with stable isotopes); iron and the coinage metals copper, silver, and gold; and rhenium, the least abundant stable metal (aluminium is normally the most abundant metal). Various abundance estimates have been published; these often disagree to some extent.
Extraction
The recognised metalloids can be obtained by chemical reduction of either their oxides or their sulfides. Simpler or more complex extraction methods may be employed depending on the starting form and economic factors. Boron is routinely obtained by reducing the trioxide with magnesium: B2O3 + 3 Mg → 2 B + 3 MgO; after secondary processing, the resulting brown powder has a purity of up to 97%. Boron of higher purity (> 99%) is prepared by heating volatile boron compounds, such as BCl3 or BBr3, either in a hydrogen atmosphere (2 BX3 + 3 H2 → 2 B + 6 HX) or to the point of thermal decomposition. Silicon and germanium are obtained from their oxides by heating the oxide with carbon or hydrogen: SiO2 + C → Si + CO2; GeO2 + 2 H2 → Ge + 2 H2O. Arsenic is isolated from its pyrite (FeAsS) or arsenical pyrite (FeAs2) by heating; alternatively, it can be obtained from its oxide by reduction with carbon: 2 As2O3 + 3 C → 4 As + 3 CO2. Antimony is derived from its sulfide by reduction with iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS. Tellurium is prepared from its oxide by dissolving it in aqueous NaOH, yielding tellurite, then by electrolytic reduction: TeO2 + 2 NaOH → Na2TeO3 + H2O; Na2TeO3 + H2O → Te + 2 NaOH + O2. Another option is reduction of the oxide by roasting with carbon: TeO2 + C → Te + CO2.
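Reductions like those above can be sanity-checked mechanically. The sketch below is illustrative only (not drawn from the cited sources); its simplified parser handles formulas such as As2O3 but not parenthesised groups, and it verifies that each side of a reaction carries the same atom counts:

```python
import re
from collections import Counter

def parse(formula: str, coeff: int = 1) -> Counter:
    """Count the atoms in a simple formula such as 'As2O3'."""
    counts = Counter()
    for symbol, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += coeff * (int(n) if n else 1)
    return counts

def side(terms):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        total += parse(formula, coeff)
    return total

# 2 As2O3 + 3 C -> 4 As + 3 CO2
assert side([(2, "As2O3"), (3, "C")]) == side([(4, "As"), (3, "CO2")])

# Sb2S3 + 3 Fe -> 2 Sb + 3 FeS
assert side([(1, "Sb2S3"), (3, "Fe")]) == side([(2, "Sb"), (3, "FeS")])

print("both reductions balance")
```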
Production methods for the elements less frequently recognised as metalloids involve natural processing, electrolytic or chemical reduction, or irradiation. Carbon (as graphite) occurs naturally and is extracted by crushing the parent rock and floating the lighter graphite to the surface. Aluminium is extracted by dissolving its oxide Al2O3 in molten cryolite Na3AlF6 and then by high-temperature electrolytic reduction. Selenium is produced by roasting the coinage metal selenides X2Se (X = Cu, Ag, Au) with soda ash to give the selenite: X2Se + O2 + Na2CO3 → Na2SeO3 + 2 X + CO2; the selenite is neutralized by sulfuric acid H2SO4 to give selenous acid H2SeO3; this is reduced by bubbling with SO2 to yield elemental selenium. Polonium and astatine are produced in minute quantities by irradiating bismuth.
Cost
The recognised metalloids and their closer neighbours mostly cost less than silver; only polonium and astatine are more expensive than gold, on account of their significant radioactivity. As of 5 April 2014, prices for small samples (up to 100 g) of silicon, antimony and tellurium, and of graphite, aluminium and selenium, average around one-third the cost of silver (US$1.5 per gram or about $45 an ounce). Boron, germanium, and arsenic samples average about three-and-a-half times the cost of silver. Polonium is available for about $100 per microgram. Zalutsky and Pruszynski estimate a similar cost for producing astatine. Prices for the applicable elements traded as commodities range from around two to three times lower than the sample price (germanium) to nearly three thousand times lower (arsenic).
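The per-gram and per-ounce figures quoted in the parenthesis are mutually consistent, taking a troy ounce as roughly 31.1 g:

$$\$1.5/\mathrm{g} \times 31.1\ \mathrm{g\ per\ troy\ ounce} \approx \$46.7 \approx \$45\ \mathrm{an\ ounce}.$$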
Notes
References
Sources
Addison WE 1964, The Allotropy of the Elements, Oldbourne Press, London
Addison CC & Sowerby DB 1972, Main Group Elements: Groups V and VI, Butterworths, London,
Adler D 1969, 'Half-way Elements: The Technology of Metalloids', book review, Technology Review, vol. 72, no. 1, Oct/Nov, pp. 18–19,
Ahmed MAK, Fjellvåg H & Kjekshus A 2000, 'Synthesis, Structure and Thermal Stability of Tellurium Oxides and Oxide Sulfate Formed from Reactions in Refluxing Sulfuric Acid', Journal of the Chemical Society, Dalton Transactions, no. 24, pp. 4542–49,
Ahmeda E & Rucka M 2011, 'Homo- and heteroatomic polycations of groups 15 and 16. Recent advances in synthesis and isolation using room temperature ionic liquids', Coordination Chemistry Reviews, vol. 255, nos 23–24, pp. 2892–903,
Allen DS & Ordway RJ 1968, Physical Science, 2nd ed., Van Nostrand, Princeton, New Jersey,
Allen PB & Broughton JQ 1987, 'Electrical Conductivity and Electronic Properties of Liquid Silicon', Journal of Physical Chemistry, vol. 91, no. 19, pp. 4964–70,
Alloul H 2010, Introduction to the Physics of Electrons in Solids, Springer-Verlag, Berlin,
Anderson JB, Rapposch MH, Anderson CP & Kostiner E 1980, 'Crystal Structure Refinement of Basic Tellurium Nitrate: A Reformulation as (Te2O4H)+(NO3)−', Monatshefte für Chemie/ Chemical Monthly, vol. 111, no. 4, pp. 789–96,
Antman KH 2001, 'Introduction: The History of Arsenic Trioxide in Cancer Therapy', The Oncologist, vol. 6, suppl. 2, pp. 1–2,
Apseloff G 1999, 'Therapeutic Uses of Gallium Nitrate: Past, Present, and Future', American Journal of Therapeutics, vol. 6, no. 6, pp. 327–39,
Arlman EJ 1939, 'The Complex Compounds P(OH)4.ClO4 and Se(OH)3.ClO4', Recueil des Travaux Chimiques des Pays-Bas, vol. 58, no. 10, pp. 871–74,
Askeland DR, Phulé PP & Wright JW 2011, The Science and Engineering of Materials, 6th ed., Cengage Learning, Stamford, CT,
Asmussen J & Reinhard DK 2002, Diamond Films Handbook, Marcel Dekker, New York,
Atkins P, Overton T, Rourke J, Weller M & Armstrong F 2006, Shriver & Atkins' Inorganic Chemistry, 4th ed., Oxford University Press, Oxford,
Atkins P, Overton T, Rourke J, Weller M & Armstrong F 2010, Shriver & Atkins' Inorganic Chemistry, 5th ed., Oxford University Press, Oxford,
Austen K 2012, 'A Factory for Elements that Barely Exist', New Scientist, 21 Apr, p. 12
Ba LA, Döring M, Jamier V & Jacob C 2010, 'Tellurium: an Element with Great Biological Potency and Potential', Organic & Biomolecular Chemistry, vol. 8, pp. 4203–16,
Bagnall KW 1957, Chemistry of the Rare Radioelements: Polonium-actinium, Butterworths Scientific Publications, London
Bagnall KW 1966, The Chemistry of Selenium, Tellurium and Polonium, Elsevier, Amsterdam
Bagnall KW 1990, 'Compounds of Polonium', in KC Buschbeck & C Keller (eds), Gmelin Handbook of Inorganic and Organometallic Chemistry, 8th ed., Po Polonium, Supplement vol. 1, Springer-Verlag, Berlin, pp. 285–340,
Bailar JC, Moeller T & Kleinberg J 1965, University Chemistry, DC Heath, Boston
Bailar JC & Trotman-Dickenson AF 1973, Comprehensive Inorganic Chemistry, vol. 4, Pergamon, Oxford
Bailar JC, Moeller T, Kleinberg J, Guss CO, Castellion ME & Metz C 1989, Chemistry, 3rd ed., Harcourt Brace Jovanovich, San Diego,
Barfuß H, Böhnlein G, Freunek P, Hofmann R, Hohenstein H, Kreische W, Niedrig H and Reimer A 1981, 'The Electric Quadrupole Interaction of 111Cd in Arsenic Metal and in the System Sb1–xInx and Sb1–xCdx', Hyperfine Interactions, vol. 10, nos 1–4, pp. 967–72,
Barnett EdB & Wilson CL 1959, Inorganic Chemistry: A Text-book for Advanced Students, 2nd ed., Longmans, London
Barrett J 2003, Inorganic Chemistry in Aqueous Solution, The Royal Society of Chemistry, Cambridge,
Barsanov GP & Ginzburg AI 1974, 'Mineral', in AM Prokhorov (ed.), Great Soviet Encyclopedia, 3rd ed., vol. 16, Macmillan, New York, pp. 329–32
Bassett LG, Bunce SC, Carter AE, Clark HM & Hollinger HB 1966, Principles of Chemistry, Prentice-Hall, Englewood Cliffs, New Jersey
Batsanov SS 1971, 'Quantitative Characteristics of Bond Metallicity in Crystals', Journal of Structural Chemistry, vol. 12, no. 5, pp. 809–13,
Baudis U & Fichte R 2012, 'Boron and Boron Alloys', in F Ullmann (ed.), Ullmann's Encyclopedia of Industrial Chemistry, vol. 6, Wiley-VCH, Weinheim, pp. 205–17,
Becker WM, Johnson VA & Nussbaum 1971, 'The Physical Properties of Tellurium', in WC Cooper (ed.), Tellurium, Van Nostrand Reinhold, New York
Belpassi L, Tarantelli F, Sgamellotti A & Quiney HM 2006, 'The Electronic Structure of Alkali Aurides. A Four-Component Dirac−Kohn−Sham study', The Journal of Physical Chemistry A, vol. 110, no. 13, April 6, pp. 4543–54,
Berger LI 1997, Semiconductor Materials, CRC Press, Boca Raton, Florida,
Bettelheim F, Brown WH, Campbell MK & Farrell SO 2010, Introduction to General, Organic, and Biochemistry, 9th ed., Brooks/Cole, Belmont CA,
Bianco E, Butler S, Jiang S, Restrepo OD, Windl W & Goldberger JE 2013, 'Stability and Exfoliation of Germanane: A Germanium Graphane Analogue,' ACS Nano, March 19 (web),
Bodner GM & Pardue HL 1993, Chemistry, An Experimental Science, John Wiley & Sons, New York,
Bogoroditskii NP & Pasynkov VV 1967, Radio and Electronic Materials, Iliffe Books, London
Bomgardner MM 2013, 'Thin-Film Solar Firms Revamp To Stay In The Game', Chemical & Engineering News, vol. 91, no. 20, pp. 20–21,
Bond GC 2005, Metal-Catalysed Reactions of Hydrocarbons, Springer, New York,
Booth VH & Bloom ML 1972, Physical Science: A Study of Matter and Energy, Macmillan, New York
Borst KE 1982, 'Characteristic Properties of Metallic Crystals', Journal of Educational Modules for Materials Science and Engineering, vol. 4, no. 3, pp. 457–92,
Boyer RD, Li J, Ogata S & Yip S 2004, 'Analysis of Shear Deformations in Al and Cu: Empirical Potentials Versus Density Functional Theory', Modelling and Simulation in Materials Science and Engineering, vol. 12, no. 5, pp. 1017–29,
Bradbury GM, McGill MV, Smith HR & Baker PS 1957, Chemistry and You, Lyons and Carnahan, Chicago
Bradley D 2014, 'Resistance is Low: New Quantum Effect', spectroscopyNOW, viewed 15 December 2014
Brescia F, Arents J, Meislich H & Turk A 1980, Fundamentals of Chemistry, 4th ed., Academic Press, New York,
Brown L & Holme T 2006, Chemistry for Engineering Students, Thomson Brooks/Cole, Belmont California,
Brown WP c. 2007 'The Properties of Semi-Metals or Metalloids,' Doc Brown's Chemistry: Introduction to the Periodic Table, viewed 8 February 2013
Brown TL, LeMay HE, Bursten BE, Murphy CJ, Woodward P 2009, Chemistry: The Central Science, 11th ed., Pearson Education, Upper Saddle River, New Jersey,
Brownlee RB, Fuller RW, Hancock WJ, Sohon MD & Whitsit JE 1943, Elements of Chemistry, Allyn and Bacon, Boston
Brownlee RB, Fuller RT, Whitsit JE Hancock WJ & Sohon MD 1950, Elements of Chemistry, Allyn and Bacon, Boston
Bucat RB (ed.) 1983, Elements of Chemistry: Earth, Air, Fire & Water, vol. 1, Australian Academy of Science, Canberra,
Büchel KH (ed.) 1983, Chemistry of Pesticides, John Wiley & Sons, New York,
Büchel KH, Moretto H-H, Woditsch P 2003, Industrial Inorganic Chemistry, 2nd ed., Wiley-VCH,
Burkhart CN, Burkhart CG & Morrell DS 2011, 'Treatment of Tinea Versicolor', in HI Maibach & F Gorouhi (eds), Evidence Based Dermatology, 2nd ed., People's Medical Publishing House, Shelton, CT, pp. 365–72,
Burrows A, Holman J, Parsons A, Pilling G & Price G 2009, Chemistry3: Introducing Inorganic, Organic and Physical Chemistry, Oxford University, Oxford,
Butterman WC & Carlin JF 2004, Mineral Commodity Profiles: Antimony, US Geological Survey
Butterman WC & Jorgenson JD 2005, Mineral Commodity Profiles: Germanium, US Geological Survey
Calderazzo F, Ercoli R & Natta G 1968, 'Metal Carbonyls: Preparation, Structure, and Properties', in I Wender & P Pino (eds), Organic Syntheses via Metal Carbonyls: Volume 1, Interscience Publishers, New York, pp. 1–272
Carapella SC 1968a, 'Arsenic' in CA Hampel (ed.), The Encyclopedia of the Chemical Elements, Reinhold, New York, pp. 29–32
Carapella SC 1968, 'Antimony' in CA Hampel (ed.), The Encyclopedia of the Chemical Elements, Reinhold, New York, pp. 22–25
Carlin JF 2011, Minerals Year Book: Antimony, United States Geological Survey
Carmalt CJ & Norman NC 1998, 'Arsenic, Antimony and Bismuth: Some General Properties and Aspects of Periodicity', in NC Norman (ed.), Chemistry of Arsenic, Antimony and Bismuth, Blackie Academic & Professional, London, pp. 1–38,
Carter CB & Norton MG 2013, Ceramic Materials: Science and Engineering, 2nd ed., Springer Science+Business Media, New York,
Cegielski C 1998, Yearbook of Science and the Future, Encyclopædia Britannica, Chicago,
Chalmers B 1959, Physical Metallurgy, John Wiley & Sons, New York
Champion J, Alliot C, Renault E, Mokili BM, Chérel M, Galland N & Montavon G 2010, 'Astatine Standard Redox Potentials and Speciation in Acidic Medium', The Journal of Physical Chemistry A, vol. 114, no. 1, pp. 576–82,
Chang R 2002, Chemistry, 7th ed., McGraw Hill, Boston,
Chao MS & Stenger VA 1964, 'Some Physical Properties of Highly Purified Bromine', Talanta, vol. 11, no. 2, pp. 271–81,
Charlier J-C, Gonze X & Michenaud J-P 1994, 'First-principles Study of the Stacking Effect on the Electronic Properties of Graphite(s)', Carbon, vol. 32, no. 2, pp. 289–99,
Chatt J 1951, 'Metal and Metalloid Compounds of the Alkyl Radicals', in EH Rodd (ed.), Chemistry of Carbon Compounds: A Modern Comprehensive Treatise, vol. 1, part A, Elsevier, Amsterdam, pp. 417–58
Chedd G 1969, Half-Way Elements: The Technology of Metalloids, Doubleday, New York
Chizhikov DM & Shchastlivyi VP 1968, Selenium and Selenides, translated from the Russian by EM Elkin, Collet's, London
Chizhikov DM & Shchastlivyi VP 1970, Tellurium and the Tellurides, Collet's, London
Choppin GR & Johnsen RH 1972, Introductory Chemistry, Addison-Wesley, Reading, Massachusetts
Chopra IS, Chaudhuri S, Veyan JF & Chabal YJ 2011, 'Turning Aluminium into a Noble-metal-like Catalyst for Low-temperature Activation of Molecular Hydrogen', Nature Materials, vol. 10, pp. 884–89,
Chung DDL 2010, Composite Materials: Science and Applications, 2nd ed., Springer-Verlag, London,
Clark GL 1960, The Encyclopedia of Chemistry, Reinhold, New York
Cobb C & Fetterolf ML 2005, The Joy of Chemistry, Prometheus Books, New York,
Cohen ML & Chelikowsky JR 1988, Electronic Structure and Optical Properties of Semiconductors, Springer Verlag, Berlin,
Coles BR & Caplin AD 1976, The Electronic Structures of Solids, Edward Arnold, London,
Conkling JA & Mocella C 2011, Chemistry of Pyrotechnics: Basic Principles and Theory, 2nd ed., CRC Press, Boca Raton, FL,
Considine DM & Considine GD (eds) 1984, 'Metalloid', in Van Nostrand Reinhold Encyclopedia of Chemistry, 4th ed., Van Nostrand Reinhold, New York,
Cooper DG 1968, The Periodic Table, 4th ed., Butterworths, London
Corbridge DEC 2013, Phosphorus: Chemistry, Biochemistry and Technology, 6th ed., CRC Press, Boca Raton, Florida,
Corwin CH 2005, Introductory Chemistry: Concepts & Connections, 4th ed., Prentice Hall, Upper Saddle River, New Jersey,
Cotton FA, Wilkinson G & Gaus P 1995, Basic Inorganic Chemistry, 3rd ed., John Wiley & Sons, New York,
Cotton FA, Wilkinson G, Murillo CA & Bochmann M 1999, Advanced Inorganic Chemistry, 6th ed., John Wiley & Sons, New York,
Cox PA 1997, The Elements: Their Origin, Abundance and Distribution, Oxford University, Oxford,
Cox PA 2004, Inorganic Chemistry, 2nd ed., Instant Notes series, Bios Scientific, London,
Craig PJ, Eng G & Jenkins RO 2003, 'Occurrence and Pathways of Organometallic Compounds in the Environment – General Considerations' in PJ Craig (ed.), Organometallic Compounds in the Environment, 2nd ed., John Wiley & Sons, Chichester, West Sussex, pp. 1–56,
Craig PJ & Maher WA 2003, 'Organoselenium compounds in the environment', in Organometallic Compounds in the Environment, PJ Craig (ed.), John Wiley & Sons, New York, pp. 391–98,
Crow JM 2011, 'Boron Carbide Could Light Way to Less-toxic Green Pyrotechnics', Nature News, 8 April,
Cusack N 1967, The Electrical and Magnetic Properties of Solids: An Introductory Textbook, 5th ed., John Wiley & Sons, New York
Cusack N E 1987, The Physics of Structurally Disordered Matter: An Introduction, A Hilger in association with the University of Sussex Press, Bristol,
Daintith J (ed.) 2004, Oxford Dictionary of Chemistry, 5th ed., Oxford University, Oxford,
Daintith J (ed.) 2008, Oxford Dictionary of Chemistry, Oxford University Press, Oxford,
Daniel-Hoffmann M, Sredni B & Nitzan Y 2012, 'Bactericidal Activity of the Organo-Tellurium Compound AS101 Against Enterobacter Cloacae', Journal of Antimicrobial Chemotherapy, vol. 67, no. 9, pp. 2165–72,
Daub GW & Seese WS 1996, Basic Chemistry, 7th ed., Prentice Hall, New York,
Davidson DF & Lakin HW 1973, 'Tellurium', in DA Brobst & WP Pratt (eds), United States Mineral Resources, Geological survey professional paper 820, United States Government Printing Office, Washington, pp. 627–30
Dávila ME, Molotov SL, Laubschat C & Asensio MC 2002, 'Structural Determination of Yb Single-Crystal Films Grown on W(110) Using Photoelectron Diffraction', Physical Review B, vol. 66, no. 3, p. 035411–18,
Demetriou MD, Launey ME, Garrett G, Schramm JP, Hofmann DC, Johnson WL & Ritchie RO 2011, 'A Damage-Tolerant Glass', Nature Materials, vol. 10, February, pp. 123–28,
Deming HG 1925, General Chemistry: An Elementary Survey, 2nd ed., John Wiley & Sons, New York
Denniston KJ, Topping JJ & Caret RL 2004, General, Organic, and Biochemistry, 5th ed., McGraw-Hill, New York,
Deprez N & McLachlan DS 1988, 'The Analysis of the Electrical Conductivity of Graphite Powders During Compaction', Journal of Physics D: Applied Physics, vol. 21, no. 1,
Desai PD, James HM & Ho CY 1984, 'Electrical Resistivity of Aluminum and Manganese', Journal of Physical and Chemical Reference Data, vol. 13, no. 4, pp. 1131–72,
Desch CH 1914, Intermetallic Compounds, Longmans, Green and Co., New York
Detty MR & O'Regan MB 1994, Tellurium-Containing Heterocycles, (The Chemistry of Heterocyclic Compounds, vol. 53), John Wiley & Sons, New York
Dev N 2008, 'Modelling Selenium Fate and Transport in Great Salt Lake Wetlands', PhD dissertation, University of Utah, ProQuest, Ann Arbor, Michigan,
De Zuane J 1997, Handbook of Drinking Water Quality, 2nd ed., John Wiley & Sons, New York,
Di Pietro P 2014, Optical Properties of Bismuth-Based Topological Insulators, Springer International Publishing, Cham, Switzerland,
Divakar C, Mohan M & Singh AK 1984, 'The Kinetics of Pressure-Induced Fcc-Bcc Transformation in Ytterbium', Journal of Applied Physics, vol. 56, no. 8, pp. 2337–40,
Donohue J 1982, The Structures of the Elements, Robert E. Krieger, Malabar, Florida,
Douglade J & Mercier R 1982, 'Structure Cristalline et Covalence des Liaisons dans le Sulfate d'Arsenic(III), As2(SO4)3', Acta Crystallographica Section B, vol. 38, no. 3, pp. 720–23,
Du Y, Ouyang C, Shi S & Lei M 2010, 'Ab Initio Studies on Atomic and Electronic Structures of Black Phosphorus', Journal of Applied Physics, vol. 107, no. 9, pp. 093718–1–4,
Dunlap BD, Brodsky MB, Shenoy GK & Kalvius GM 1970, 'Hyperfine Interactions and Anisotropic Lattice Vibrations of 237Np in α-Np Metal', Physical Review B, vol. 1, no. 1, pp. 44–49,
Dunstan S 1968, Principles of Chemistry, D. Van Nostrand Company, London
Dupree R, Kirby DJ & Freyland W 1982, 'N.M.R. Study of Changes in Bonding and the Metal-Non-metal Transition in Liquid Caesium-Antimony Alloys', Philosophical Magazine Part B, vol. 46 no. 6, pp. 595–606,
Eagleson M 1994, Concise Encyclopedia Chemistry, Walter de Gruyter, Berlin,
Eason R 2007, Pulsed Laser Deposition of Thin Films: Applications-Led Growth of Functional Materials, Wiley-Interscience, New York
Ebbing DD & Gammon SD 2010, General Chemistry, 9th ed. enhanced, Brooks/Cole, Belmont, California,
Eberle SH 1985, 'Chemical Behavior and Compounds of Astatine', pp. 183–209, in Kugler & Keller
Edwards PP & Sienko MJ 1983, 'On the Occurrence of Metallic Character in the Periodic Table of the Elements', Journal of Chemical Education, vol. 60, no. 9, pp. 691–96,
Edwards PP 1999, 'Chemically Engineering the Metallic, Insulating and Superconducting State of Matter' in KR Seddon & M Zaworotko (eds), Crystal Engineering: The Design and Application of Functional Solids, Kluwer Academic, Dordrecht, pp. 409–31,
Edwards PP 2000, 'What, Why and When is a metal?', in N Hall (ed.), The New Chemistry, Cambridge University, Cambridge, pp. 85–114,
Edwards PP, Lodge MTJ, Hensel F & Redmer R 2010, '... A Metal Conducts and a Non-metal Doesn't', Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 368, pp. 941–65,
Eggins BR 1972, Chemical Structure and Reactivity, MacMillan, London,
Eichler R, Aksenov NV, Belozerov AV, Bozhikov GA, Chepigin VI, Dmitriev SN, Dressler R, Gäggeler HW, Gorshkov VA, Haenssler F, Itkis MG, Laube A, Lebedev VY, Malyshev ON, Oganessian YT, Petrushkin OV, Piguet D, Rasmussen P, Shishkin SV, Shutov, AV, Svirikhin AI, Tereshatov EE, Vostokin GK, Wegrzecki M & Yeremin AV 2007, 'Chemical Characterization of Element 112,' Nature, vol. 447, pp. 72–75,
Ellern H 1968, Military and Civilian Pyrotechnics, Chemical Publishing Company, New York
Emeléus HJ & Sharpe AG 1959, Advances in Inorganic Chemistry and Radiochemistry, vol. 1, Academic Press, New York
Emsley J 1971, The Inorganic Chemistry of the Non-metals, Methuen Educational, London,
Emsley J 2001, Nature's Building Blocks: An A–Z guide to the Elements, Oxford University Press, Oxford,
Eranna G 2011, Metal Oxide Nanostructures as Gas Sensing Devices, Taylor & Francis, Boca Raton, Florida,
Evans KA 1993, 'Properties and Uses of Oxides and Hydroxides,' in AJ Downs (ed.), Chemistry of Aluminium, Gallium, Indium, and Thallium, Blackie Academic & Professional, Bishopbriggs, Glasgow, pp. 248–91,
Evans RC 1966, An Introduction to Crystal Chemistry, Cambridge University, Cambridge
Everest DA 1953, 'The Chemistry of Bivalent Germanium Compounds. Part IV. Formation of Germanous Salts by Reduction with Hydrophosphorous Acid.' Journal of the Chemical Society, pp. 4117–20,
EVM (Expert Group on Vitamins and Minerals) 2003, Safe Upper Levels for Vitamins and Minerals, UK Food Standards Agency, London,
Farandos NM, Yetisen AK, Monteiro MJ, Lowe CR & Yun SH 2014, 'Contact Lens Sensors in Ocular Diagnostics', Advanced Healthcare Materials, , viewed 23 November 2014
Fehlner TP 1992, 'Introduction', in TP Fehlner (ed.), Inorganometallic chemistry, Plenum, New York, pp. 1–6,
Fehlner TP 1990, 'The Metallic Face of Boron,' in AG Sykes (ed.), Advances in Inorganic Chemistry, vol. 35, Academic Press, Orlando, pp. 199–233
Feng & Jin 2005, Introduction to Condensed Matter Physics: Volume 1, World Scientific, Singapore,
Fernelius WC 1982, 'Polonium', Journal of Chemical Education, vol. 59, no. 9, pp. 741–42,
Ferro R & Saccone A 2008, Intermetallic Chemistry, Elsevier, Oxford, p. 233,
Fesquet AA 1872, A Practical Guide for the Manufacture of Metallic Alloys, trans. A. Guettier, Henry Carey Baird, Philadelphia
Fine LW & Beall H 1990, Chemistry for Engineers and Scientists, Saunders College Publishing, Philadelphia,
Fokwa BPT 2014, 'Borides: Solid-state Chemistry', in Encyclopedia of Inorganic and Bioinorganic Chemistry, John Wiley and Sons,
Foster W 1936, The Romance of Chemistry, D Appleton-Century, New York
Foster LS & Wrigley AN 1958, 'Periodic Table', in GL Clark, GG Hawley & WA Hamor (eds), The Encyclopedia of Chemistry (Supplement), Reinhold, New York, pp. 215–20
Friend JN 1953, Man and the Chemical Elements, 1st ed., Charles Scribner's Sons, New York
Fritz JS & Gjerde DT 2008, Ion Chromatography, John Wiley & Sons, New York,
Gary S 2013, 'Poisoned Alloy' the Metal of the Future', News in science, viewed 28 August 2013
Geckeler S 1987, Optical Fiber Transmission Systems, Artech House, Norwood, Massachusetts,
German Energy Society 2008, Planning and Installing Photovoltaic Systems: A Guide for Installers, Architects and Engineers, 2nd ed., Earthscan, London,
Gordh G, Gordh G & Headrick D 2003, A Dictionary of Entomology, CABI Publishing, Wallingford,
Gillespie RJ 1998, 'Covalent and Ionic Molecules: Why are BeF2 and AlF3 High Melting Point Solids Whereas BF3 and SiF4 are Gases?', Journal of Chemical Education, vol. 75, no. 7, pp. 923–25,
Gillespie RJ & Robinson EA 1963, 'The Sulphuric Acid Solvent System. Part IV. Sulphato Compounds of Arsenic (III)', Canadian Journal of Chemistry, vol. 41, no. 2, pp. 450–58
Gillespie RJ & Passmore J 1972, 'Polyatomic Cations', Chemistry in Britain, vol. 8, pp. 475–79
Gladyshev VP & Kovaleva SV 1998, 'Liquidus Shape of the Mercury–Gallium System', Russian Journal of Inorganic Chemistry, vol. 43, no. 9, pp. 1445–46
Glazov VM, Chizhevskaya SN & Glagoleva NN 1969, Liquid Semiconductors, Plenum, New York
Glinka N 1965, General Chemistry, trans. D Sobolev, Gordon & Breach, New York
Glockling F 1969, The Chemistry of Germanium, Academic, London
Glorieux B, Saboungi ML & Enderby JE 2001, 'Electronic Conduction in Liquid Boron', Europhysics Letters (EPL), vol. 56, no. 1, pp. 81–85,
Goldsmith RH 1982, 'Metalloids', Journal of Chemical Education, vol. 59, no. 6, pp. 526–27,
Good JM, Gregory O & Bosworth N 1813, 'Arsenicum', in Pantologia: A New Cyclopedia ... of Essays, Treatises, and Systems ... with a General Dictionary of Arts, Sciences, and Words ... , Kearsely, London
Goodrich BG 1844, A Glance at the Physical Sciences, Bradbury, Soden & Co., Boston
Gray T 2009, The Elements: A Visual Exploration of Every Known Atom in the Universe, Black Dog & Leventhal, New York,
Gray T 2010, 'Metalloids (7)', viewed 8 February 2013
Gray T, Whitby M & Mann N 2011, Mohs Hardness of the Elements, viewed 12 Feb 2012
Greaves GN, Knights JC & Davis EA 1974, 'Electronic Properties of Amorphous Arsenic', in J Stuke & W Brenig (eds), Amorphous and Liquid Semiconductors: Proceedings, vol. 1, Taylor & Francis, London, pp. 369–74,
Greenwood NN 2001, 'Main Group Element Chemistry at the Millennium', Journal of the Chemical Society, Dalton Transactions, issue 14, pp. 2055–66,
Greenwood NN & Earnshaw A 2002, Chemistry of the Elements, 2nd ed., Butterworth-Heinemann,
Guan PF, Fujita T, Hirata A, Liu YH & Chen MW 2012, 'Structural Origins of the Excellent Glass-forming Ability of Pd40Ni40P20', Physical Review Letters, vol. 108, no. 17, pp. 175501–1–5,
Gunn G (ed.) 2014, Critical Metals Handbook, John Wiley & Sons, Chichester, West Sussex,
Gupta VB, Mukherjee AK & Cameotra SS 1997, 'Poly(ethylene Terephthalate) Fibres', in MN Gupta & VK Kothari (eds), Manufactured Fibre Technology, Springer Science+Business Media, Dordrecht, pp. 271–317,
Haaland A, Helgaker TU, Ruud K & Shorokhov DJ 2000, 'Should Gaseous BF3 and SiF4 be Described as Ionic Compounds?', Journal of Chemical Education, vol. 77, no.8, pp. 1076–80,
Hager T 2006, The Demon under the Microscope, Three Rivers Press, New York,
Hai H, Jun H, Yong-Mei L, He-Yong H, Yong C & Kang-Nian F 2012, 'Graphite Oxide as an Efficient and Durable Metal-free Catalyst for Aerobic Oxidative Coupling of Amines to Imines', Green Chemistry, vol. 14, pp. 930–34,
Haiduc I & Zuckerman JJ 1985, Basic Organometallic Chemistry, Walter de Gruyter, Berlin,
Haissinsky M & Coche A 1949, 'New Experiments on the Cathodic Deposition of Radio-elements', Journal of the Chemical Society, pp. S397–400
Manson SS & Halford GR 2006, Fatigue and Durability of Structural Materials, ASM International, Materials Park, OH,
Haller EE 2006, 'Germanium: From its Discovery to SiGe Devices', Materials Science in Semiconductor Processing, vol. 9, nos 4–5, , viewed 8 February 2013
Hamm DI 1969, Fundamental Concepts of Chemistry, Meredith Corporation, New York,
Hampel CA & Hawley GG 1966, The Encyclopedia of Chemistry, 3rd ed., Van Nostrand Reinhold, New York
Hampel CA (ed.) 1968, The Encyclopedia of the Chemical Elements, Reinhold, New York
Hampel CA & Hawley GG 1976, Glossary of Chemical Terms, Van Nostrand Reinhold, New York,
Harding C, Johnson DA & Janes R 2002, Elements of the p Block, Royal Society of Chemistry, Cambridge,
Hasan H 2009, The Boron Elements: Boron, Aluminum, Gallium, Indium, Thallium, The Rosen Publishing Group, New York,
Hatcher WH 1949, An Introduction to Chemical Science, John Wiley & Sons, New York
Hawkes SJ 1999, 'Polonium and Astatine are not Semimetals', Chem 13 News, February, p. 14,
Hawkes SJ 2001, 'Semimetallicity', Journal of Chemical Education, vol. 78, no. 12, pp. 1686–87,
Hawkes SJ 2010, 'Polonium and Astatine are not Semimetals', Journal of Chemical Education, vol. 87, no. 8, p. 783,
Haynes WM (ed.) 2012, CRC Handbook of Chemistry and Physics, 93rd ed., CRC Press, Boca Raton, Florida,
He M, Kravchyk K, Walter M & Kovalenko MV 2014, 'Monodisperse Antimony Nanocrystals for High-Rate Li-ion and Na-ion Battery Anodes: Nano versus Bulk', Nano Letters, vol. 14, no. 3, pp. 1255–62,
Henderson M 2000, Main Group Chemistry, The Royal Society of Chemistry, Cambridge,
Hermann A, Hoffmann R & Ashcroft NW 2013, 'Condensed Astatine: Monatomic and Metallic', Physical Review Letters, vol. 111, pp. 116404-1–116404-5,
Hérold A 2006, 'An Arrangement of the Chemical Elements in Several Classes Inside the Periodic Table According to their Common Properties', Comptes Rendus Chimie, vol. 9, no. 1, pp. 148–53,
Herzfeld K 1927, 'On Atomic Properties Which Make an Element a Metal', Physical Review, vol. 29, no. 5, pp. 701–05,
Hill G & Holman J 2000, Chemistry in Context, 5th ed., Nelson Thornes, Cheltenham,
Hiller LA & Herber RH 1960, Principles of Chemistry, McGraw-Hill, New York
Hindman JC 1968, 'Neptunium', in CA Hampel (ed.), The Encyclopedia of the Chemical Elements, Reinhold, New York, pp. 432–37
Hoddeson L 2007, 'In the Wake of Thomas Kuhn's Theory of Scientific Revolutions: The Perspective of an Historian of Science,' in S Vosniadou, A Baltas & X Vamvakoussi (eds), Reframing the Conceptual Change Approach in Learning and Instruction, Elsevier, Amsterdam, pp. 25–34,
Holderness A & Berry M 1979, Advanced Level Inorganic Chemistry, 3rd ed., Heinemann Educational Books, London,
Holt, Rinehart & Wilson c. 2007 'Why Polonium and Astatine are not Metalloids in HRW texts', viewed 8 February 2013
Hopkins BS & Bailar JC 1956, General Chemistry for Colleges, 5th ed., D. C. Heath, Boston
Horvath 1973, 'Critical Temperature of Elements and the Periodic System', Journal of Chemical Education, vol. 50, no. 5, pp. 335–36,
Hosseini P, Wright CD & Bhaskaran H 2014, 'An optoelectronic framework enabled by low-dimensional phase-change films,' Nature, vol. 511, pp. 206–11,
Houghton RP 1979, Metal Complexes in Organic Chemistry, Cambridge University Press, Cambridge,
House JE 2008, Inorganic Chemistry, Academic Press (Elsevier), Burlington, Massachusetts,
House JE & House KA 2010, Descriptive Inorganic Chemistry, 2nd ed., Academic Press, Burlington, Massachusetts,
Housecroft CE & Sharpe AG 2008, Inorganic Chemistry, 3rd ed., Pearson Education, Harlow,
Hultgren HH 1966, 'Metalloids', in GL Clark & GG Hawley (eds), The Encyclopedia of Inorganic Chemistry, 2nd ed., Reinhold Publishing, New York
Hunt A 2000, The Complete A-Z Chemistry Handbook, 2nd ed., Hodder & Stoughton, London,
Inagaki M 2000, New Carbons: Control of Structure and Functions, Elsevier, Oxford,
IUPAC 1959, Nomenclature of Inorganic Chemistry, 1st ed., Butterworths, London
IUPAC 1971, Nomenclature of Inorganic Chemistry, 2nd ed., Butterworths, London,
IUPAC 2005, Nomenclature of Inorganic Chemistry (the "Red Book"), NG Connelly & T Damhus eds, RSC Publishing, Cambridge,
IUPAC 2006–, Compendium of Chemical Terminology (the "Gold Book"), 2nd ed., by M Nic, J Jirat & B Kosata, with updates compiled by A Jenkins,
James M, Stokes R, Ng W & Moloney J 2000, Chemical Connections 2: VCE Chemistry Units 3 & 4, John Wiley & Sons, Milton, Queensland,
Jaouen G & Gibaud S 2010, 'Arsenic-based Drugs: From Fowler's solution to Modern Anticancer Chemotherapy', Medicinal Organometallic Chemistry, vol. 32, pp. 1–20,
Jaskula BW 2013, Mineral Commodity Profiles: Gallium, US Geological Survey
Jenkins GM & Kawamura K 1976, Polymeric Carbons – Carbon Fibre, Glass and Char, Cambridge University Press, Cambridge,
Jezequel G & Thomas J 1997, 'Experimental Band Structure of Semimetal Bismuth', Physical Review B, vol. 56, no. 11, pp. 6620–26,
Johansen G & Mackintosh AR 1970, 'Electronic Structure and Phase Transitions in Ytterbium', Solid State Communications, vol. 8, no. 2, pp. 121–24
Jolly WL & Latimer WM 1951, 'The Heat of Oxidation of Germanous Iodide and the Germanium Oxidation Potentials', University of California Radiation Laboratory, Berkeley
Jolly WL 1966, The Chemistry of the Non-metals, Prentice-Hall, Englewood Cliffs, New Jersey
Jones BW 2010, Pluto: Sentinel of the Outer Solar System, Cambridge University, Cambridge,
Kaminow IP & Li T 2002 (eds), Optical Fiber Telecommunications, Volume IVA, Academic Press, San Diego,
Karabulut M, Melnik E, Stefan R, Marasinghe GK, Ray CS, Kurkjian CR & Day DE 2001, 'Mechanical and Structural Properties of Phosphate Glasses', Journal of Non-Crystalline Solids, vol. 288, nos. 1–3, pp. 8–17,
Kauthale SS, Tekali SU, Rode AB, Shinde SV, Ameta KL & Pawar RP 2015, 'Silica Sulfuric Acid: A Simple and Powerful Heterogenous Catalyst in Organic Synthesis', in KL Ameta & A Penoni, Heterogeneous Catalysis: A Versatile Tool for the Synthesis of Bioactive Heterocycles, CRC Press, Boca Raton, Florida, pp. 133–62,
Kaye GWC & Laby TH 1973, Tables of Physical and Chemical Constants, 14th ed., Longman, London,
Keall JHH, Martin NH & Tunbridge RE 1946, 'A Report of Three Cases of Accidental Poisoning by Sodium Tellurite', British Journal of Industrial Medicine, vol. 3, no. 3, pp. 175–76
Keevil D 1989, 'Aluminium', in MN Patten (ed.), Information Sources in Metallic Materials, Bowker–Saur, London, pp. 103–19,
Keller C 1985, 'Preface', in Kugler & Keller
Kelter P, Mosher M & Scott A 2009, Chemistry: the Practical Science, Houghton Mifflin, Boston,
Kennedy T, Mullane E, Geaney H, Osiak M, O'Dwyer C & Ryan KM 2014, 'High-Performance Germanium Nanowire-Based Lithium-Ion Battery Anodes Extending over 1000 Cycles Through in Situ Formation of a Continuous Porous Network', Nano-letters, vol. 14, no. 2, pp. 716–23,
Kent W 1950, Kent's Mechanical Engineers' Handbook, 12th ed., vol. 1, John Wiley & Sons, New York
King EL 1979, Chemistry, Painter Hopkins, Sausalito, California,
King RB 1994, 'Antimony: Inorganic Chemistry', in RB King (ed), Encyclopedia of Inorganic Chemistry, John Wiley, Chichester, pp. 170–75,
King RB 2004, 'The Metallurgist's Periodic Table and the Zintl-Klemm Concept', in DH Rouvray & RB King (eds), The Periodic Table: Into the 21st Century, Research Studies Press, Baldock, Hertfordshire, pp. 191–206,
Kinjo R, Donnadieu B, Celik MA, Frenking G & Bertrand G 2011, 'Synthesis and Characterization of a Neutral Tricoordinate Organoboron Isoelectronic with Amines', Science, pp. 610–13,
Kitaĭgorodskiĭ AI 1961, Organic Chemical Crystallography, Consultants Bureau, New York
Kleinberg J, Argersinger WJ & Griswold E 1960, Inorganic Chemistry, DC Health, Boston
Klement W, Willens RH & Duwez P 1960, 'Non-Crystalline Structure in Solidified Gold–Silicon Alloys', Nature, vol. 187, pp. 869–70,
Klemm W 1950, 'Einige Probleme aus der Physik und der Chemie der Halbmetalle und der Metametalle', Angewandte Chemie, vol. 62, no. 6, pp. 133–42
Klug HP & Brasted RC 1958, Comprehensive Inorganic Chemistry: The Elements and Compounds of Group IV A, Van Nostrand, New York
Kneen WR, Rogers MJW & Simpson P 1972, Chemistry: Facts, Patterns, and Principles, Addison-Wesley, London,
Kohl AL & Nielsen R 1997, Gas Purification, 5th ed., Gulf Valley Publishing, Houston, Texas,
Kolobov AV & Tominaga J 2012, Chalcogenides: Metastability and Phase Change Phenomena, Springer-Verlag, Heidelberg,
Kolthoff IM & Elving PJ 1978, Treatise on Analytical Chemistry. Analytical Chemistry of Inorganic and Organic Compounds: Antimony, Arsenic, Boron, Carbon, Molybdenum, Tungsten, Wiley Interscience, New York,
Kondrat'ev SN & Mel'nikova SI 1978, 'Preparation and Various Characteristics of Boron Hydrogen Sulfates', Russian Journal of Inorganic Chemistry, vol. 23, no. 6, pp. 805–07
Kopp JG, Lipták BG & Eren H 2000, 'Magnetic Flowmeters', in BG Lipták (ed.), Instrument Engineers' Handbook, 4th ed., vol. 1, Process Measurement and Analysis, CRC Press, Boca Raton, Florida, pp. 208–24,
Korenman IM 1959, 'Regularities in Properties of Thallium', Journal of General Chemistry of the USSR, English translation, Consultants Bureau, New York, vol. 29, no. 2, pp. 1366–90,
Kosanke KL, Kosanke BJ & Dujay RC 2002, 'Pyrotechnic Particle Morphologies—Metal Fuels', in Selected Pyrotechnic Publications of K.L. and B.J. Kosanke Part 5 (1998 through 2000), Journal of Pyrotechnics, Whitewater, CO,
Kotz JC, Treichel P & Weaver GC 2009, Chemistry and Chemical Reactivity, 7th ed., Brooks/Cole, Belmont, California,
Kozyrev PT 1959, 'Deoxidized Selenium and the Dependence of its Electrical Conductivity on Pressure. II', Physics of the Solid State, translation of the journal Solid State Physics (Fizika tverdogo tela) of the Academy of Sciences of the USSR, vol. 1, pp. 102–10
Kraig RE, Roundy D & Cohen ML 2004, 'A Study of the Mechanical and Structural Properties of Polonium', Solid State Communications, vol. 129, issue 6, Feb, pp. 411–13,
Krannich LK & Watkins CL 2006, 'Arsenic: Organoarsenic chemistry,' Encyclopedia of inorganic chemistry, viewed 12 Feb 2012
Kreith F & Goswami DY (eds) 2005, The CRC Handbook of Mechanical Engineering, 2nd ed., Boca Raton, Florida,
Krishnan S, Ansell S, Felten J, Volin K & Price D 1998, 'Structure of Liquid Boron', Physical Review Letters, vol. 81, no. 3, pp. 586–89,
Kross B 2011, 'What's the melting point of steel?', Questions and Answers, Thomas Jefferson National Accelerator Facility, Newport News, VA
Kudryavtsev AA 1974, The Chemistry & Technology of Selenium and Tellurium, translated from the 2nd Russian edition and revised by EM Elkin, Collet's, London,
Kugler HK & Keller C (eds) 1985, Gmelin Handbook of Inorganic and Organometallic chemistry, 8th ed., 'At, Astatine', system no. 8a, Springer-Verlag, Berlin,
Ladd M 1999, Crystal Structures: Lattices and Solids in Stereoview, Horwood Publishing, Chichester,
Le Bras M, Wilkie CA & Bourbigot S (eds) 2005, Fire Retardancy of Polymers: New Applications of Mineral Fillers, Royal Society of Chemistry, Cambridge,
Lee J, Lee EK, Joo W, Jang Y, Kim B, Lim JY, Choi S, Ahn SJ, Ahn JR, Park M, Yang C, Choi BL, Hwang S & Whang D 2014, 'Wafer-Scale Growth of Single-Crystal Monolayer Graphene on Reusable Hydrogen-Terminated Germanium', Science, vol. 344, no. 6181, pp. 286–89,
Legit D, Friák M & Šob M 2010, 'Phase Stability, Elasticity, and Theoretical Strength of Polonium from First Principles,' Physical Review B, vol. 81, pp. 214118–1–19,
Lehto Y & Hou X 2011, Chemistry and Analysis of Radionuclides: Laboratory Techniques and Methodology, Wiley-VCH, Weinheim,
Lewis RJ 1993, Hawley's Condensed Chemical Dictionary, 12th ed., Van Nostrand Reinhold, New York,
Li XP 1990, 'Properties of Liquid Arsenic: A Theoretical Study', Physical Review B, vol. 41, no. 12, pp. 8392–406,
Lide DR (ed.) 2005, 'Section 14, Geophysics, Astronomy, and Acoustics; Abundance of Elements in the Earth's Crust and in the Sea', in CRC Handbook of Chemistry and Physics, 85th ed., CRC Press, Boca Raton, FL, pp. 14–17,
Lidin RA 1996, Inorganic Substances Handbook, Begell House, New York,
Lindsjö M, Fischer A & Kloo L 2004, 'Sb8(GaCl4)2: Isolation of a Homopolyatomic Antimony Cation', Angewandte Chemie, vol. 116, no. 19, pp. 2594–97,
Lipscomb CA 1972 Pyrotechnics in the '70's A Materials Approach, Naval Ammunition Depot, Research and Development Department, Crane, IN
Lister MW 1965, Oxyacids, Oldbourne Press, London
Liu ZK, Jiang J, Zhou B, Wang ZJ, Zhang Y, Weng HM, Prabhakaran D, Mo S-K, Peng H, Dudin P, Kim T, Hoesch M, Fang Z, Dai X, Shen ZX, Feng DL, Hussain Z & Chen YL 2014, 'A Stable Three-dimensional Topological Dirac Semimetal Cd3As2', Nature Materials, vol. 13, pp. 677–81,
Locke EG, Baechler RH, Beglinger E, Bruce HD, Drow JT, Johnson KG, Laughnan DG, Paul BH, Rietz RC, Saeman JF & Tarkow H 1956, 'Wood', in RE Kirk & DF Othmer (eds), Encyclopedia of Chemical Technology, vol. 15, The Interscience Encyclopedia, New York, pp. 72–102
Löffler JF, Kündig AA & Dalla Torre FH 2007, 'Rapid Solidification and Bulk Metallic Glasses—Processing and Properties,' in JR Groza, JF Shackelford, EJ Lavernia EJ & MT Powers (eds), Materials Processing Handbook, CRC Press, Boca Raton, Florida, pp. 17–1–44,
Long GG & Hentz FC 1986, Problem Exercises for General Chemistry, 3rd ed., John Wiley & Sons, New York,
Lovett DR 1977, Semimetals & Narrow-Bandgap Semi-conductors, Pion, London,
Lutz J, Schlangenotto H, Scheuermann U, De Doncker R 2011, Semiconductor Power Devices: Physics, Characteristics, Reliability, Springer-Verlag, Berlin,
Masters GM & Ela W 2008, Introduction to Environmental Engineering and Science, 3rd ed., Prentice Hall, Upper Saddle River, New Jersey,
MacKay KM, MacKay RA & Henderson W 2002, Introduction to Modern Inorganic Chemistry, 6th ed., Nelson Thornes, Cheltenham,
MacKenzie D 2015, 'Gas! Gas! Gas!', New Scientist, vol. 228, no. 3044, pp. 34–37
Madelung O 2004, Semiconductors: Data Handbook, 3rd ed., Springer-Verlag, Berlin,
Maeder T 2013, 'Review of Bi2O3 Based Glasses for Electronics and Related Applications', International Materials Reviews, vol. 58, no. 1, pp. 3–40,
Mahan BH 1965, University Chemistry, Addison-Wesley, Reading, Massachusetts
Mainiero C 2014, 'Picatinny chemist wins Young Scientist Award for work on smoke grenades', U.S. Army, Picatinny Public Affairs, 2 April, viewed 9 June 2017
Manahan SE 2001, Fundamentals of Environmental Chemistry, 2nd ed., CRC Press, Boca Raton, Florida,
Mann JB, Meek TL & Allen LC 2000, 'Configuration Energies of the Main Group Elements', Journal of the American Chemical Society, vol. 122, no. 12, pp. 2780–83,
Marezio M & Licci F 2000, 'Strategies for Tailoring New Superconducting Systems', in X Obradors, F Sandiumenge & J Fontcuberta (eds), Applied Superconductivity 1999: Large scale applications, volume 1 of Applied Superconductivity 1999: Proceedings of EUCAS 1999, the Fourth European Conference on Applied Superconductivity, held in Sitges, Spain, 14–17 September 1999, Institute of Physics, Bristol, pp. 11–16,
Marković N, Christiansen C & Goldman AM 1998, 'Thickness-Magnetic Field Phase Diagram at the Superconductor-Insulator Transition in 2D', Physical Review Letters, vol. 81, no. 23, pp. 5217–20,
Massey AG 2000, Main Group Chemistry, 2nd ed., John Wiley & Sons, Chichester,
Masterton WL & Slowinski EJ 1977, Chemical Principles, 4th ed., W. B. Saunders, Philadelphia,
Matula RA 1979, 'Electrical Resistivity of Copper, Gold, Palladium, and Silver,' Journal of Physical and Chemical Reference Data, vol. 8, no. 4, pp. 1147–298,
McKee DW 1984, 'Tellurium – An Unusual Carbon Oxidation Catalyst', Carbon, vol. 22, no. 6, pp. 513–16
McMurray J & Fay RC 2009, General Chemistry: Atoms First, Prentice Hall, Upper Saddle River, New Jersey,
Further reading
Brady JE, Humiston GE & Heikkinen H (1980), "Chemistry of the Representative Elements: Part II, The Metalloids and Nonmetals", in General Chemistry: Principles and Structure, 2nd ed., SI version, John Wiley & Sons, New York, pp. 537–91,
Chedd G (1969), Half-way Elements: The Technology of Metalloids, Doubleday, New York
Choppin GR & Johnsen RH (1972), "Group IV and the Metalloids", in Introductory Chemistry, Addison-Wesley, Reading, Massachusetts, pp. 341–57
Dunstan S (1968), "The Metalloids", in Principles of Chemistry, D. Van Nostrand Company, London, pp. 407–39
Goldsmith RH (1982), "Metalloids", Journal of Chemical Education, vol. 59, no. 6, pp. 526–27,
Hawkes SJ (2001), "Semimetallicity", Journal of Chemical Education, vol. 78, no. 12, pp. 1686–87,
Metcalfe HC, Williams JE & Castka JF (1974), "Aluminum and the Metalloids", in Modern Chemistry, Holt, Rinehart and Winston, New York, pp. 538–57,
Miller JS (2019), "Viewpoint: Metalloids – An Electronic Band Structure Perspective", Chemistry – A European Journal, preprint version,
Moeller T, Bailar JC, Kleinberg J, Guss CO, Castellion ME & Metz C (1989), "Carbon and the Semiconducting Elements", in Chemistry, with Inorganic Qualitative Analysis, 3rd ed., Harcourt Brace Jovanovich, San Diego, pp. 742–75,
Parveen N et al. (2020), "Metalloids in plants: A systematic discussion beyond description", Annals of Applied Biology,
Rieske M (1998), "Metalloids", in Encyclopedia of Earth and Physical Sciences, Marshall Cavendish, New York, vol. 6, pp. 758–59, (set)
Rochow EG (1966), The Metalloids, DC Heath and Company, Boston
Vernon RE (2013), "Which Elements are Metalloids?", Journal of Chemical Education, vol. 90, no. 12, pp. 1703–07,
—— (2020), "Organising the Metals and Nonmetals", Foundations of Chemistry, (open access)
Chemical physics
Condensed matter physics
Periodic table | Metalloid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 32,215 | [
"Periodic table",
"Matter",
"Metals",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Chemical physics"
] |
85,620 | https://en.wikipedia.org/wiki/Generation%20ship | A generation ship, generation starship, or world ship is a hypothetical type of interstellar ark starship that travels at sub-light speed. Since such a ship might require hundreds to thousands of years to reach nearby stars, the original occupants of a generation ship would grow old and die, leaving their descendants to continue traveling.
Origins
Rocket pioneer Robert H. Goddard was the first to write about long-duration interstellar journeys in his "The Ultimate Migration" (1918). In this he described the death of the Sun and the necessity of an "interstellar ark". The crew would travel for centuries in suspended animation and be awakened when they reached another star system. He proposed to use small moons or asteroids as ships, and speculated that the crew would endure psychological and genetic changes over the generations.
Konstantin Tsiolkovsky, considered a father of astronautic theory, first described the need for multiple generations of passengers in his essay "The Future of Earth and Mankind" (1928), outlining a space colony, equipped with engines, that travels for thousands of years, which he called "Noah's Ark". In the story, the crew had changed so much over the generations, and at so many levels, that they did not even acknowledge Earth as their home planet.
Another early description of a generation ship is in the 1929 essay "The World, The Flesh, & The Devil" by John Desmond Bernal. Bernal's essay was the first publication on the concept to reach the public and influence other writers. He wrote about human evolution and mankind's future in space, describing ways of living that we would now recognize as a generation starship, which he referred to generically as "globes".
Definition
According to Hein et al., a "generation ship" is a spacecraft on which a crew lives for at least several decades, such that it comprises multiple generations. Several sub-categories of generation ships are distinguished: sprinter, slow boat, colony ship, and world ship.
The Enzmann starship is categorised as a "slow boat" after the title of a 1977 Astronomy Magazine article, "Slow Boat to Centauri". Gregory Matloff's concept is called a "colony ship", and Alan Bond called his concept a "world ship". These definitions are essentially based on the velocity of the ship and its population size.
Obstacles
Biosphere
Such a ship would have to be entirely self-sustaining, providing life support for everyone aboard. It must have extraordinarily reliable systems that could be maintained by the ship's inhabitants over long periods of time. This would require testing whether thousands of humans could survive on their own before sending them beyond the reach of help. Small artificial closed ecosystems, such as Biosphere 2, have been built in an attempt to examine the engineering challenges of such a system, with mixed results.
Biology and society
Generation ships would have to anticipate possible biological, social and morale problems, and would also need to deal with matters of self-worth and purpose for the various crews involved.
Estimates of the minimum reasonable population for a generation ship vary. Anthropologist John Moore has estimated that, without genetic testing of people before boarding the ship, social control and/or social engineering (such as requiring people to wait until their thirties to have children), or cryopreservation of eggs, sperm, or embryos (as is done in sperm banks), a minimum of 160 people boarding the ship would allow normal family life (with the average individual having ten potential marriage partners) throughout a 200-year space journey, with little loss of genetic diversity. If the people who board the ship are couples, presumably in their early twenties, and everybody who lives on the ship is required to wait until their mid to late thirties before having children, then the minimum would be just 80 people. However, many variables are not accounted for in this estimate, including the higher chance of health problems for both the pregnant woman and the fetus or baby because of the woman's age. In 2013, anthropologist Cameron Smith reviewed the existing literature and created a new computer model that put the minimum reasonable population in the tens of thousands. Smith's numbers were much larger than previous estimates such as Moore's, in part because Smith takes the risk of accidents and disease into consideration, and assumes at least one severe population catastrophe over the course of a 150-year journey.
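The scale of genetic-diversity loss in a small closed population can be illustrated with the standard population-genetics expression for the decay of expected heterozygosity; the generation time and effective population size below are illustrative assumptions, not values taken from Moore's or Smith's models:

H_t = H_0 \left(1 - \frac{1}{2N_e}\right)^t

With an effective population size of N_e = 80 and a 200-year voyage spanning roughly t = 8 generations (assuming about 25 years per generation), H_t / H_0 = (1 - 1/160)^8 ≈ 0.95, i.e. only about a 5% loss of heterozygosity, consistent with the "little loss of genetic diversity" cited above. The formula ignores accidents, disease and non-random mating, which is precisely the gap that Smith's larger estimates attempt to close.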
In light of the multiple generations that it could take to reach even our nearest neighboring star systems such as Proxima Centauri, further issues on the viability of such interstellar arks include:
the possibility of humans dramatically evolving in directions unacceptable to the sponsors
the minimum population required to maintain in isolation a culture acceptable to the sponsors; this could include such aspects as
ability to learn scientific and technical skills needed to maintain, operate and pilot the ship
ability to accomplish the purpose (planetary colonization, research, building new interstellar arks) contemplated
sharing the values of the sponsors, which may be difficult to demonstrate empirically as viable beyond the home planet, unless, once the ship is away from Earth and under way, the survival of one's offspring until the ship reaches the target star becomes a motivation in itself.
Size
For a spacecraft to maintain a stable environment for multiple generations, it would have to be large enough to support a community of humans and a fully recycling ecosystem. A spacecraft of such a size would require a great deal of energy to accelerate and decelerate. A smaller spacecraft, able to accelerate more easily and thus make higher cruise velocities more practical, would shorten the trip, reducing exposure to cosmic radiation and the time available for malfunctions to develop in the craft, but would face greater challenges with resource metabolic flows and ecological balance.
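The energy penalty of size can be made concrete with the non-relativistic kinetic-energy formula; the ship mass and cruise speed below are arbitrary illustrative assumptions rather than figures from any proposed design:

E = \frac{1}{2} m v^2

For a world-ship mass of m = 10^9 kg cruising at one percent of light speed (v ≈ 3 × 10^6 m/s), E ≈ 4.5 × 10^21 J is needed just to reach cruise velocity, and about as much again to decelerate at the destination, before any propulsion inefficiency is counted. For comparison, this is several times the current annual world primary energy consumption.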
Social breakdown
Generation ships traveling for long periods of time may see breakdowns in social structures. Changes in society (for example, mutiny) could occur over such periods and may prevent the ship from reaching its destination. Algis Budrys described this scenario in a 1966 book review.
Robert A. Heinlein's Orphans of the Sky (the "impeccable statement of this theme", Budrys said) and Brian Aldiss's Non-Stop (U.S. title: Starship) discussed such societies.
Cosmic rays
The radiation environment of deep space is very different from that on the Earth's surface, or in low Earth orbit, due to the much larger influx of high-energy galactic cosmic rays (GCRs). Like other ionizing radiation, high-energy cosmic rays can damage DNA and increase the risk of cancer, cataracts, and neurological disorders.
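A rough sense of the unshielded hazard follows from measured interplanetary dose rates; the arithmetic below assumes the GCR dose rate of roughly 1.8 mSv per day reported during the Mars Science Laboratory's cruise, a figure that varies with the solar cycle:

1.8 mSv/day × 365 days ≈ 0.66 Sv per year,

so a century-long voyage would accumulate on the order of tens of sieverts without shielding, far above the roughly 1 Sv career limits applied to astronauts. Substantial passive shielding mass would therefore be required, compounding the size and energy problems discussed above.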
Ethical considerations
The success of a generation ship depends on children born aboard taking over the necessary duties, as well as having children themselves. Even if their quality of life might be better than, for example, that of people born into poverty on Earth, philosophy professor Neil Levy has raised the question of whether it is ethical to severely constrain life choices of individuals by locking them into a project they did not choose. A moral quandary exists regarding how intermediate generations, those destined to be born and die in transit without actually seeing tangible results of their efforts, might feel about their forced existence on such a ship.
Project Hyperion
Project Hyperion, launched in December 2011 by Andreas M. Hein, set out to perform a preliminary study defining integrated concepts for a crewed interstellar generation ship. This was a two-year study based mainly in the WARR student group at the Technical University of Munich. The study aimed to assess the feasibility of crewed interstellar flight using current and near-future technologies, to guide future research and technology development plans, and to inform the public about crewed interstellar travel. Notable results of the project include an assessment of world ship system architectures and adequate population size. The core team members have transferred to the Initiative for Interstellar Studies' world ship project, and a survey paper on generation ships was presented at the ESA Interstellar Workshop in 2019 as well as in ESA's Acta Futura journal.
See also
Aniara
Autarky
Embryo space colonization
O'Neill cylinder
Self-replicating spacecraft
Sleeper ship
References
Further reading
Caroti, Simone (2011). The Generation Starship in Science Fiction: A Critical History, 1934–2001. McFarland.
External links
Brief summary of the evolution of generation ship concepts.
Space colonization
Hypothetical spacecraft
Fictional spacecraft by type
Interstellar travel
Science fiction themes
Space farming
Self-sustainability | Generation ship | [
"Astronomy",
"Technology"
] | 1,728 | [
"Hypothetical spacecraft",
"Astronomical hypotheses",
"Interstellar travel",
"Exploratory engineering"
] |
13,987,435 | https://en.wikipedia.org/wiki/Pitrakinra | Pitrakinra (trade name Aerovant) is a 15-kDa recombinant human protein derived from wild-type human interleukin-4 (IL-4). It is an IL-4 and IL-13 antagonist that has been studied in a phase IIb clinical trial for the treatment of asthma. Two point mutations on pitrakinra (position 121 mutated from arginine to aspartic acid and position 124 mutated from tyrosine to aspartic acid) confer its ability to block signaling of IL-4 and interleukin-13 (IL-13) by preventing assembly of IL-4 receptor alpha (IL-4Rα) with either IL-2Rγ or IL-13Rα. Upregulation of Th2 cytokines, including IL-4 and IL-13, is thought to be critical for the allergic inflammation associated with atopic diseases such as asthma and eczema. The targets of pitrakinra action are inflammatory cells (dendritic cells, Th2 cells, B cells) and structural cells (smooth muscle, endothelium, epithelium) that express IL-4Rα.
The drug has been applied both as a subcutaneous injection and as an inhalation, but the latter formulation proved to be more effective.
Mechanism of action
Asthma results from a dysregulated, hyperresponsive immune response in the airways. Some immune cells in allergic asthmatics respond aggressively to foreign allergens with the release of IL-4 and IL-13, two key mediators that initiate a cycle of inflammation in the lung. Pitrakinra is an antagonist of the interleukin-4 receptor alpha chain (IL-4Rα), a receptor subunit that is also part of the IL-13 receptor complex. It thereby blocks the inflammatory effects of IL-4 and IL-13, interrupting the Th2 lymphocyte immune response.
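Receptor blockade of this kind is commonly described with the Gaddum equation for competitive antagonism; this is a generic pharmacological illustration, not a model fitted to pitrakinra data:

\theta = \frac{[L]}{[L] + K_d \left(1 + [I]/K_i\right)}

where θ is the fraction of IL-4Rα occupied by the agonist (IL-4), [L] and K_d are the agonist concentration and dissociation constant, and [I] and K_i are those of the antagonist. As [I]/K_i increases, ever higher agonist concentrations are needed for the same receptor occupancy, which is how a high-affinity, non-signalling IL-4 variant can suppress responses to both IL-4 and IL-13 at their shared receptor chain.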
Pitrakinra protects allergic cynomolgus monkeys from allergen-induced airways hyper-responsiveness and lung eosinophilia in both prophylactic and therapeutic model settings. Subcutaneous injection of pitrakinra in human patients with severe atopic eczema for 4 weeks or more decreases the eczema clinical score and circulating IgE concentrations, and normalizes T-cell subsets. Decreases in Forced Expiratory Volume in 1 s (FEV1) after allergen challenge, measured after 4 weeks of inhalation of pitrakinra, support the hypothesis that dual inhibition of IL-4 and IL-13 can affect the course of the late asthmatic response after experimental allergen challenge. The reduced frequency of spontaneous asthma attacks requiring rescue medication suggests that the use of pitrakinra improves control over asthma symptoms.
In addition to improvements in the late asthmatic response, measurement of Fractional Exhaled Nitric Oxide (FENO) indicates that the resting inflammatory status of the lungs is significantly attenuated after inhalation of pitrakinra for 4 weeks. This result supports observations that IL-4 and other pro-inflammatory mediators induce nitric oxide synthase (iNOS) through STAT1 and STAT6 in epithelial cells. Basal FENO could be more dependent on up-regulation of iNOS by IL-4 and IL-13, whereas the increase in FENO after allergen challenge could involve additional pathways in epithelial cells, and perhaps macrophages, that are not affected by the inhibition of IL-4Rα. Pitrakinra may down-regulate baseline Th2 inflammation in the asthmatic lung while not interfering with the lung's natural defenses on contact with large amounts of foreign allergen.
Adverse events
Pitrakinra is associated with few adverse effects, whether administered by subcutaneous injection or by inhalation, in participants with atopic asthma or atopic eczema. The most common adverse event after subcutaneous administration is injection-site discomfort, a common event with most injectable drugs. However, these events are neither associated with the development of antibodies nor with any discernible pattern (i.e., they were not more common at the end of the 4 weeks of exposure). There are also fewer spontaneous asthma attacks and respiratory-related adverse events requiring rescue medication in participants who received pitrakinra subcutaneously than in those in the placebo group.
References
Antiasthmatic drugs
Recombinant proteins | Pitrakinra | [
"Biology"
] | 919 | [
"Recombinant proteins",
"Biotechnology products"
] |
13,998,668 | https://en.wikipedia.org/wiki/Cancer%20immunology | Cancer immunology (immuno-oncology) is an interdisciplinary branch of biology and a sub-discipline of immunology that is concerned with understanding the role of the immune system in the progression and development of cancer; the most well known application is cancer immunotherapy, which utilises the immune system as a treatment for cancer. Cancer immunosurveillance and immunoediting are based on (i) protection against development of tumors in animal systems and (ii) identification of targets for immune recognition of human cancer.
Definition
Cancer immunology is an interdisciplinary branch of biology concerned with the role of the immune system in the progression and development of cancer; the most well known application is cancer immunotherapy, where the immune system is used to treat cancer. Cancer immunosurveillance is a theory formulated in 1957 by Burnet and Thomas, who proposed that lymphocytes act as sentinels in recognizing and eliminating continuously arising, nascent transformed cells. Cancer immunosurveillance appears to be an important host protection process that decreases cancer rates through inhibition of carcinogenesis and maintenance of regular cellular homeostasis. It has also been suggested that immunosurveillance primarily functions as a component of a more general process of cancer immunoediting.
Tumor antigens
Tumors may express tumor antigens that are recognized by the immune system and may induce an immune response. These tumor antigens are either TSA (Tumor-specific antigen) or TAA (Tumor-associated antigen).
Tumor-specific
Tumor-specific antigens (TSA) are antigens that only occur in tumor cells. TSAs can be products of oncoviruses like E6 and E7 proteins of human papillomavirus, occurring in cervical carcinoma, or EBNA-1 protein of EBV, occurring in Burkitt's lymphoma cells. Another example of TSAs are abnormal products of mutated oncogenes (e.g. Ras protein) and anti-oncogenes (e.g. p53).
Tumor-associated antigens
Tumor-associated antigens (TAA) are present in healthy cells, but for some reason they also occur in tumor cells. However, they differ in quantity, place or time period of expression. Oncofetal antigens are tumor-associated antigens expressed by embryonic cells and by tumors. Examples of oncofetal antigens are AFP (α-fetoprotein), produced by hepatocellular carcinoma, or CEA (carcinoembryonic antigen), occurring in ovarian and colon cancer. More tumor-associated antigens are HER2/neu, EGFR or MAGE-1.
Immunoediting
Cancer immunoediting is a process in which the immune system interacts with tumor cells. It consists of three phases: elimination, equilibrium and escape. These phases are often referred to as "the three Es" of cancer immunoediting. Both the adaptive and innate immune systems participate in immunoediting.
In the elimination phase, the immune response leads to the destruction of tumor cells and therefore to tumor suppression. However, some tumor cells may gain more mutations, change their characteristics and evade the immune system. These cells might enter the equilibrium phase, in which the immune system does not recognise all tumor cells, but at the same time the tumor does not grow. This condition may lead to the escape phase, in which the tumor gains dominance over the immune system, starts growing and establishes an immunosuppressive environment.
As a consequence of immunoediting, tumor cell clones that are less responsive to the immune system gain dominance in the tumor over time, as the recognized cells are eliminated. This process may be considered akin to Darwinian evolution, where cells containing pro-oncogenic or immunosuppressive mutations survive to pass on their mutations to daughter cells, which may themselves mutate and undergo further selective pressure. This results in a tumor consisting of cells with decreased immunogenicity that can hardly be eliminated. This phenomenon has been shown to occur as a result of immunotherapies in cancer patients.
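This selection dynamic can be caricatured with a toy simulation; the population size, kill probability and mutation step below are arbitrary assumptions chosen only to show how immune pressure shifts a tumor toward low immunogenicity, not a published model:

import random

def simulate_immunoediting(n_cells=1000, generations=50, mutation_sd=0.05):
    # Each cell carries an immunogenicity score in [0, 1]; higher scores
    # mean the cell is more likely to be recognized and killed.
    cells = [random.uniform(0.4, 1.0) for _ in range(n_cells)]
    for _ in range(generations):
        # Elimination: kill probability is proportional to immunogenicity.
        survivors = [c for c in cells if random.random() > 0.5 * c]
        if not survivors:  # the immune system eliminated the tumor outright
            return 0.0
        # Regrowth: surviving clones divide; daughters mutate slightly.
        cells = [min(1.0, max(0.0, random.choice(survivors)
                              + random.gauss(0, mutation_sd)))
                 for _ in range(n_cells)]
    return sum(cells) / len(cells)

print(simulate_immunoediting())  # typically far below the initial mean of ~0.7

As recognized clones are purged, the mean immunogenicity drifts steadily downward, a crude analogue of the escape phase described above.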
Tumor evasion mechanisms
CD8+ cytotoxic T cells are a fundamental element of anti-tumor immunity. Their TCR receptors recognise antigens presented by MHC class I and, when bound, the Tc cell triggers its cytotoxic activity. MHC I is present on the surface of all nucleated cells. However, some cancer cells lower their MHC I expression and avoid detection by cytotoxic T cells. This can be done by mutation of an MHC I gene or by lowering sensitivity to IFN-γ (which influences the surface expression of MHC I). Tumor cells also develop defects in the antigen presentation pathway, which leads to down-regulation of tumor antigen presentation. Such defects occur, for example, in the transporter associated with antigen processing (TAP) or in tapasin. On the other hand, a complete loss of MHC I is a trigger for NK cells. Tumor cells therefore maintain a low, but not absent, expression of MHC I.
Another way to escape cytotoxic T cells is to stop expressing molecules essential for co-stimulation of cytotoxic T cells, such as CD80 or CD86.
Tumor cells express molecules to induce apoptosis or to inhibit T lymphocytes:
By expressing FasL on their surface, tumor cells may induce apoptosis of T lymphocytes through the FasL–Fas interaction.
By expressing PD-L1 on their surface, tumor cells can suppress T lymphocytes through the PD-1–PD-L1 interaction.
Tumor cells can also gain resistance to the effector mechanisms of NK cells and cytotoxic CD8+ T cells:
by loss of gene expression or inhibition of apoptotic signal pathway molecules: APAF1, Caspase 8, Bcl-2-associated X protein (bax) and Bcl-2 homologous antagonist killer (bak).
by induction of expression or overexpression of antiapoptotic molecules: Bcl-2, IAP or XIAP.
Tumor microenvironment
Production of TGF-β by tumor cells and other cells (such as myeloid-derived suppressor cells) leads to the conversion of CD4+ T cells into suppressive regulatory T cells (Tregs) by contact-dependent or contact-independent stimulation. In healthy tissue, functioning Tregs are essential to maintain self-tolerance. In a tumor, however, Tregs form an immunosuppressive microenvironment.
Tumor cells produce particular cytokines (such as colony-stimulating factor) that drive the generation of myeloid-derived suppressor cells (MDSCs). These cells are a heterogeneous collection of cell types, including precursors of dendritic cells, monocytes and neutrophils. MDSCs have suppressive effects on T lymphocytes, dendritic cells and macrophages, and produce the immunosuppressive cytokines TGF-β and IL-10.
Other producers of suppressive TGF-β and IL-10 are tumor-associated macrophages; these macrophages mostly display the phenotype of alternatively activated M2 macrophages. Their activation is promoted by Th type 2 cytokines (such as IL-4 and IL-13). Their main effects are immunosuppression and the promotion of tumor growth and angiogenesis.
Tumor cells can carry non-classical MHC class I molecules, such as HLA-G, on their surface. HLA-G induces Tregs and MDSCs, polarises macrophages into the alternatively activated M2 phenotype, and has other immunosuppressive effects on immune cells.
Immunomodulation methods
The immune system is a key player in fighting cancer. As described above under tumor evasion mechanisms, tumor cells modulate the immune response to their own benefit. The immune response can therefore be enhanced to boost immunity against tumor cells.
Monoclonal anti-CTLA-4 and anti-PD-1 antibodies are called immune checkpoint inhibitors:
CTLA-4 is a receptor upregulated on the membrane of activated T lymphocytes; the CTLA-4–CD80/86 interaction switches T lymphocytes off. Blocking this interaction with a monoclonal anti-CTLA-4 antibody increases the immune response. An example of an approved drug is ipilimumab.
PD-1 is also a receptor upregulated on the surface of T lymphocytes after activation. The interaction of PD-1 with PD-L1 leads to switching off or apoptosis of the T cell. PD-L1 is a molecule that can be produced by tumor cells. A monoclonal anti-PD-1 antibody blocks this interaction, thus improving the immune response of CD8+ T lymphocytes. An example of an approved cancer drug is nivolumab.
Chimeric Antigen Receptor T cell
These CAR receptors are genetically engineered receptors with an extracellular tumor-specific binding site and an intracellular signalling domain that enables T-lymphocyte activation.
Cancer vaccine
A vaccine can be composed of killed tumor cells, recombinant tumor antigens, or dendritic cells incubated with tumor antigens (a dendritic cell-based cancer vaccine).
Relationship to chemotherapy
Obeid et al. investigated how inducing immunogenic cancer cell death ought to become a priority of cancer chemotherapy. They reasoned that the immune system could play a role, via a 'bystander effect', in eradicating chemotherapy-resistant cancer cells. However, extensive research is still needed on how the immune response is triggered against dying tumour cells.
Professionals in the field have hypothesized that 'apoptotic cell death is poorly immunogenic whereas necrotic cell death is truly immunogenic'. This is perhaps because cancer cells being eradicated via a necrotic cell death pathway induce an immune response by triggering dendritic cells to mature, due to inflammatory response stimulation. On the other hand, apoptosis is connected to slight alterations within the plasma membrane causing the dying cells to be attractive to phagocytic cells. However, numerous animal studies have shown the superiority of vaccination with apoptotic cells, compared to necrotic cells, in eliciting anti-tumor immune responses.
Thus Obeid et al. propose that the way in which cancer cells die during chemotherapy is vital. Anthracyclines produce a beneficial immunogenic environment. The researchers report that when cancer cells are killed with this class of agent, uptake and presentation of tumour antigens by antigen-presenting dendritic cells is encouraged, allowing a T-cell response that can shrink tumours. Therefore, activating tumour-killing T-cells is crucial for immunotherapy success.
However, advanced cancer patients with immunosuppression have left researchers in a dilemma as to how to activate their T-cells. The way the host dendritic cells react and uptake tumour antigens to present to CD4+ and CD8+ T-cells is the key to success of the treatment.
See also
Oncogenomics
References
Oncology
Branches of immunology | Cancer immunology | [
"Biology"
] | 2,318 | [
"Branches of immunology"
] |
3,272,644 | https://en.wikipedia.org/wiki/History%20of%20chemical%20engineering | Chemical engineering is a discipline that developed out of the practice of "industrial chemistry" in the late 19th century. Before the Industrial Revolution (18th century), industrial chemicals and other consumer products such as soap were mainly produced through batch processing. Batch processing is labour-intensive: individuals mix predetermined amounts of ingredients in a vessel and heat, cool or pressurize the mixture for a predetermined length of time. The product may then be isolated, purified and tested to achieve a saleable product. Batch processes are still performed today on higher value products, such as pharmaceutical intermediates, specialty and formulated products such as perfumes and paints, or in food manufacture such as pure maple syrups, where a profit can still be made despite batch methods being slower and less efficient in terms of labour and equipment usage. Due to the application of chemical engineering techniques during manufacturing process development, larger volume chemicals are now produced through continuous "assembly line" chemical processes. The Industrial Revolution was when the shift from batch to more continuous processing began to occur. Today commodity chemicals and petrochemicals are predominantly made using continuous manufacturing processes, whereas speciality chemicals, fine chemicals and pharmaceuticals are made using batch processes.
Origin
The Industrial Revolution led to an unprecedented escalation in demand, both with regard to quantity and quality, for bulk chemicals such as soda ash. This meant two things: one, the size of the activity and the efficiency of operation had to be enlarged, and two, serious alternatives to batch processing, such as continuous operation, had to be examined.
The first chemical engineer
Industrial chemistry was being practiced in the 1800s, and its study at British universities began with the publication by Friedrich Ludwig Knapp, Edmund Ronalds and Thomas Richardson of the important book Chemical Technology in 1848. By the 1880s the engineering elements required to control chemical processes were being recognized as a distinct professional activity. Chemical engineering was first established as a profession in the United Kingdom after the first chemical engineering course was given at the University of Manchester in 1887 by George E. Davis in the form of twelve lectures covering various aspects of industrial chemical practice. As a consequence George E. Davis is regarded as the world's first chemical engineer. Today, chemical engineering is a highly regarded profession. Chemical engineers with experience can become licensed Professional Engineers in the United States, aided by the National Society of Professional Engineers, or gain "Chartered" chemical-engineer status through the UK-based Institution of Chemical Engineers.
Professional associations
In 1880, the first attempt was made to form a Society of Chemical Engineers in London. This eventually resulted in the formation of the Society of Chemical Industry in 1881. The American Institute of Chemical Engineers (AIChE) was founded in 1908, and the UK Institution of Chemical Engineers (IChemE) in 1922. Both now have substantial international membership. Some other countries now have chemical engineering societies or sections within chemical or engineering societies, but the AIChE, IChemE and IIChE remain the major ones in numbers and international spread; the AIChE and IChemE are both open to suitably qualified professionals or students of chemical engineering anywhere in the world.
Definitions
For the other established branches of engineering, there were ready associations in the public's mind: Mechanical Engineering meant machines, Electrical Engineering meant circuitry, and Civil Engineering meant structures. Chemical engineering came to mean chemicals production.
Unit operation
Arthur Dehon Little is credited with the approach chemical engineers to this day take: process-oriented rather than product-oriented analysis and design. The concept of unit operations was developed to emphasize the underlying similarity among seemingly different chemical productions. For example, the principles are the same whether one is concerned about separating alcohol from water in a fermenter, or separating gasoline from diesel in a refinery, as long as the basis of separation is generation of a vapor of a different composition from the liquid. Therefore, such separation processes can be studied together as a unit operation, in this case called distillation.
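The "underlying similarity" can also be stated mathematically: every distillation-type separation rests on the same vapour–liquid equilibrium relationship. For a binary mixture with constant relative volatility α (the numbers below are an illustrative textbook case, not data for a particular system):

y = \frac{\alpha x}{1 + (\alpha - 1)x}

With α = 2.5 and a liquid mole fraction x = 0.4 of the more volatile component, y = (2.5 × 0.4)/(1 + 1.5 × 0.4) ≈ 0.63, so each equilibrium stage enriches the vapour in the light component, whether the mixture is alcohol and water in a fermenter or a gasoline cut and diesel in a refinery.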
Unit processes
In the early part of the last century, a parallel concept called unit processes was used to classify reactive processes. Thus oxidations, reductions, alkylations, etc. formed separate unit processes and were studied as such. This was natural considering the close affinity of chemical engineering to industrial chemistry at its inception. Gradually, however, the subject of chemical reaction engineering has largely replaced the unit process concept. This subject treats the entire body of chemical reactions as having a personality of its own, independent of the particular chemical species or chemical bonds involved. The latter contribute to this personality in no small measure, but to design and operate chemical reactors, knowledge of characteristics such as rate behaviour, thermodynamics, and single- or multiphase nature is more important. The emergence of chemical reaction engineering as a discipline signaled the severance of the umbilical cord connecting chemical engineering to industrial chemistry and cemented the unique character of the discipline.
See also
George E. Davis
Chemical Industry
Chemical plant
commodity chemicals
speciality chemicals
fine chemicals
Institution of Chemical Engineers
Northeast of England Process Industry Cluster
References
Further reading
William Furter (ed) (1982) A Century of Chemical Engineering, Plenum Press (New York)
Colin Divall & Sean F. Johnston (2000) Scaling Up; The Institution of Chemical Engineers and the Rise of a New Profession, Kluwer Academic (Dordrecht, Netherlands)
External links
"History of ChEn: Struggle for Survival"
"About AIChE" (from www.stevens-tech.edu)
Chemical engineering
History of the chemical industry | History of chemical engineering | [
"Chemistry",
"Engineering"
] | 1,109 | [
"Chemical engineering",
"History of the chemical industry",
"nan"
] |
3,275,398 | https://en.wikipedia.org/wiki/Bird%20intelligence | The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. Furthermore, bird brains have two-to-four times the neuron packing density of mammal brains, for higher overall efficiency. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli.
The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies.
Studies
Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities.
Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, that is, to rapidly perceive the number of a small set of items without counting them one by one. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6.
Cormorants used by Chinese fishermen were given every eighth fish as a reward, and were found to be able to keep count up to 7, an observation recounted by E.H. Hoh in Natural History magazine.
Many birds are also able to detect changes in the number of eggs in their nest and brood. Parasitic cuckoos are often known to remove one of the host eggs before laying their own.
Associative learning
Visual or auditory signals and their association with food and other rewards have been well studied, and birds have been trained to recognize and distinguish complex shapes. This may be an important ability which aids their survival.
Associative learning is a method often used on animals to assess cognitive abilities. Bebus et al. define associative learning as "acquiring knowledge of a predictive or causal relationship (association) between two stimuli, responses or events." A classic example of associative learning is Pavlovian conditioning. In avian research, performance on simple associative learning tasks can be used to assess how cognitive abilities vary with experimental measures.
Associative learning vs. reversal learning
Bebus et al. demonstrated that associative learning in Florida scrub-jays correlated with reversal learning, personality, and baseline hormone levels. To measure associative learning abilities, they associated coloured rings with food rewards. To test reversal learning, the researchers simply reversed the rewarding and non-rewarding colours to see how quickly the scrub-jays would adapt to the new association. Their results suggest that associative learning is negatively correlated with reversal learning. In other words, birds that learned the first association quickly were slower to learn the new association upon reversal. The authors conclude that there must be a trade-off between learning an association and adapting to a new association.
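To make the task structure concrete, the sketch below simulates an acquisition-then-reversal experiment with a standard Rescorla-Wagner update. This is an illustration only, not the authors' analysis: the learning rate and trial counts are arbitrary assumptions.

def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Track association strength V across trials.

    rewards: sequence of outcomes (1 = food, 0 = no food)
    alpha:   learning rate (illustrative assumption)
    v0:      starting association strength
    """
    v = v0
    history = []
    for r in rewards:
        v += alpha * (r - v)  # prediction-error update
        history.append(v)
    return history

# Phase 1 (acquisition): one colour is rewarded for 30 trials.
acq = rescorla_wagner([1] * 30)
# Phase 2 (reversal): the same colour stops being rewarded.
rev = rescorla_wagner([0] * 30, v0=acq[-1])
print(f"V after acquisition: {acq[-1]:.2f}")  # approaches 1.0
print(f"V after reversal:    {rev[-1]:.2f}")  # decays back toward 0.0

Note that in this single-rate model, fast acquisition automatically implies fast reversal; the negative correlation Bebus et al. observed therefore suggests that acquisition and reversal are governed by partly separate processes in real birds.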
Neophobia
Bebus et al. also showed that reversal learning was correlated with neophobia: birds that were afraid of a novel environment set up by the researchers were faster at reversal learning. The inverse correlation, where less neophobic birds performed better on the associative learning task, was measured but was not statistically significant. Opposite results were found by Guido et al., who showed that neophobia in Milvago chimango, a bird of prey native to South America, negatively correlated with reversal learning. In other words, neophobic birds were slower at reversal learning. The researchers suggested an explanation rooted in urban ecology for this discrepancy: since birds living near urban areas benefit from being less neophobic when feeding on human resources (such as detritus), but also benefit from being flexible learners (since human activity fluctuates), perhaps low neophobia coevolved with high reversal learning ability. Therefore, personality alone might be insufficient to predict associative learning due to contextual differences.
Hormones
Bebus et al. found a correlation between baseline hormone levels and associative learning. According to their study, low baseline levels of corticosterone (CORT), a hormone involved in stress response, predicted better associative learning. In contrast, high baseline levels of CORT predicted better reversal learning. In summary, Bebus et al. found that low neophobia (not statistically significant) and low baseline CORT levels predicted better associative learning abilities. Inversely, high neophobia and high baseline CORT levels predicted better reversal learning abilities.
Diet
In addition to reversal learning, personality, and hormone levels, further research suggests that diet may also correlate with associative learning performance. Bonaparte et al. demonstrated that high-protein diets in zebra finches correlated with better associative learning. The researchers showed that the high-protein diet treatment was associated with larger head width, tarsus length, and body mass in the treated males. In subsequent testing, the high-protein diet and a larger head-to-tarsus ratio correlated with better performance on an associative learning task. The researchers used associative learning as a correlate of cognition to support the hypothesis that nutritional stress during development can negatively impact cognitive development, which in turn may reduce reproductive success. One way that poor diet may affect reproductive success is through song learning. According to the developmental stress hypothesis, zebra finches learn songs during a stressful period of development, and their ability to learn complex songs reflects their adequate development.
In contrast, Kriengwatana et al. found that a restricted food diet in zebra finches prior to nutritional independence (that is, before the birds are able to feed themselves) enhanced spatial associative learning, impaired memory, and had no effect on neophobia. They also failed to find a correlation between physiological growth and associative learning. Though Bonaparte et al. focused on protein content whereas Kriengwatana et al. focused on quantity of food, the results seem contradictory. Further research should be conducted to clarify the relationship between diet and associative learning.
Ecology
Associative learning may vary across species depending on their ecology. According to Clayton and Krebs, there are differences in associative learning and memory between food-storing and non-storing birds. In their experiment, food-storing jays and marsh tits and non-storing jackdaws and blue tits were introduced to seven sites, one of which contained a food reward. In the first phase of the experiment, each bird searched randomly among the seven sites until it found the reward and was allowed to partially consume it. All species performed equally well in this first task. In the second phase of the experiment, the sites were hidden again and the birds had to return to the previously rewarding site to obtain the remainder of the food item. The researchers found that food-storing birds performed better in phase two than non-storing birds. While food-storing birds preferentially returned to the rewarding sites, non-storing birds preferentially returned to previously visited sites, regardless of the presence of a reward. If the food reward was visible in phase one, there was no difference in performance between storers and non-storers. These results show that memory following associative learning, as opposed to just learning itself, can vary with ecological lifestyle.
Age
Associative learning correlates with age in Australian magpies, according to Mirville et al. In their study, the researchers initially wanted to study the effect of group size on learning. However, they found that group size correlated with the likelihood of interaction with the task, but not with associative learning itself. Instead, they found that age played a role in performance: adults were more successful at completing the associative learning task, but less likely to approach the task initially. Conversely, juveniles were less successful at completing the task, but more likely to approach it. Therefore, adults in larger groups were the most likely individuals to complete the task, due to their increased likelihood to both approach and succeed on the task.
Weight
Though it may seem universally beneficial to be a fast learner, Madden et al. suggested that the weight of individuals affected whether or not associative learning was adaptive. The researchers studied common pheasants and showed that heavy birds that performed well on associative tasks had an increased probability of survival to four months old after being released into the wild, whereas light birds that performed well on associative tasks were less likely to survive. The researchers provide two explanations for the effect of weight on the results: perhaps larger individuals are more dominant and benefit more from novel resources than smaller individuals, or they simply have a higher survival rate due to bigger food reserves, being harder for predators to kill, increased motility, and so on. Alternatively, ecological pressures may affect smaller individuals differently. Associative learning might be more costly for smaller individuals, thus reducing their fitness and leading to maladaptive behaviours. Additionally, Madden et al. found that slow reversal learning in both groups correlated with low survival rate. The researchers suggested a trade-off hypothesis in which the cost of reversal learning would inhibit the development of other cognitive abilities. According to Bebus et al., there is a negative correlation between associative learning and reversal learning. Perhaps low reversal learning correlates with better survival due to enhanced associative learning. Madden et al. also suggested this hypothesis but note their skepticism, since they could not reproduce the negative correlation between associative and reversal learning found by Bebus et al.
Neural representations
In their research, Veit et al. showed that associative learning modified NCL (nidopallium caudolaterale) neuronal activity in crows. To test this, visual cues were presented on a screen for 600 ms, followed by a 1,000 ms delay. After the delay, a red stimulus and a blue stimulus were presented simultaneously, and the crows had to choose the correct one. Choosing the correct stimulus was rewarded with a food item. As the crows learned the associations through trial and error, NCL neurons showed increased selective activity for the rewarding stimulus. In other words, a given NCL neuron that fired when the correct stimulus was the red one increased its firing rate selectively when the crow had to choose the red stimulus. This increased firing was observed during the delay period, when the crow was presumably deciding which stimulus to choose. Additionally, increased NCL activity reflected the crow's increased performance. The researchers suggest that NCL neurons are involved in learning associations as well as in making the subsequent behavioural choice of the rewarding stimulus.
Olfactory associative learning
Though most research is concerned with visual associative learning, Slater and Hauber showed that birds of prey are also able to learn associations using olfactory cues. In their study, nine individuals from five species of birds of prey learned to pair a neutral olfactory cue with a food reward.
Spatial and temporal abilities
A common test of intelligence is the detour test, where a glass barrier between the bird and an item such as food is used in the setup. Most mammals discover that the objective is reached by first going away from the target. Whereas domestic fowl fail on this test, many within the crow family are readily able to solve the problem.
Large fruit-eating birds in tropical forests depend on trees which bear fruit at different times of the year. Many species, such as pigeons and hornbills, have been shown to be able to decide upon foraging areas according to the time of the year. Birds that show food hoarding behavior have also shown the ability to recollect the locations of food caches. Nectarivorous birds such as hummingbirds also optimize their foraging by keeping track of the locations of good and bad flowers. Studies of western scrub jays also suggest that birds may be able to plan ahead. They cache food according to future needs and at the risk of not being able to find the food on subsequent days.
Many birds follow strict time schedules in their activities. These are often dependent upon environmental cues. Birds also are sensitive to day length, and this awareness is especially important as a cue for migratory species. The ability to orient themselves during migrations is typically attributed to birds' superior sensory abilities, rather than to intelligence.
Beat induction
Research published in 2008 that was conducted with an Eleonora cockatoo named Snowball has shown that birds can identify the rhythmic beat of man-made music, an ability known as beat induction.
Self-awareness
The mirror test gives insight into whether an animal is conscious of itself and able to distinguish itself from other animals by determining whether it possesses or lacks the ability to recognize itself in its own reflection. Mirror self-recognition has been demonstrated in European magpies, making them one of only a few animal species to possess this capability. In 1981, Epstein, Lanza, and Skinner published a paper in the journal Science in which they argued that pigeons also pass the mirror test. A pigeon was trained to look in a mirror to find a response key behind it which it then turned to peck—food was the consequence of a correct choice (i.e., the pigeon learned to use a mirror to find critical elements of its environment). Next, the bird was trained to peck at dots placed on its feathers; food was, again, the consequence of touching the dot. This was done without a mirror. Then a small bib was placed on the pigeon—enough to cover a dot placed on its lower belly. A control period without the mirror yielded no pecking at the dot. But when the mirror was shown, the pigeon became active, looked into it, and then tried to peck on the dot under the bib.
Despite this, pigeons are not classified as being able to recognize their reflection, because only trained pigeons have been shown to pass the mirror test. The animal must demonstrate they can pass the test without prior experience or training with the testing procedure.
Some studies have suggested that birds—separated from mammals by over 300 million years of independent evolution—have developed brains capable of primate-like consciousness through a process of convergent evolution. Although avian brains are structurally very different from the brains of cognitively advanced mammals, each has the neural circuitry associated with higher-level consciousness, according to a 2006 analysis of the neuroanatomy of consciousness in birds and mammals. The study acknowledges that similar neural circuitry does not by itself prove consciousness, but notes its consistency with suggestive evidence from experiments on birds' working and episodic memories, sense of object permanence, and theory of mind (both covered below).
Tool use
Many birds have been shown to be capable of using tools. The definition of a tool has been debated. One influential definition, proposed by T. B. Jones and A. C. Kamil in 1973, requires that a tool be a detached object used as a functional extension of the body.
By this definition, a bearded vulture (lammergeier) dropping a bone on a rock would not be using a tool since the rock cannot be seen as an extension of the body. However, the use of a rock manipulated using the beak to crack an ostrich egg would qualify the Egyptian vulture as a tool user. Many other species, including parrots, corvids, and a range of passerines, have been noted as tool users.
New Caledonian crows have been observed in the wild using sticks with their beaks to extract insects from logs. While young birds in the wild normally learn this technique from elders, a laboratory crow named Betty improvised a hooked tool from a wire with no prior experience, making her species the only one other than humans known to do so. In 2014, a New Caledonian crow named "007" by researchers from the University of Auckland in New Zealand solved an eight-step puzzle to get to some food. New Caledonian crows are also the only birds known to fashion their own tools, cutting them from the leaves of pandanus trees. Researchers have discovered that New Caledonian crows do not just use single objects as tools; they can also construct novel compound tools through assemblage of otherwise non-functional elements. The woodpecker finch from the Galapagos Islands also uses simple stick tools to assist it in obtaining food. In captivity, a young Española cactus finch learned to imitate this behavior by watching a woodpecker finch in an adjacent cage.
Carrion crows (Corvus corone orientalis) in urban Japan and American crows (C. brachyrhynchos) in the United States have innovated a technique to crack hard-shelled nuts by dropping them onto crosswalks and letting them be run over and cracked by cars. They then retrieve the cracked nuts when the cars are stopped at the red light. Macaws have been shown to utilize rope to fetch items that would normally be difficult to reach. Striated herons (Butorides striatus) use bait to catch fish.
Observational learning
Using rewards to reinforce responses is often used in laboratories to test intelligence. However, the ability of animals to learn by observation and imitation is considered more significant. Ravens have been noted for their ability to learn from each other.
Scientists have discovered that birds know to avoid the plants where toxic animals dwell. A University of Bristol team showed that birds do not just learn the colours of dangerous prey; they can also learn the appearance of the plants on which such insects live.
Brain anatomy
At the beginning of the 20th century, scientists argued that birds had hyper-developed basal ganglia, with tiny mammalian-like telencephalon structures. Modern studies have refuted this view. The basal ganglia only occupy a small part of the avian brain. Instead, it seems that birds use a different part of their brain, the medio-rostral neostriatum/hyperstriatum ventrale (see also nidopallium), as the seat of their intelligence, and the brain-to-body size ratio of psittacines (parrots) and corvines (birds of the crow family) is actually comparable to that of higher primates. Birds can also have twice the neuron packing density of primate brains, in some cases giving total neuron counts similar to those of much larger mammalian brains. This suggests that the nuclear architecture of the avian brain allows more efficient neuron packing and interconnection than that of mammal brains. The avian pallium's neuroarchitecture is reminiscent of the mammalian cerebral cortex, and has been suggested to be an equivalent neural basis for consciousness.
Studies with captive birds have given insight into which birds are the most intelligent. While parrots have the distinction of being able to mimic human speech, studies with the grey parrot have shown that some are able to associate words with their meanings and form simple sentences (see Alex). Parrots and the corvid family of crows, ravens, and jays are considered the most intelligent of birds. Research has shown that these species tend to have the largest high vocal centers. Dr. Harvey J. Karten, a neuroscientist at UCSD who has studied the physiology of birds, has discovered that the lower parts of avian brains are similar to those of humans.
Social behavior
Social life has been considered a driving force for the evolution of intelligence in various types of animals. Many birds have social organizations, and loose aggregations are common. Many corvid species separate into small family groups or "clans" for activities such as nesting and territorial defense. The birds then congregate in massive flocks made up of several different species for migratory purposes. Some birds make use of teamwork while hunting. Predatory birds hunting in pairs have been observed using a "bait and switch" technique, whereby one bird will distract the prey while the other swoops in for the kill.
Social behavior requires individual identification, and most birds appear to be capable of recognizing mates, siblings, and young. Other behaviors such as play and cooperative breeding are also considered indicators of intelligence.
Crows appear to be able to remember who observed them catching food. They also steal food caught by others.
In some fairy-wrens, such as the superb and red-backed, males pick flower petals in colors contrasting with their bright nuptial plumage and present them to others of their species, which will acknowledge, inspect, and sometimes manipulate the petals. The behavior is not linked to sexual or aggressive activity in the short or medium term thereafter, though its function is apparently not aggressive and quite possibly sexual.
A study in 2023 found that some parrots in captivity could be trained to make video calls to each other. The parrots would ring a bell whenever they wanted to make a video call, and then chose the parrot on the screen they wanted to interact with. The parrots seemed to understand that another parrot existed on-screen and even learned new skills from each other, such as flying, foraging, and new sounds.
Communication
Birds communicate with their flockmates through song, calls, and body language. Studies have shown that the intricate territorial songs of some birds must be learned at an early age, and that the memory of the song will serve the bird for the rest of its life. Some bird species are able to communicate in several regional varieties of their songs. For example, the New Zealand saddleback will learn the different song "dialects" of clans of its own species, much as human beings might acquire diverse regional dialects. When a territory-owning male of the species dies, a young male will immediately take his place, singing to prospective mates in the dialect appropriate to the territory he is in. Similarly, around 300 tui songs have been recorded. It has been suggested that the greater the competition in an area, the more likely the birds are to create new songs or make their existing songs more complex.
Recent studies indicate that some birds may have an ability to memorize "syntactic" patterns of sounds, and that they can be taught to reject the ones determined to be incorrect by the human trainers. These experiments were carried out by combining whistles, rattles, warbles, and high-frequency motifs.
Crows have been studied for their ability to understand recursion.
Conceptual abilities
Evidence that birds can form abstract concepts such as "same vs. different" has been provided by a grey parrot named Alex. Alex was trained by animal psychologist Irene Pepperberg to vocally label more than 100 objects of different colors, shapes, and materials. Alex could also request or refuse these objects ("I want X") and quantify numbers of them. Alex was also used as a "teacher" for other younger grey parrots in Irene Pepperberg's lab. Alex would observe and listen to the training on many occasions, verbally correcting the younger learning parrot or calling out a correct answer before the learner could give a response.
Macaws have been demonstrated to comprehend the concept of "left" and "right".
Object permanence
Macaws, carrion crows, and chickens have been demonstrated to fully comprehend the concept of object permanence at a young age. Macaws will even avoid the "A-not-B error". If they are shown an item, especially one whose purpose they are familiar with, they will search logically for where it could feasibly have been placed. One test of this was done as follows: a macaw was shown an item; the item was then hidden behind the back of the trainer and placed in a container unfamiliar to the bird. Without the macaw watching, multiple objects were spread out on a table, including that container and another container. The macaw searched the target container, then the other, before returning to open the correct container, thereby demonstrating knowledge of, and the ability to search for, the item.
Theory of mind
A study on the little green bee-eater suggests that these birds may be able to see from the point of view of a predator. The brown-necked raven has been observed hunting lizards in complex cooperation with other ravens, demonstrating an apparent understanding of prey behavior. The California scrub jay hides caches of food and will later re-hide food if it was watched by another bird the first time, but only if the bird hiding the food has itself stolen food before from a cache. A male Eurasian jay takes into account which food his bonded partner prefers to eat when feeding her during courtship feeding rituals. Such an ability to see from the point of view of another individual and to attribute motivations and desires had previously been attributed only to the great apes and elephants.
Conservation
Avian innovation and creativity may lead to more robust populations. Canadian biologist Louis Lefebvre states: "We have to do what we can to prevent habitat destruction and extinction of species, but there's a little bit of hope out there in how the species are able to respond". A 2020 study found that behavioral plasticity is associated with reduced extinction risk in birds.
See also
Animal intelligence
Pigeon intelligence
Evolution of cognition
Pig intelligence
References
External links
An overview of the brain at the Life of Birds website, pbs.org
The anatomy of a bird brain, earthlife.net
Animal intelligence
Ornithology
Intelligence | Bird intelligence | [
"Biology"
] | 5,316 | [
"Behavior by type of animal",
"Behavior",
"Bird behavior"
] |
17,990,982 | https://en.wikipedia.org/wiki/Diffusion%20gradient | A diffusion gradient is a gradient in the rates of diffusion of multiple groups of molecules through a medium or substrate. The groups of molecules may constitute multiple substances, portions of the same substance that have different temperatures, or other differentiable groupings. The analysis of diffusion gradients has applications in many sciences and technologies, as described for the following contexts:
Double diffusive convection, in which density differences, often reflecting temperature differences, affect fluid flows
Diffusion MRI, which visualizes tissues on the basis of diffusion gradients of various molecules, especially water molecules
Immunodiffusion, which can use diffusion rate differentials to separate multiple immune complex species
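Across these contexts, the common formal core is Fick's first law of diffusion, which in one dimension relates the diffusive flux J of a group of molecules to its concentration φ and diffusion coefficient D:

J = −D dφ/dx

A diffusion gradient then arises when different groups of molecules have different values of D or different concentration profiles φ, so that each group diffuses at a different rate. (The one-dimensional form is shown here for illustration; the general statement replaces dφ/dx with the concentration gradient ∇φ.)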
Broad-concept articles
Diffusion | Diffusion gradient | [
"Physics",
"Chemistry"
] | 136 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
17,993,374 | https://en.wikipedia.org/wiki/Iron%28III%29%20chromate | Iron(III) chromate is the iron(III) salt of chromic acid with the chemical formula Fe2(CrO4)3.
Discovery
Iron(III) chromate was discovered by Samuel Hibbert-Ware in 1817 while visiting Shetland.
Production
It may be formed by the salt metathesis reaction of potassium chromate and iron(III) nitrate, which gives potassium nitrate as byproduct.
2 Fe(NO3)3 + 3 K2CrO4 → Fe2(CrO4)3 + 6 KNO3
It can also be formed by the air oxidation of iron(III) and chromium(III) oxides in a basic environment:
4 Fe2O3 + 6 Cr2O3 + 9 O2 → 4 Fe2(CrO4)3
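As a quick check that both equations above are balanced, the following sketch tallies the atoms on each side. It is purely illustrative; the per-species atom counts are entered by hand rather than parsed from the formulas.

# Verify element balance for the two reactions above.
def totals(side):
    """Sum atom counts over (coefficient, formula_dict) pairs."""
    out = {}
    for coeff, formula in side:
        for elem, n in formula.items():
            out[elem] = out.get(elem, 0) + coeff * n
    return out

FeNO3_3 = {"Fe": 1, "N": 3, "O": 9}    # Fe(NO3)3
K2CrO4 = {"K": 2, "Cr": 1, "O": 4}
Fe2CrO4_3 = {"Fe": 2, "Cr": 3, "O": 12}  # Fe2(CrO4)3
KNO3 = {"K": 1, "N": 1, "O": 3}

# 2 Fe(NO3)3 + 3 K2CrO4 -> Fe2(CrO4)3 + 6 KNO3
assert totals([(2, FeNO3_3), (3, K2CrO4)]) == totals([(1, Fe2CrO4_3), (6, KNO3)])

Fe2O3 = {"Fe": 2, "O": 3}
Cr2O3 = {"Cr": 2, "O": 3}
O2 = {"O": 2}

# 4 Fe2O3 + 6 Cr2O3 + 9 O2 -> 4 Fe2(CrO4)3
assert totals([(4, Fe2O3), (6, Cr2O3), (9, O2)]) == totals([(4, Fe2CrO4_3)])
print("Both reactions balance.")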
References
Chromates
Iron(III) compounds
Oxidizing agents | Iron(III) chromate | [
"Chemistry"
] | 179 | [
"Inorganic compounds",
"Redox",
"Oxidizing agents",
"Inorganic compound stubs",
"Salts",
"Chromates"
] |
17,996,959 | https://en.wikipedia.org/wiki/Sustainable%20consumption | Sustainable consumption (sometimes abbreviated to "SC") is the use of products and services in ways that minimizes impacts on the environment.
Sustainable consumption can be undertaken in such a way that needs are met for present-day humans and also for future generations. Sustainable consumption is often paralleled with sustainable production; consumption refers to use and disposal (or recycling) not just by individuals and households, but also by governments, businesses, and other organizations. Sustainable consumption is closely related to sustainable production and sustainable lifestyles. "A sustainable lifestyle minimizes ecological impacts while enabling a flourishing life for individuals, households, communities, and beyond. It is the product of individual and collective decisions about aspirations and about satisfying needs and adopting practices, which are in turn conditioned, facilitated, and constrained by societal norms, political institutions, public policies, infrastructures, markets, and culture."
The United Nations includes analyses of efficiency, infrastructure, and waste, as well as access to basic services, green and decent jobs, and a better quality of life for all within the concept of sustainable consumption. Sustainable consumption shares a number of common features and is closely linked to sustainable production and sustainable development. Sustainable consumption, as part of sustainable development, is part of the worldwide struggle against sustainability challenges such as climate change, resource depletion, famines, and environmental pollution.
Sustainable development as well as sustainable consumption rely on certain premises such as:
Effective use of resources, and minimization of waste and pollution
Use of renewable resources within their capacity for renewal
The reuse and upcycling of product life-cycles so that consumer items are utilized to maximum potential
Intergenerational and intragenerational equity
Goal 12 of the Sustainable Development Goals seeks to "ensure sustainable consumption and production patterns".
Consumption shifting
Studies have found that systemic change to "decarbonize" humanity's economic structures, or root-cause system changes that go beyond day-to-day politics, is required for a substantial impact on global warming. Such changes may result in more sustainable lifestyles, along with associated products, services and expenditures, being structurally supported and becoming sufficiently prevalent and effective in terms of collective greenhouse gas emission reductions.
Nevertheless, ethical consumerism usually refers only to individual choices, not to the consumption behavior and import and consumption policies decided by nation-states. National patterns have, however, been compared in terms of road vehicles, emissions (albeit without considering emissions embedded in imports), and meat consumption per capita, as well as overconsumption.
Life-cycle assessments could assess the comparative sustainability and overall environmental impacts of products – including (but not limited to): "raw materials, extraction, processing and transport; manufacturing; delivery and installation; customer use; and end of life (such as disposal or recycling)".
Sustainable food consumption
The environmental impacts of meat production (and dairy) are large: raising animals for human consumption accounts for approximately 40% of the total amount of agricultural output in industrialized countries. Grazing occupies 26% of the Earth's ice-free terrestrial surface, and feed crop production uses about one third of all arable land. A global food emissions database shows that food systems are responsible for one third of the global anthropogenic GHG emissions. Moreover, there can be competition for resources, such as land, between growing crops for human consumption and growing crops for animals, also referred to as "food vs. feed" (see also: food security).
Therefore, sustainable consumption also includes food consumption – shifting to more sustainable diets.
Novel foods such as cultured meat and dairy (still under development), existing small-scale microbial foods, and ground-up insects (see also: pet food and animal feed) have been shown in one study to have the potential to reduce environmental impacts by over 80%. Many studies, such as a 2019 IPCC report and a 2022 review about meat and the sustainability of food systems, animal welfare, and healthy nutrition, have concluded that meat consumption has to be reduced substantially for sustainable consumption. The review names broad potential measures such as "restrictions or fiscal mechanisms". Science advisors in the European Commission's Scientific Advice Mechanism came to the same conclusion, finding that "our diets need to shift towards more plant-based ingredients, rich in vegetables, fruits, wholegrains and pulses. Our diets should be limited in red meat, processed meat, salt, added sugar, and high-fat animal products, while fish and seafood should be sourced from sustainably managed stocks".
A considerable proportion of the consumers of food produced by the food system are non-livestock animals such as pet dogs: the global dog population is estimated to be 900 million, of which around 20% are regarded as owned pets. Sustainable consumption may also involve their feed. Beyond reduction of meat consumption, the composition of livestock feed and fish feed may also be subject to sustainable consumption shifts.
Product labels
The app CodeCheck lets smartphone users scan the ingredients of food, drinks and cosmetics to filter out products that are legal but nevertheless unhealthy or unsustainable from their purchases. A similar "personal shopping assistant" has been investigated in a study. Studies indicate a low level of use of sustainability labels on food. Moreover, existing labels have been intensely criticized for invalidity or unreliability, often amounting to greenwashing or being ineffective.
In one study, individuals were given a set budget, "which could be spent once a week on a wide range of food and drink products", then data "on each item's carbon footprint was clearly presented, and individuals could view the [unlimited] carbon footprint of their supermarket basket on their shopping bill."
The processes of consumption
Not only the selection, quantity, and quality of consumed products are relevant to sustainable consumption; the process of consumption, including how selected products are distributed or gathered, can be considered a component of it as well. For instance, ordering from a local store online could substantially reduce emissions (in terms of transportation emissions, and when not considering which options are available). Bundling items could reduce the carbon emissions of deliveries, and the carbon footprints of in-person shopping trips can be eliminated, for example, by biking to the shop instead of driving.
Product information transparency and trade control
If information is linked to products, e.g. via a digital product passport, along with proper architecture and governance for data sharing and data protection, it could help achieve climate neutrality and foster dematerialization. In the EU, a Digital Product Passport is being developed. When there is an increase in greenhouse gas emissions in one country as a result of an emissions reduction by a second country with a strict climate policy, this is referred to as carbon leakage. In the EU, the proposed Carbon Border Adjustment Mechanism could help mitigate this problem, and possibly increase the capacity to account for imported pollution/harm/death-footprints. The footprints of nondomestic production are significant: for instance, a study concluded that PM2.5 air pollution induced by contemporary free trade and consumption by the G20 nations causes two million premature deaths annually, suggesting that the average lifetime consumption of roughly 28 people in these countries causes at least one premature death (at an average age of ~67), while developing countries "cannot be expected" to implement or be able to implement countermeasures without external support or internationally coordinated efforts.
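The "roughly 28 people per premature death" figure can be reproduced with back-of-envelope arithmetic. The population and lifetime values below are rough illustrative assumptions, not numbers taken from the cited study:

# Back-of-envelope check of the "~28 people per premature death" figure.
# Both inputs below are illustrative assumptions.
g20_population = 4.2e9           # assumed approximate G20 population
deaths_per_year = 2.0e6          # premature deaths attributed to G20 consumption
average_lifetime_years = 75      # assumed average consumer lifetime

deaths_over_one_lifetime = deaths_per_year * average_lifetime_years
people_per_death = g20_population / deaths_over_one_lifetime
print(f"~{people_per_death:.0f} lifetimes of consumption per premature death")
# -> ~28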
Transparency of supply chains is important for global goals such as ending net-deforestation. Policy-options for reducing imported deforestation also include "Lower/raise import tariffs for sustainably/unsustainably produced commodities" and "Regulate imports, e.g., through quotas, bans, or preferential access agreements". However, several theories of change of policy options rely on (true / reliable) information being available/provided to "shift demand—both intermediate and final—either away from imported [forest-risk commodities (FRC)] completely, e.g., through diet shifts (IC1), or to sustainably produced FRCs, e.g., through voluntary or mandatory supply-chain transparency (IS1, RS2)."
As of 2021, one approach under development is the binary "labelling" of investments as "green" according to a "taxonomy" created by an EU governmental body, intended to voluntarily redirect or guide financial investment on the basis of this categorization. The company Dayrize is one organization that attempts to accurately assess the environmental and social impacts of consumer products.
Reliable evaluations and categorizations of products may enable measures such as policy combinations that include transparent criteria-based eco-tariffs, bans (import control), support of selected production, and subsidies, which shift, rather than mainly reduce, consumption. International sanctions during the 2022 Russian invasion of Ukraine included restrictions on Russian fossil fuel imports while supporting alternatives, although these sanctions were not based on environment-related qualitative criteria of the products.
Fairness and income/spending freedoms
The bottom half of the population is directly responsible for less than 20% of energy footprints and consumes less than the top 5% in terms of trade-corrected energy. High-income individuals usually have higher energy footprints, as they disproportionately use their larger financial resources (which they can usually spend freely, in their entirety, for any purpose, as long as the end-user purchase is legal) for energy-intensive goods. In particular, the largest disproportionality was identified in the domain of transport, where, for example, the top 10% consume 56% of vehicle fuel and conduct 70% of vehicle purchases.
Techniques and approaches
Choice editing refers to the active process of controlling or limiting the choices available to consumers.
Personal Carbon Allowances (PCAs) refers to technology-based schemes to ration GHG emissions.
Degrowth
Degrowth refers to economic paradigms that address the need to reduce global consumption and production, whereby metrics and mechanisms like GDP are replaced by measures more closely tied to reality, such as measures of health and of social and environmental well-being, and by more needs-based structures. Broadly, degrowth aims to address overconsumption "by addressing real need, reducing wants, ensuring greater distributive equality and ultimately by suppressing production", or to achieve a "downscaling of production and consumption that increases human wellbeing and enhances [i.e. "grows"] ecological conditions and equity on the planet".
A common denominator of degrowth is a decline in the metric GDP. More concrete degrowth proposals are diverse, dispersed throughout the growing body of literature and include:
"reducing and redistributing income alone" along with GHG-pricing and wealth redistribution into a global food systems transformation
One tool that could possibly be used in large-scale policies is an app that "will guide users to prioritize reduction in high-footprint categories".
Another broad proposal suggests that "different roles of labour, work, and action should be acknowledged and scrutinized in detail" which could prompt or be necessary for an "organization of an alternative society" (see also: green job, life-cycle assessment, certification and job evaluation)
Consumption such as "domestic water consumption" could be treated as a collectively ordered activity, especially when such data and contextual education are available to the respective collective.
Demonetized activities [as well as currently financially unrewarded and unprofitable activities] are important for degrowth.
Degrowth also emphasizes the need to 'degrow' various sectors of the economy without the negative connotations usually associated with such measures, such as at least temporary job loss. If no immediate retraining occurs, leisure time may increase at least temporarily. There are some suggestions that, in general, increases in leisure time do not per se translate into increased sustainability; in particular, some time saved did not decrease the total distance of car travel.
Degrowth-related economic concepts
A study suggests that the concepts of sharing economy and circular economy on their own, while useful as broad components, are insufficient and ineffective.
Economic concepts by which scholarly literature approaches problems such as overconsumption, using this terminology to characterize broad, typically conceptual-stage, solution-proposals include:
Doughnut economy (see also: planetary boundaries)
Community economy and commons (see also: Commons#Economic theories and Gemeinwohl-Ökonomie)
Strong and weak sustainable consumption
Some writers make a distinction between "strong" and "weak" sustainability.
Strong sustainable consumption refers to participating in environmentally viable activities, such as consuming renewable and efficient goods and services (for example, electric locomotives, cycling, and renewable energy). Strong sustainable consumption also refers to the urgent need to reduce individual living space and consumption rates.
Weak sustainable consumption is the failure to adhere to strong sustainable consumption: in other words, engagement in highly polluting activities, such as frequent car use, and consumption of non-biodegradable goods (such as plastic items, metals, and mixed fabrics).
In 1992, the United Nations Conference on Environment and Development (UNCED), also referred to as the Earth Summit, recognized sustainable consumption as a concept. It also recognized the difference between strong and weak sustainable consumption but set efforts away from strong sustainable consumption.
The 1992 Earth Summit placed its emphasis on sustainable consumption rather than sustainable development. Currently, strong sustainable consumption features only marginally in discussion and research. The prerogatives of international government organizations (IGOs) have kept away from strong sustainable consumption. To avoid scrutiny, IGOs have deemed their influence limited, often aligning their interests with consumer wants and needs. In doing so, they advocate for minimal eco-efficiency improvements, resulting in government skepticism and minimal commitments to strong sustainable consumption efforts.
In order to achieve sustainable consumption, two developments have to take place: an increase in the efficiency of consumption, and a change in consumption patterns and reduction in consumption levels in industrialized countries and among rich social classes in developing countries, which have a large ecological footprint and set an example for the growing middle classes of developing countries. The first prerequisite is not sufficient on its own and qualifies as weak sustainable consumption. Technological improvements and eco-efficiency support a reduction in resource consumption. Once this aim has been met, the second prerequisite, the change in patterns and reduction of levels of consumption, is indispensable. Strong sustainable consumption approaches also pay attention to the social dimension of well-being and assess the need for changes from a risk-averse perspective. In order to achieve strong sustainable consumption, changes in infrastructures as well as in the choices customers have are required. In the political arena, weak sustainable consumption is discussed more often than strong sustainable consumption.
The so-called attitude-behaviour or values-action gap describes an obstacle to changes in individual customer behavior. Many consumers are well aware of the importance of their consumption choices and care about environmental issues; however, most do not translate their concerns into their consumption patterns, because the purchase decision process is complex and relies on social, political, and psychological factors. Young et al. identified a lack of time for research, high prices, a lack of information, and the cognitive effort needed as the main barriers to green consumption choices.
Related historical behaviors
In the early twentieth century, especially during the interwar period, families turned to sustainable consumption. When unemployment began to stretch resources, American working-class families increasingly became dependent on secondhand goods, such as clothing, tools, and furniture. Used items offered entry into consumer culture, and they also provided investment value and enhanced wage-earning capabilities. The Great Depression saw increases in the number of families forced to turn to cast-off clothing. When wages became desperately low, employers offered clothing replacements as a substitute for earnings. In response, fashion trends slowed, and high-end clothing became a luxury.
During the rapid expansion of post-war suburbia, families turned to new levels of mass consumption. Following the conference of 1956, plastics corporations were quick to enter the mass consumption market of post-war America. During this period, companies like Dixie began to replace reusable products with disposable containers (plastic items and metals). Unaware of how to dispose of the containers, consumers began to throw waste across public spaces and national parks. Following a Vermont State Legislature ban on disposable glass products, plastics corporations banded together to form the Keep America Beautiful organization in order to encourage individual action and discourage regulation. The organization teamed with schools and government agencies to spread the anti-litter message. Running public service announcements like "Susan Spotless", the organization encouraged consumers to dispose of waste in designated areas.
Culture shifts
Ecological awareness
There is a growing recognition that human well-being is interwoven with the natural environment, as well as an interest in changing human activities that cause environmental harm. This is evident in the United Nations Paris Agreement goal of limiting average global warming to 1.5 °C if possible, and in any case to below a threshold of 2.0 °C. Western culture tends to celebrate consumer sovereignty and free-market solutions to political economy problems. Yet climate change, and the associated tragedy of the global atmospheric commons, represent a large market failure. There are at least three options for achieving cultural shifts and greater ecological awareness. Private solutions labeled as Corporate Social Responsibility (CSR) strive to incorporate sustainability concerns into market supply and demand forces by increasing the transparency of productive processes, as well as awareness of the ecological footprints of consumption. Public solutions apply regulatory frameworks such as the cap-and-trade system to reduce greenhouse gas emissions. An alternative approach adopts polycentric governance strategies across governmental institutions and non-governmental organizations to achieve greater citizen engagement and self-governance. Increasing levels of sustainable consumption to contribute to United Nations Sustainable Development Goal 12 will likely require supportive educational resources.
Surveys and trends
Surveys ranking consumer values, such as environmental and social concerns and sustainability, have shown sustainable consumption values to be particularly low. Surveys on environmental awareness have seen an increase in perceived "eco-friendly" behavior. When tasked with reducing energy consumption, however, empirical research found that individuals are only willing to make minimal sacrifices and fail to reach strong sustainable consumption requirements. IGOs are not motivated to adopt sustainable policy decisions, since consumer demands may not meet the requirements of sustainable consumption.
Ethnographic research across Europe concluded that, after the financial crisis of 2007–2008, Ireland saw an increase in secondhand shopping and communal gardening. Following a series of financial scandals, anti-austerity became a cultural movement. Irish consumer confidence fell, sparking a cultural shift towards second-hand markets and charities that stressed sustainability.
Sustainable Development Goals
The Sustainable Development Goals were established by the United Nations in 2015. SDG 12 is meant to "ensure sustainable consumption and production patterns". Specifically, targets 12.1 and 12.A of SDG 12 aim to implement frameworks and support developing countries in order to "move towards more sustainable patterns of consumption and production".
Notable conferences and programs
1992—At the United Nations Conference on Environment and Development (UNCED) the concept of sustainable consumption was established in chapter 4 of the Agenda 21.
1995—The UN Economic and Social Council (ECOSOC) requested that sustainable consumption be incorporated into the UN Guidelines on Consumer Protection.
1997—A major report on SC was produced by the OECD.
1998—United Nations Environment Program (UNEP) started a SC program and SC is discussed in the Human Development Report of the UN Development Program (UNDP).
2002—A ten-year program on sustainable consumption and production (SCP) was created in the Plan of Implementation at the World Summit on Sustainable Development (WSSD) in Johannesburg.
2003—The "Marrakesh Process" was developed by co-ordination of a series of meetings and other "multi-stakeholder" processes by UNEP and UNDESA following the WSSD.
2018—Third International Conference of the Sustainable Consumption Research and Action Initiative (SCORAI) in collaboration with the Copenhagen Business School.
2022—Bologna, Italy conducted one of the first EU trials of rewarding sustainable behavior through means other than product prices or subsidy-like financial mechanisms: with a "Smart Citizen Wallet", described as a supermarket points-like system, citizens receive benefits if they, for example, use public transport and manage energy well.
See also
Choice editing
Collaborative consumption
Sustainable consumer behavior
Durable goods
Group decision-making
Product design
Overconsumption
References
External links
Consumption
Ethical consumerism
Environmental mitigation
Environmental social science concepts | Sustainable consumption | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,133 | [
"Environmental social science concepts",
"Environmental mitigation",
"Environmental social science",
"Environmental engineering"
] |
17,990,982 | https://en.wikipedia.org/wiki/E.%20coli%20long-term%20evolution%20experiment | The E. coli long-term evolution experiment (LTEE) is an ongoing study in experimental evolution begun by Richard Lenski at the University of California, Irvine, carried on by Lenski and colleagues at Michigan State University, and currently overseen by Jeffrey Barrick at the University of Texas at Austin. It has been tracking genetic changes in 12 initially identical populations of asexual Escherichia coli bacteria since 24 February 1988. Lenski performed the 10,000th transfer of the experiment on March 13, 2017. The populations reached over 73,000 generations in early 2020, shortly before being frozen because of the COVID-19 pandemic. In September 2020, the LTEE experiment was resumed using the frozen stocks. When the populations reached 75,000 generations, the LTEE was transferred from the Lenski lab to the Barrick lab. In August 2024, the LTEE populations passed 80,000 generations in the Barrick lab.
Over the course of the experiment, Lenski and his colleagues have reported a wide array of phenotypic and genotypic changes in the evolving populations. These have included changes that have occurred in all 12 populations and others that have only appeared in one or a few populations. For example, all 12 populations showed a similar pattern of rapid improvement in fitness that decelerated over time, faster growth rates, and increased cell size. Half of the populations have evolved defects in DNA repair that have caused mutator phenotypes marked by elevated mutation rates. The most notable adaptation reported so far is the evolution of aerobic growth on citrate, which is unusual in E. coli, in one population at some point between generations 31,000 and 31,500. However, E. coli usually does grow on citrate in anaerobic conditions and has an active citric acid cycle which can metabolize citrate even under aerobic conditions. The aerobic event is mainly an issue of citrate being able to enter the cell.
On May 4, 2020, Lenski announced a five-year renewal of the grant through the National Science Foundation's Long-Term Research in Environmental Biology (LTREB) Program that supports the LTEE. He also announced that Dr. Jeffrey Barrick, an associate professor of Molecular Biosciences at The University of Texas at Austin, would take over supervision of the experiment within the five-year funding period. The experiment's time at Michigan State University ended in May 2022, when the populations reached 75,000 generations but the experiment was revived and restarted in Barrick's lab on June 21, 2022.
Experimental approach
The long-term evolution experiment was designed as an open-ended means of empirical examination of central features of evolution. The experiment was begun with three principal goals:
To examine the dynamics of evolution, including the rate of evolutionary change.
To examine the repeatability of evolution.
To better understand the relationship between change on the phenotypic and genotypic levels.
As the experiment has continued, its scope has grown as new questions in evolutionary biology have arisen that it can be used to address, as the populations' evolution has presented new phenomena to study, and as technology and methodological techniques have advanced.
The use of E. coli as the experimental organism has allowed many generations and large populations to be studied in a relatively short period of time. Moreover, due to the long use of E. coli as a principal model organism in molecular biology, a wide array of tools, protocols, and procedures were available for studying changes at the genetic, phenotypic, and physiological levels. The bacteria can also be frozen and preserved while remaining viable. This has permitted the creation of what Lenski describes as a "frozen fossil record" of samples of evolving populations that can be revived at any time. This frozen fossil record allows populations to be restarted in cases of contamination or other disruption in the experiment, and permits the isolation and comparison of living exemplars of ancestral and evolved clones. Lenski chose an E. coli strain that reproduces only asexually, lacks any plasmids that could permit bacterial conjugation, and has no viable prophage. As a consequence, evolution in the experiment occurs only by the core evolutionary processes of mutation, genetic drift, and natural selection. This strict asexuality also means that genetic markers persist in lineages and clades by common descent, but cannot otherwise spread in the populations.
Lenski chose to carry out the experiment with the bacteria grown in a glucose-limited medium called DM25, which is based on a minimal medium developed by Bernard Davis for use in isolating auxotrophic mutants of E. coli using penicillin as a selective agent. To make DM25, the minimal medium is supplemented with a low concentration (25 mg/L) of glucose. Lenski chose this concentration to simplify analysis of the populations' evolution by reducing clonal interference, in which multiple versions of alleles are competing in an evolving population, while also reducing the possibility of the evolution of ecological interactions. This concentration of glucose supports a maximum population of 500 million cells of the ancestor in a 10 mL culture, though the maximum now varies among the evolved populations. DM25 also contains a large amount of citrate (about 11 times the concentration of glucose), which was originally included by Davis because it improved the killing efficiency of penicillin during his experiments, though it is now known to aid in E. coli's acquisition of iron from the medium.
Methods
The 12 populations are maintained in a 37 °C incubator in Lenski's laboratory at Michigan State University. Each day, 1% of each population is transferred to a flask of fresh DM25 growth medium. The dilution means that each population experiences 6.64 generations, or doublings, each day. Large, representative samples of each population are frozen with glycerol as a cryoprotectant at 500-generation (75-day) intervals. The bacteria in these samples remain viable, and can be revived at any time. This collection of samples is referred to as the "frozen fossil record", and provides a history of the evolution of each population through the entire experiment. The populations are also regularly screened for changes in mean fitness, and supplemental experiments are regularly performed to study interesting developments in the populations. Having passed 64,500 generations, the populations are thought to have undergone enough spontaneous mutations that every possible single point mutation in the E. coli genome has occurred multiple times.
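The 6.64 figure follows directly from the dilution: a 1:100 transfer must regrow 100-fold to reach the previous day's density, which takes log2(100) doublings. A minimal check (the transfer count below is the 10,000th transfer mentioned above):

import math

dilution_factor = 100  # a 1% transfer is a 1:100 dilution
generations_per_day = math.log2(dilution_factor)
print(f"{generations_per_day:.2f} generations per day")  # 6.64

# Cumulative generations after n daily transfers,
# e.g. the 10,000th transfer performed in March 2017:
n_transfers = 10_000
print(f"{n_transfers * generations_per_day:,.0f} generations")  # ~66,439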
Founding strain
The strain of E. coli Lenski chose to use in the long-term evolution experiment was derived from "strain B", as described in a 1966 paper by Seymour Lederberg (which incorrectly identified the strain as "Bc251", although later genetic analysis found it to be "B" instead), via Bruce Levin, who had used it in a bacterial ecology experiment in 1972. The defining genetic traits of this strain were: T6r (resistant to phage T6), Strr (resistant to streptomycin), r−m− (lacking restriction-modification), and Ara− (unable to grow on arabinose). Lenski designated the original founding strain as REL606. Before the beginning of the experiment, Lenski isolated an Ara+ variant of the strain in which a point mutation in the ara operon had restored growth on arabinose, which he designated as strain REL607. When beginning the long-term evolution experiment, Lenski founded six populations with six individual Ara− colonies of REL606. These populations are referred to as Ara-1 through Ara-6. Lenski also founded six more populations from six individual Ara+ colonies of REL607. These are referred to as populations Ara+1 through Ara+6. The marker differences permit strains to be differentiated on tetrazolium arabinose plates, on which Ara− colonies appear red, while Ara+ colonies appear white to pink. Over the course of the experiment, each population has accumulated a large number of distinct mutations, which permit further means of identifying strains by their population of origin.
Results
Changes in fitness
Much analysis of the experiment has dealt with how the fitness of the populations relative to their ancestral strain has changed. All populations showed a pattern of rapid increase in relative fitness during early generations, with this increase decelerating over time. By 20,000 generations the populations grew approximately 70% faster than the ancestral strain. This decelerating increase has continued in subsequent generations. A 2013 study by Wiser et al. reported ongoing improvement at 50,000 generations relative to samples isolated at 40,000 generations. They found that the fitness increase fit a power-law model much better than the hyperbolic model that had been used earlier. As a power-law model describes an ever-slowing increase with no upper limit, while a hyperbolic model implies a hard limit, the work suggested that the increase would continue without bound as mutations of progressively smaller benefit were fixed in the populations. Further work published in 2015 reported the results of over 1100 new fitness assays that examined fitness changes through 60,000 generations. The data once again fit the proposed power-law model and, indeed, fit within predictions of the model made from earlier data. These results suggest that, contrary to previous thinking, adaptation and adaptive divergence can potentially increase indefinitely, even in a constant environment.
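For reference, the two functional forms compared by Wiser et al. can be written as follows, where $w$ is mean fitness relative to the ancestor, $t$ is time in generations, and $a$ and $b$ are fitted parameters (the notation here is mine):

```latex
% Hyperbolic model: fitness saturates at the asymptote 1 + a
w(t) = 1 + \frac{a t}{t + b}

% Power-law model: fitness keeps rising ever more slowly, without bound
w(t) = (b t + 1)^{a}
```

Both forms give $w(0) = 1$ and decelerate, but only the hyperbolic form has a ceiling, which is why the better fit of the power law implies open-ended improvement.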
Genome evolution
Of the 12 populations, six have so far been reported to have developed defects in their ability to repair DNA, greatly increasing the rate of mutation in those strains. Although the bacteria in each population are thought to have generated hundreds of millions of mutations over the first 20,000 generations, Lenski has estimated that within this time frame, only 10 to 20 beneficial mutations achieved fixation in each population, with fewer than 100 total point mutations (including neutral mutations) reaching fixation in each population. In 2009, Barrick et al. reported the results of genome sequences from multiple time points in population Ara-1. They found that, unlike the declining rate of fitness improvement, mutation accumulation was linear and clock-like, even though several lines of evidence suggested that much of the accumulation was beneficial, rather than neutral.
Evolution of increased cell size in all twelve populations
All twelve of the experimental populations show an increase in cell size concurrent with a decline in maximum population density, and in many of the populations, a more rounded cell shape. This change was partly the result of a mutation that changed the expression of a gene for a penicillin-binding protein, which allowed the mutant bacteria to outcompete ancestral bacteria under the conditions in the long-term evolution experiment. However, although this mutation increased fitness under these conditions, it also increased the bacteria's sensitivity to osmotic stress and decreased their ability to survive long periods in stationary phase cultures.
Ecological specialization
Over the course of the experiment, the populations have evolved to specialize on the glucose resource on which they grow. This was first described in 2000, when Cooper and Lenski demonstrated that all populations had experienced decay of unused metabolic functions after 20,000 generations, restricting the range of substances on which the bacteria could grow. Their analysis suggested that this decay was due to antagonistic pleiotropy, in which mutations that improved ability to grow on glucose had reduced or eliminated the ability to grow on other substances. A later study by Leiby and Marx that used more advanced techniques showed that much of the decay Cooper and Lenski had identified was an experimental artifact, that the loss of unused functions was not as extensive as first thought, and that some unused functions had improved. Moreover, they concluded that the metabolic losses were not due to antagonistic pleiotropy, but to the neutral accumulation of mutations in unused portions of the genome, suggesting that adaptation to a simple environment might not necessarily lead to specialization.
Evolution of balanced polymorphism and simple ecosystems
Two distinct variants, S and L, were identified in the population designated Ara-2 at 18,000 generations based on their formation of small and large colonies, respectively. Clones of the S and L types could co-exist stably in co-culture with each other, indicating they occupied distinct niches in the population. This was verified by the finding that the L type had an advantage during growth on glucose, but that S had an advantage during stationary phase, after glucose had run out. The two types were found to have initially evolved prior to 6,000 generations, and then co-existed thereafter. Phylogenetic analysis of clones of the two types isolated from different generations demonstrated that the S and L types belonged to distinct, co-existing lineages in the population, and might be undergoing incipient speciation.
Evidence of de novo gene birth
De novo gene birth is the process by which new genes arise by mutations that impact stretches of previously non-coding DNA. However, it is generally difficult to observe instances of gene birth. By analyzing the large collection of whole-genome sequences of E. coli clones sampled from the LTEE populations, a 2024 study discovered several possible instances of gene birth that involved the generation of novel mRNA transcripts and proteins associated with nearby mutations. The functional roles, if any, of these new proto-genes remain unknown.
Evolution of aerobic citrate usage in one population
Background
E. coli is normally unable to grow aerobically on citrate due to the inability to express a citrate transporter when oxygen is present. However, E. coli has a complete citric acid cycle, and therefore metabolizes citrate as an intermediate during aerobic growth on other substances, including glucose. Most E. coli can grow anaerobically on citrate via fermentation, if a co-substrate such as glucose is available to provide reducing power. The anaerobic growth is possible due to the expression of a transmembrane citrate-succinate antiporter gene, citT, which was first identified in 1998. This gene is co-regulated with other genes involved in citrate fermentation found on the cit operon, which is turned on only when oxygen is absent.
The inability to grow aerobically on citrate, referred to as a Cit− phenotype, is considered a defining characteristic of E. coli as a species, and one that has been a valuable means of differentiating E. coli from pathogenic Salmonella. Although Cit+ strains of E. coli have been isolated from environmental and agricultural samples, in every such case, the trait was found to be due to the presence of a plasmid that carries a foreign citrate transporter. A single, spontaneous Cit+ mutant of E. coli was reported by Hall in 1982. This mutant had been isolated during prolonged selection for growth on another novel substance in a growth broth that also contained citrate. Hall's genetic analysis indicated the underlying mutation was complex, but he was ultimately unable to identify the precise changes or genes involved, leading him to hypothesize activation of a cryptic transporter gene. The genome regions to which Hall was able to narrow down the locations of the changes do not correspond to the known location of the citT gene identified 16 years later, nor did the physiological characteristics in transport assays of Hall's Cit+ mutants match those to be expected for aerobic expression of the CitT transporter.
Cit+ evolves in the LTEE
In 2008, Lenski's team, led by Zachary D. Blount, reported that the ability to grow aerobically on citrate had evolved in one population. Around generation 33,127, a dramatic increase in turbidity was observed in the population designated Ara-3. They found that the population contained clones that were able to grow aerobically on citrate (Cit+). This metabolic capacity permitted the population to grow several-fold larger than it had previously, due to the large amount of citrate present in the medium. Examination of frozen fossil samples of the populations showed that Cit+ clones could be isolated as early as 31,500 generations. The Cit+ variants in the population were found to possess a number of genetic markers unique to the Ara-3 population; this observation excluded the possibility that the clones were contaminants, rather than spontaneous mutants. In a series of experiments that "replayed" the tape of Ara-3 evolution from Cit− clones isolated from samples frozen at various time points in the population's history, they demonstrated that the ability to grow aerobically on citrate was more likely to re-evolve in a subset of genetically pure, evolved clones. In these experiments, they observed 19 new, independent instances of Cit+ re-evolution, but only when starting from clones isolated from after generation 20,000. Fluctuation tests showed that clones from this generation and later displayed a rate of mutation to the Cit+ trait which was significantly higher than the ancestral rate. Even in these later clones, the rate of mutation to Cit+ was on the order of one occurrence per trillion cell divisions.
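As a rough illustration of how rare that is at the scale of the experiment (my own back-of-the-envelope numbers, using figures quoted elsewhere in this article):

```python
# Each flask regrows ~100-fold per day, from ~5e6 to ~5e8 cells,
# so on the order of 5e8 cell divisions occur per population per day.
divisions_per_day = 5e8
cit_mutation_rate = 1e-12      # ~one Cit+ mutation per trillion divisions

expected_per_day = divisions_per_day * cit_mutation_rate   # 5e-4
days_per_mutation = 1 / expected_per_day                   # ~2,000 days

print(f"expected Cit+ mutations per population-day: {expected_per_day:.0e}")
print(f"i.e. roughly one every {days_per_mutation:,.0f} days, "
      f"or ~{days_per_mutation * 6.64:,.0f} generations")
```

Even this overstates how quickly the trait should establish, since most new mutants would be lost to the daily dilution or to drift before spreading.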
Lenski and his colleagues concluded that the evolution of the Cit+ function in this one population arose due to one or more earlier, possibly nonadaptive, "potentiating" mutations that increased the rate of mutation to an accessible level. The data suggested that citrate usage involved at least two mutations subsequent to these "potentiating" mutations. More generally, the authors suggest these results indicate, following the argument of Stephen Jay Gould, "that historical contingency can have a profound and lasting impact" on the course of evolution. These findings have come to be considered a significant instance of the impact of historical contingency on evolution.
Genomic analysis of the Cit+ trait and implications for evolutionary innovation
In 2012, Lenski and his team reported the results of a genomic analysis of the Cit+ trait that shed light on the genetic basis and evolutionary history of the trait. The researchers had sequenced the entire genomes of twenty-nine clones isolated from various time points in the Ara-3 population's history. They used these sequences to reconstruct the phylogenetic history of the population; this reconstruction showed that the population had diversified into three clades by 20,000 generations. The Cit+ variants had evolved in one of these, which they called Clade 3. Clones that had been found to be potentiated in earlier research were distributed among all three clades, but were over-represented in Clade 3. This led the researchers to conclude that there had been at least two potentiating mutations involved in Cit+ evolution.
The researchers also found that all Cit+ clones had mutations in which a 2933-base-pair segment of DNA was duplicated or amplified. The duplicated segment contained the gene citT for the citrate transporter protein used in anaerobic growth on citrate. The duplication was tandem, resulting in copies that were head-to-tail with respect to each other. This new configuration placed a copy of the previously silent, unexpressed citT under the control of the adjacent rnk gene's promoter, which directs expression when oxygen is present. This new rnk-citT module produced a novel regulatory pattern for citT, activating expression of the citrate transporter when oxygen was present, and thereby enabled aerobic growth on citrate.
Movement of this rnk-citT module into the genome of a potentiated Cit− clone was shown to be sufficient to produce a Cit+ phenotype. However, the initial Cit+ phenotype conferred by the duplication was very weak, and only granted a ~1% fitness benefit. The researchers found that the number of copies of the rnk-citT module had to be increased to strengthen the Cit+ trait sufficiently to permit the bacteria to grow well on citrate. Further mutations that improved growth on citrate continued to accumulate after the Cit+ bacteria became dominant in the population.
The researchers concluded that the evolution of the Cit+ trait occurred in three distinct phases: (1) mutations accumulated that increased the rate of mutation to Cit+, (2) the trait itself appeared in a weak form, and (3) the trait was improved by later mutations. Blount et al. suggested that this pattern might be typical of how novel traits in general evolve, and proposed a three-step model of evolutionary innovation:
Potentiation: a genetic background evolves in which a trait is mutationally accessible, making the trait's evolution possible.
Actualization: a mutation occurs that produces the trait, making it manifest, albeit likely in a weak form.
Refinement: Once the trait exists, if it provides selective benefit, mutations will accumulate that improve the trait, making it effective. This phase is open-ended, and will continue so long as refining mutations arise and the trait remains beneficial.
This model has seen acceptance in evolutionary biology. In 2015, paleontologist Douglas Erwin suggested modifying it into a four-step model to better reflect a possible distinction between evolutionary novelty and evolutionary innovation, and to highlight the importance of environmental conditions: potentiation, generation of novel phenotypes (actualization), adaptive refinement, and exploitation (conversion of a novelty into an innovation as it becomes important for the ecological establishment of the organisms possessing it).
Investigation of potentiation
In 2014, a research team led by Eric Quandt in the lab of Jeffrey Barrick at the University of Texas at Austin described the application of a new technique called Recursive Genomewide Recombination and Sequencing (REGRES) to identify potentiating mutations among the 70 present in the Ara-3 lineage that evolved Cit+. This method used multiple rounds of F-plasmid-based conjugation between a 33,000-generation Cit+ clone, CZB154, and the Cit− founding clone of the LTEE to purge mutations not required for manifestation of either the weak or the strong form of the Cit+ trait, the latter referred to as Cit++. They found that the rnk-citT module responsible for the phenotypic switch to Cit+ was sufficient to produce a weak Cit+ phenotype in the ancestor. They also identified a mutation that had occurred in the lineage leading to CZB154 after the initial evolution of Cit+ that conferred a strong Cit++ phenotype in the ancestor in the absence of any mutation other than the rnk-citT module. This mutation, found in the regulatory region of a gene called dctA, caused a massive increase in the expression of the DctA transporter, which functions to import C4-dicarboxylates into the cell. This increased DctA expression, they found, permitted Cit+ cells to re-uptake succinate, malate, and fumarate released into the medium by the CitT transporter during import of citrate. They identified a similar mutation in Cit++ clones in the Ara-3 population that increased DctA expression by restoring function to a gene that regulates it, dcuS, which had been deactivated in the ancestral clone. Quandt et al. concluded that the dctA mutation was involved not in potentiation, but in refinement. This led them to suggest that evolution of Cit+ in the Ara-3 population might have been contingent upon a genetic background and population-specific ecology that permitted the early, weak Cit+ variants to persist in the population long enough for refining mutations to arise and render growth on citrate strong enough to provide a significant fitness benefit.
Quandt and colleagues later published findings definitively identifying a mutation that did potentiate Cit+ evolution. This mutation was in the gltA gene, which encodes citrate synthase, an enzyme involved in the flow of carbon into the citric acid cycle. It had the effect of increasing citrate synthase activity, and they showed that it permitted improved growth on acetate. Moreover, with the gltA mutation, the rnk-citT module that causes the Cit+ trait had a neutral to slightly beneficial fitness effect, while without it the module was strongly detrimental. The gltA mutation therefore seems to have permitted early, weak Cit+ variants to persist in the population until later refining mutations could occur, consistent with their earlier conclusions. After a strong Cit++ phenotype evolved, the increased citrate synthase activity became detrimental. The researchers found that later mutations in gltA countered the first mutation, reducing citrate synthase activity and further improving growth on citrate. They concluded that the series of mutations in gltA first potentiated, and then refined, growth on citrate. They also suggested that the lineage in which Cit+ arose might have occupied a niche in Ara-3 based on growth on acetate, and that the potentiating mutations that led to evolution of Cit+ in Ara-3 were originally adaptive for acetate use.
Investigation of post-Cit+ ecology and persistent diversity
A small subpopulation of Cit− cells, unable to grow on citrate and belonging to a separate clade, persisted in the population after the Cit+ cells became dominant. Early findings showed that this diversity was partly due to the Cit− cells being better at growing on the glucose in the medium. Turner et al. later found that another factor behind the coexistence was that the Cit− cells evolved the ability to cross-feed on the Cit+ majority. They showed that the Cit+ cells release succinate, malate, and fumarate during growth on citrate, as the CitT transporter pumps these substances out of the cell while pumping citrate into the cell. The Cit− cells had rapidly evolved the ability to grow on these substances due to a mutation that restored expression of an appropriate transporter protein that was silent in the ancestor.
The Cit− subpopulation eventually went extinct in the population between 43,500 and 44,000 generations. This extinction was shown not to be due to the Cit+ majority evolving the ability to invade the niche occupied by the Cit− minority. Indeed, Cit− clones could still invade Cit+ populations sampled from after the extinction event. Moreover, in an experiment in which they restarted twenty replicates of the Ara-3 population from the sample frozen 500 generations before the extinction, Turner et al. found that the Cit− subpopulation had not gone extinct in any of the replicates after 500 generations of evolution. One of these replicates was continued for 2,500 generations, over which Cit− continued to coexist. The researchers concluded that the extinction of Cit− had been due to some unknown "rare environmental perturbation", similar to those that can impact natural populations. The final replicate was integrated into the main LTEE experiment, becoming the thirteenth population, Ara-7.
Various interpretations of the findings
Barry Hall had already isolated a mutant strain of aerobic citrate-utilizing E. coli in 1982, and he attributed it to two mutations in genes citA and citB, which are linked to the gal operon. Some have contrasted Hall's findings as unintended "direct selection" for Cit+ mutants and Lenski's findings as an unintended genetic "screen" for Cit+ mutants. Other researchers have experimented on evolving aerobic citrate-utilizing E. coli. Dustin Van Hofwegen et al. were able to isolate 46 independent citrate-utilizing mutants of E. coli in just 12 to 100 generations using highly prolonged selection under starvation, during which the bacteria would sample more mutations more rapidly. In their research, genomic DNA sequencing revealed amplification of the citT and dctA loci and rearrangement of DNA, the same classes of mutations identified in the experiment by Richard Lenski and his team. They concluded that the rarity of the citrate-utilizing mutant in Lenski's research was likely a result of the selective experimental conditions used by his team rather than being a unique evolutionary speciation event.
John Roth and Sophie Maisnier-Patin reviewed the approaches of both the Lenski team's delayed mutations and the Van Hofwegen team's rapid mutations in E. coli. They argue that both teams observed the same sequence of potentiation, actualization, and refinement leading up to similar Cit+ variants. According to them, in Lenski's design, citrate usage was under selection for less than a day, followed by a 100-fold dilution and a period of growth on glucose that did not select for citrate use; this ultimately lowered the probability of E. coli being able to accumulate early adaptive mutations from one period of selection to the next. In contrast, Van Hofwegen's team allowed a continuous selection period of 7 days, which yielded a more rapid development of citrate-using E. coli. Roth and Maisnier-Patin suggest that the serial dilution of E. coli and the short period of selection for citrate use under the conditions of the LTEE perpetually impeded each generation of E. coli from reaching the next stages of aerobic citrate utilization.
Lenski argues that the problem is not with the experiments or the data, but with the interpretations made by Van Hofwegen et al. and by Maisnier-Patin and Roth. According to him, the rapid evolution of Cit+ was not necessarily unexpected, since his team was also able to produce multiple Cit+ mutants in a few weeks during the replay experiments. He argues that the LTEE was not designed to isolate citrate-using mutants or to deal with speciation, which is a process, not an event. Furthermore, he argues that the evolution of Cit+ in the LTEE was contingent upon mutations that had accumulated earlier.
See also
Experimental evolution
Long-term experiment
References
Further reading
External links
E. coli Long-term Experimental Evolution Project Site
Bacteria make major evolutionary shift in the lab, Bob Holmes, New Scientist, 9 June 2008
Evolution: Past, Present and Future Richard Lenski
List of publications on the experiment
Online Publication of paper on Rapid evolution of citrate utilization
1988 in biology
Biology experiments
Escherichia coli
Evolutionary biology
Molecular evolution | E. coli long-term evolution experiment | [
"Chemistry",
"Biology"
] | 6,218 | [
"Evolutionary biology",
"Evolutionary processes",
"Molecular evolution",
"Model organisms",
"Molecular biology",
"Escherichia coli"
] |
18,001,499 | https://en.wikipedia.org/wiki/Bayesian%20efficiency | Bayesian efficiency is an analog of Pareto efficiency for situations in which there is incomplete information. Under Pareto efficiency, an allocation of a resource is Pareto efficient if there is no other allocation of that resource that makes no one worse off while making some agents strictly better off. A limitation of the concept of Pareto efficiency is that it assumes that knowledge about other market participants is available to all participants, in that every player knows the payoffs and strategies available to the other players, so as to have complete information. Often, the players have types that are hidden from the other players.
Overview
The lack of complete information raises the question of when the efficiency calculation should be made. Should the efficiency check be made at the ex ante stage, before the agents see their types; at the interim stage, after the agents see their own types; or at the ex post stage, when the agents have complete information about everyone's types? Another issue is incentive. If a resource allocation rule is efficient but there is no incentive to abide by or accept that rule, then the revelation principle asserts that there is no mechanism by which this allocation rule can be realized.
Bayesian efficiency overcomes these problems of Pareto efficiency by accounting for incomplete information, by addressing the timing of the evaluation (ex ante efficient, interim efficient, or ex post efficient), and by adding an incentive qualifier so that the allocation rule is incentive compatible.
Bayesian efficiency separately defines three types of efficiency: ex ante, interim, and ex post. Let $N$ be the set of agents, let $\theta = (\theta_1, \dots, \theta_n)$ denote a profile of the agents' types, let $u_i$ be agent $i$'s utility function, and take expectations with respect to the agents' beliefs about types. For an allocation rule $q$:
Ex ante efficiency: $q$ is incentive compatible, and there exists no incentive compatible allocation rule $\hat{q}$ such that
$\mathbb{E}_{\theta}\left[u_i(\hat{q}(\theta), \theta)\right] \geq \mathbb{E}_{\theta}\left[u_i(q(\theta), \theta)\right]$ for all $i \in N$, with strict inequality for some $i$.
Interim efficiency: $q$ is incentive compatible, and there exists no incentive compatible allocation rule $\hat{q}$ such that
$\mathbb{E}_{\theta_{-i}}\left[u_i(\hat{q}(\theta), \theta) \mid \theta_i\right] \geq \mathbb{E}_{\theta_{-i}}\left[u_i(q(\theta), \theta) \mid \theta_i\right]$ for all $i \in N$ and all types $\theta_i$, with strict inequality for some $i$ and $\theta_i$.
Ex post efficiency: $q$ is incentive compatible, and there exists no incentive compatible allocation rule $\hat{q}$ such that
$u_i(\hat{q}(\theta), \theta) \geq u_i(q(\theta), \theta)$ for all $i \in N$ and all type profiles $\theta$, with strict inequality for some $i$ and $\theta$.
An ex ante efficient allocation is always interim and ex post efficient, and an interim efficient allocation is always ex post efficient.
References
Bayesian statistics
Pareto efficiency
Game theory
Law and economics
Mathematical optimization
Optimal decisions
Electoral system criteria
Welfare economics | Bayesian efficiency | [
"Mathematics"
] | 448 | [
"Mathematical optimization",
"Mathematical analysis",
"Game theory"
] |
1,711,063 | https://en.wikipedia.org/wiki/Aerodynamic%20heating | Aerodynamic heating is the heating of a solid body produced by its high-speed passage through air. In science and engineering, an understanding of aerodynamic heating is necessary for predicting the behaviour of meteoroids which enter the Earth's atmosphere, to ensure spacecraft safely survive atmospheric reentry, and for the design of high-speed aircraft and missiles.
"For high speed aircraft and missiles aerodynamic heating is the conversion of kinetic energy into heat energy as a result of their relative motion in stationary air and the subsequent transfer through the skin into the structure and interior of the vehicle. Some heat is produced by fluid compression at and near stagnation points such as the vehicle nose and wing leading edges. Additional heat is generated from air friction along the skin inside the boundary layer". These two regions of skin heating are shown by van Driest. Boundary layer heating of the skin may be known as kinetic heating.
The effect of skin heating on aircraft wing design
The effects of aerodynamic heating on the temperature of the skin, and subsequent heat transfer into the structure, the cabin, the equipment bays and the electrical, hydraulic and fuel systems, have to be incorporated in the design of supersonic and hypersonic aircraft and missiles.
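The skin temperatures driving these design problems can be estimated from standard compressible-flow relations (textbook formulas, not drawn from the sources cited here). For a free stream at temperature $T_\infty$ and Mach number $M$:

```latex
% Stagnation temperature: air brought to rest, e.g. at the nose
\frac{T_0}{T_\infty} = 1 + \frac{\gamma - 1}{2} M^2

% Adiabatic wall ("recovery") temperature inside the boundary layer
T_{aw} = T_\infty \left( 1 + r \, \frac{\gamma - 1}{2} M^2 \right)
```

Here $\gamma \approx 1.4$ for air, and the recovery factor $r$ (below 1, roughly 0.85 to 0.9 for turbulent boundary layers) reflects that friction heating along the skin recovers somewhat less than the full stagnation temperature.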
One of the main concerns caused by aerodynamic heating arises in the design of the wing. For subsonic speeds, two main goals of wing design are minimizing weight and maximizing strength. Aerodynamic heating, which occurs at supersonic and hypersonic speeds, adds an additional consideration in wing structure analysis. An idealized wing structure is made up of spars, stringers, and skin segments. In a wing that normally experiences subsonic speeds, there must be a sufficient number of stringers to withstand the axial and bending stresses induced by the lift force acting on the wing. In addition, the distance between the stringers must be small enough that the skin panels do not buckle, and the panels must be thick enough to withstand the shear stress and shear flow present in the panels due to the lifting force on the wing. However, the weight of the wing must be made as small as possible, so the choice of material for the stringers and the skin is an important factor.
At supersonic speeds, aerodynamic heating adds another element to this structural analysis. At normal speeds, spars and stringers experience a load which is a function of the lift force, first and second moments of inertia, and length of the spar. When there are more spars and stringers, the load in each member is reduced, and the area of the stringer can be reduced to meet critical stress requirements. However, the increase in temperature caused by energy flowing from the air (heated by skin friction at these high speeds) adds another load factor, called a thermal load, to the spars. This thermal load increases the force felt by the stringers, and thus the area of the stringers must be increased in order for the critical stress requirement to be met.
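A minimal idealization (my own, not taken from the design references) shows how a temperature rise turns into a structural load: a member at uniform temperature rise $\Delta T$ that is fully restrained from expanding carries a thermal stress

```latex
\sigma_{\text{th}} = E \, \alpha \, \Delta T
```

where $E$ is the Young's modulus and $\alpha$ is the coefficient of thermal expansion. In a real wing the skin, spars, and stringers heat at different rates, so partially restrained expansion superimposes stresses of this order on the lift-induced stresses, which is why the stringer areas must grow.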
Another issue that aerodynamic heating causes for aircraft design is the effect of high temperatures on common material properties. Common materials used in aircraft wing design, such as aluminum and steel, experience a decrease in strength as temperatures get extremely high. The Young's modulus of the material, defined as the ratio between stress and strain experienced by the material, decreases as the temperature increases. Young's modulus is critical in the selection of materials for the wing, as a higher value lets the material resist the yield and shear stress caused by the lift and thermal loads. This is because Young's modulus is an important factor in the equations for calculating the critical buckling load for axial members and the critical buckling shear stress for skin panels. If the Young's modulus of the material decreases at the high temperatures caused by aerodynamic heating, then the wing design will call for larger spars and thicker skin segments in order to account for this decrease in strength as the aircraft goes supersonic. There are some materials that retain their strength at the high temperatures that aerodynamic heating induces. For example, Inconel X-750 was used on parts of the airframe of the North American X-15, which flew at hypersonic speeds in the early 1960s. Titanium is another high-strength material, even at high temperatures, and is often used for wing frames of supersonic aircraft. The SR-71 used titanium skin panels painted black to reduce the temperature and corrugated to accommodate expansion. Another important design concept for early supersonic aircraft wings was using a small thickness-to-chord ratio, so that the speed of the flow over the airfoil does not increase too much from the free stream speed. As the flow is already supersonic, increasing the speed even more would not be beneficial for the wing structure. Reducing the thickness of the wing brings the top and bottom stringers closer together, reducing the total moment of inertia of the structure. This increases the axial load in the stringers, and thus the area, and weight, of the stringers must be increased. Some designs for hypersonic missiles have used liquid cooling of the leading edges (usually the fuel en route to the engine). The Sprint missile's heat shield needed several design iterations for Mach 10 temperatures.
Reentry vehicles
Heating caused by the very high reentry speeds (greater than Mach 20) is sufficient to destroy the vehicle unless special techniques are used. The early space capsules, such as those used on Mercury, Gemini, and Apollo, were given blunt shapes to produce a stand-off bow shock, allowing most of the heat to dissipate into the surrounding air. Additionally, these vehicles had ablative material that sublimates into a gas at high temperature. The act of sublimation absorbs the thermal energy from the aerodynamic heating and erodes the material rather than heating the capsule. The surface of the heat shield for the Mercury spacecraft had a coating of aluminium with glassfiber in many layers. As the temperature rose, the layers would evaporate and carry the heat away with them. The spacecraft would become hot but not harmfully so. The Space Shuttle used insulating tiles on its lower surface to absorb and radiate heat while preventing conduction to the aluminium airframe. Damage to the heat shield during liftoff of Space Shuttle Columbia contributed to its destruction upon reentry.
See also
Thermal velocity
References
Further reading
Moore, F.G., Approximate Methods for Weapon Aerodynamics, AIAA Progress in Astronautics and Aeronautics, Volume 186
Chapman, A.J., Heat Transfer, Third Edition, Macmillan Publishing Company, 1974
Bell Laboratories R&D, ABM Research and Development At Bell Laboratories, 1974. Stanley R. Mickelsen Safeguard Complex
Atmospheric entry
Heat transfer
Aerospace engineering
Aerodynamics | Aerodynamic heating | [
"Physics",
"Chemistry",
"Engineering"
] | 1,352 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Aerodynamics",
"Atmospheric entry",
"Thermodynamics",
"Aerospace engineering",
"Fluid dynamics"
] |
1,711,423 | https://en.wikipedia.org/wiki/Wurtz%20reaction | In organic chemistry, the Wurtz reaction, named after Charles Adolphe Wurtz, is a coupling reaction in which two alkyl halides are treated with sodium metal to form a higher alkane.
2 R−X + 2 Na → R−R + 2 NaX
The reaction is of little value except for intramolecular versions, such as 1,6-dibromohexane + 2 Na → cyclohexane + 2 NaBr.
A related reaction, which combines alkyl halides with aryl halides is called the Wurtz–Fittig reaction. Despite its very modest utility, the Wurtz reaction is widely cited as representative of reductive coupling.
Mechanism
The reaction proceeds by an initial metal–halogen exchange, which is described with the following idealized stoichiometry:
R−X + 2 M → RM + MX
This step may involve the intermediacy of radical species R·. The conversion resembles the formation of a Grignard reagent. The RM intermediates have been isolated in several cases. The radical is susceptible to diverse reactions. The organometallic intermediate (RM) next reacts with the alkyl halide (RX) forming a new carbon–carbon covalent bond.
RM + RX → R−R + MX
The process resembles an SN2 reaction, but the mechanism is probably complex.
Examples and reaction conditions
The reaction is intolerant of many functional groups which would be attacked by sodium. For similar reasons, the reaction is conducted in unreactive solvents such as ethers. In efforts to improve the reaction yields, other metals have also been tested to effect Wurtz-like couplings: silver, zinc, iron, activated copper, and indium, as well as a mixture of manganese and copper chloride.
Wurtz coupling is useful in closing small, especially three-membered, rings. In the cases of 1,3-, 1,4-, 1,5-, and 1,6-dihalides, Wurtz-reaction conditions lead to formation of cyclic products, although yields are variable. Under Wurtz conditions, vicinal dihalides yield alkenes, whereas geminal dihalides convert to alkynes. Bicyclobutane was prepared this way from 1-bromo-3-chlorocyclobutane in 95% yield. The reaction is conducted in refluxing dioxane, at which temperature the sodium is liquid.
Extensions to main group compounds
Although the Wurtz reaction is only of limited value in organic synthesis, analogous couplings are useful for coupling main group halides. Hexamethyldisilane arises efficiently by treatment of trimethylsilyl chloride with sodium:
2 Me3SiCl + 2 Na → Me3Si−SiMe3 + 2 NaCl
Tetraphenyldiphosphine is prepared analogously from chlorodiphenylphosphine:
2 Ph2PCl + 2 Na → Ph2P−PPh2 + 2 NaCl
Similar couplings have been applied to many main group halides. When applied to main group dihalides, rings and polymers result. Polysilanes and polystannanes are produced in this way.
See also
Wurtz–Fittig reaction
Ullmann reaction
Further reading
Organic Chemistry Portal, organic-chemistry.org
Organic Chemistry, by Morrison and Boyd
Organic Chemistry, by Graham Solomons and Craig Fryhle, Wiley Publications
References
Condensation reactions
Carbon-carbon bond forming reactions
Name reactions | Wurtz reaction | [
"Chemistry"
] | 693 | [
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions",
"Condensation reactions"
] |
1,711,465 | https://en.wikipedia.org/wiki/Coupling%20reaction | In organic chemistry, a coupling reaction is a type of reaction in which two reactant molecules are bonded together. Such reactions often require the aid of a metal catalyst. In one important reaction type, a main group organometallic compound of the type R-M (where R = organic group, M = main group centre metal atom) reacts with an organic halide of the type R'-X with formation of a new carbon-carbon bond in the product R-R'. The most common type of coupling reaction is the cross coupling reaction.
Richard F. Heck, Ei-ichi Negishi, and Akira Suzuki were awarded the 2010 Nobel Prize in Chemistry for developing palladium-catalyzed cross coupling reactions.
Broadly speaking, two types of coupling reactions are recognized:
Homocouplings joining two identical partners. The product is symmetrical (R−R).
Heterocouplings joining two different partners. These reactions are also called cross-coupling reactions. The product is unsymmetrical (R−R′).
Homo-coupling types
Coupling reactions are illustrated by the Ullmann reaction, a copper-mediated homocoupling of aryl halides:
2 Ar−X + 2 Cu → Ar−Ar + 2 CuX
Cross-coupling types
Applications
Coupling reactions are routinely employed in the preparation of pharmaceuticals. Conjugated polymers are prepared using this technology as well.
References
Organometallic chemistry
Carbon-carbon bond forming reactions
Catalysis | Coupling reaction | [
"Chemistry"
] | 263 | [
"Catalysis",
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Chemical kinetics",
"Organometallic chemistry"
] |
1,711,590 | https://en.wikipedia.org/wiki/National%20Renewable%20Energy%20Laboratory | The National Renewable Energy Laboratory (NREL) in the US specializes in the research and development of renewable energy, energy efficiency, energy systems integration, and sustainable transportation. NREL is a federally funded research and development center sponsored by the Department of Energy and operated by the Alliance for Sustainable Energy, a joint venture between MRIGlobal and Battelle. Located in Golden, Colorado, NREL is home to the National Center for Photovoltaics, the National Bioenergy Center, and the National Wind Technology Center.
History
Establishment
During the 1973 oil crisis, energy prices skyrocketed, causing widespread problems in other areas such as food prices and inflation. President Gerald Ford openly recognized the issue at the 1974 World Energy Conference in Detroit. A month later, the Solar Energy Research, Development and Demonstration Act of 1974 was signed. Section 10 of the bill explicitly outlined the establishment of the Solar Energy Research Institute (SERI), which opened in 1977 and was operated by Midwest Research Institute. Paul Rappaport was the founding director. It was the first time a national-scale effort had been made to advance solar power.
Before 1991
SERI's activities went beyond research and development in solar energy, as it tried to popularize knowledge about existing technologies such as biomass conversion, passive solar design, and energy storage. In its first year, thin-film solar cells achieved 10% efficiency. The next year, the Jimmy Carter administration passed the Solar Photovoltaic Energy Research, Development, and Demonstration Act of 1978. However, by 1978 the national effort for an alternative energy source had turned towards nuclear energy. Then the Three Mile Island accident occurred, and the push for clean energy was renewed.
1991 - present
In September 1991, the institute was designated a national laboratory of the U.S. Department of Energy by President George H.W. Bush, and its name was changed to the National Renewable Energy Laboratory.
Renewed interest in energy problems improved the laboratory's position, but funding has fluctuated over the years. In 2011, anticipated congressional budget shortfalls led to a voluntary buyout program for 100 to 150 staff reductions, and in 2015 budget cuts led to staff layoffs and further buyouts.
Martin Keller became NREL's ninth director in November 2015, and currently serves as both the director of the laboratory and the president of its operating contractor, Alliance for Sustainable Energy, LLC. He succeeded Dan Arvizu, who retired in September 2015 after 10 years in those roles.
Department of Energy funding
In fiscal year 2020, congressional appropriations for the Department of Energy contained $464.3 million for NREL. This total included the following amounts for its renewable energy technology programs:
Solar energy: $122.4 million
Wind power: $30.0 million
Bioenergy: $56.3 million
Hydrogen and fuel cells: $17.6 million
Geothermal: $1.8 million
Water power: $15.8 million
Commercialization and technology transfer
The National Renewable Energy Laboratory (NREL) engages in technology transfer, working with private sector partners to facilitate the application of research in renewable energy and energy efficiency technologies in practical settings.
In recognition of its efforts in innovation and technology transfer, NREL has received numerous R&D 100 Awards. These awards acknowledge advancements in scientific research with potential market applications. Additionally, NREL offers an external user access program. This program is designed to enable researchers from outside the laboratory to utilize the Energy Systems Integration Facility (ESIF), providing them with an opportunity to collaborate with NREL’s staff in the development and evaluation of energy technologies.
National Center for Photovoltaics
The goal of the photovoltaics (PV) research done at NREL is to decrease the "nation's reliance on fossil-fuel generated electricity by lowering the cost of delivered electricity and improving the efficiency of PV modules and systems."
Photovoltaic research at NREL is performed under the National Center for Photovoltaics (NCPV). A primary mission of the NCPV is to support ongoing efforts of the DOE's SunShot Initiative, which aims to make solar power cost-competitive with other energy sources. The NCPV coordinates its research and goals with researchers from across the country, including the Quantum Energy and Sustainable Solar Technologies (QESST) Center and the Bay Area PV Consortium. The NCPV also partners with many universities and other industry partners. NREL brings in dozens of students annually through the Solar University-National lab Ultra-effective Program (SUN UP), which was created to facilitate existing and new interactions between universities and the laboratory.
The lab maintains a number of research partnerships for PV research.
Research and development
Some of the areas of PV R&D include the physical properties of PV panels, performance and reliability of PV, junction formation, and research into photo-electrochemical materials.
Through this research, NREL hopes to surpass current technologies in efficiency and cost-competitiveness and reach the overall goal of generating electricity at $0.06/kWh for grid-tied PV systems.
NREL identifies the following as cornerstones to its PV R&D program: the Thin-Film Partnership and the PV Manufacturing R&D Project.
The Thin Film Partnership Program at NREL coordinates national research teams of manufacturers, academics, and NREL scientists on a variety of subjects relating to thin-film PV. The research areas of the Thin Film Partnership Program include amorphous silicon (a-Si), copper indium diselenide (CuInSe2 or CIGS), cadmium telluride (CdTe), and module reliability.
NREL's PV Manufacturing Research and Development Project is an ongoing partnership between NREL and private sector solar manufacturing companies. It started in 1991 as the Photovoltaic Manufacturing Technology (PVMaT) project and was extended and renamed in 2001 due to its success as a project. The overall goal of research done under the PV Manufacturing R&D Project is to help maintain a strong market position for US solar companies by researching ways to reduce costs to manufacturers and customers and improving the manufacturing process. It is estimated that the project has helped to reduce manufacturing cost for PV panels by more than 50%.
Examples of achievements under the PV Manufacturing Research and Development Project include the development of a manufacturing process that increased the production of silicon solar modules by 8% without increasing costs, and the development of a new boron coating process that reduces solar costs over traditional processes.
Testing
NREL is capable of providing testing and evaluation to the PV industry with indoor, outdoor, and field testing facilities. NREL is able to provide testing on long-term performance, reliability, and component failure for PV systems. NREL also has accelerated testing capabilities from both PV cells and system components to identify areas of potential long-term degradation and failure. The Photovoltaic Device Performance group at NREL is able to measure the performance of PV cells and modules with regard to a standard or customized reference set. This allows NREL to serve as independent facility for verifying device performance. NREL allows industry members to test and evaluate potential products, with the hope that it will lead to more cost effective and reliable technology. The overall goal is to help improve the reliability in the PV industry.
Deployment
NREL also seeks to raise public awareness of PV technologies through its deployment services. NREL provides a number of technical and non-technical publications intended to help raise consumer awareness and understanding of solar PV. Scientists at NREL perform research into energy markets and how to develop the solar energy market. They also perform research and outreach in the area of building-integrated PV. NREL is also an active organizer and sponsor in the DOE's Solar Decathlon.
NREL provides information on solar energy, beyond the scientific papers on research done at the lab. The lab provides publications on solar resources and manuals on different applications of solar technology, as well as a number of different solar resource models and tools. The lab also makes available a number of different solar resource data sets in its Renewable Resource Data Center.
Facilities
NREL's Golden, Colorado campus houses several facilities dedicated to PV and biomass research. In the recently opened Science and Technology Facility, research is conducted on solar cells, thin films, and nanostructure research. NREL's Outdoor Test Facility allows researchers to test and evaluate PV technologies under a range of conditions, both indoor and outdoor. Scientists at NREL work at the Outdoor Test Facility to develop standards for testing PV technologies. At the Outdoor Test Facility NREL researchers calibrate primary reference cells for use in a range of applications. One of the main buildings for PV research at NREL is the Solar Energy Research Facility (SERF). Examples of research conducted at the SERF include semiconductor material research, prototype solar cell production, and measurement and characterization of solar cell and module performance. Additionally, the roof at the SERF is able to house ten PV panels to evaluate and analyze the performance of commercial building-integrated PV systems. Additionally, R&D in PV materials and devices, measurement and characterization, reliability testing are also conducted at the SERF. At the Solar Radiation Research Laboratory, NREL has been measuring solar radiation and meteorological data since 1984.
National Bioenergy Center
The National Bioenergy Center (NBC) was established in October 2000. "The National Bioenergy Center is composed of four technical groups and a technical lead for partnership development with industry. Partnership development includes work performed at NREL under Cooperative Research and Development Agreements (CRADA), Technical Service Agreements (TSA), Analytical Service Agreements (ASA), and Work for Others (WFO) contract research for DOE's industry partners."
The main focus of the research is to convert biomass into biofuels/biochemical intermediates via both biochemical and thermochemical processes.
The National Bioenergy Center is currently divided into certain technology and research areas:
Applied Science
Catalysis and Thermochemical Sciences and Engineering R&D
Biochemical Process R&D
Biorefinery Analysis
Some of the current projects are in the following areas:
Biomass characteristics
Biochemical conversion
Thermochemical conversion
Chemical and catalyst science
Integrated biorefinery processes
Microalgal biofuels
Biomass process and sustainability analysis
The Integrated Biorefinery Research Facility (IBRF) houses multiple pilot-scale process trains for converting biomass to various liquid fuels at a rate of 450–900 kg (0.5–1 ton) per day of dry biomass. Unit operations include feedstock washing and milling, pretreatment, enzymatic hydrolysis, fermentation, distillation, and solid-liquid separation. The heart of the Thermochemical Users Facility (TCUF) is the 0.5-metric-ton-per-day Thermochemical Process Development Unit (TCPDU), which can be operated in either a pyrolysis or gasification mode.
National Wind Technology Center
NREL has produced many technologies that impact the wind industry at a global level. The National Wind Technology Center (NWTC) is home to 20 patents and has created software such as FAST, a simulation tool used to model wind turbines.
The NWTC is located on NREL's Flatirons Campus, which is at the base of the foothills just south of Boulder, Colorado. The campus comprises field test sites, test laboratories, industrial high-bay work areas, machine shops, electronics and instrumentation laboratories, and office areas.
The NWTC is also home to NREL's Distributed Energy Resources Test Facility (DERTF). The DERTF is a working laboratory for interconnection and systems integration testing. This facility includes generation, storage, and interconnection technologies as well as electric power system equipment capable of simulating a real-world electric system.
The center is the first facility in the United States with a controllable grid interface test system that has fault simulation capabilities and allows manufacturers and system operators to conduct the tests required for certification in a controlled laboratory environment. It is the only system in the world that is fully integrated with two dynamometers and has the capacity to extend that integration to turbines in the field and to a matrix of electronic and mechanical storage devices, all of which are located within close proximity on the same site.
Sustainable transportation and mobility research
NREL pioneers world-class research accelerating the development of sustainable mobility technologies and strategies for passenger and freight transportation, with a focus on decarbonizing the transportation sector and combating climate change. The only national laboratory solely dedicated to energy efficiency and renewable energy, NREL helps its industry partners create innovative components, fuels, infrastructure, and integrated systems for battery electric, fuel cell, and other alternative fuel on-road, off-road, and non-road vehicles, including emerging technologies for aviation, rail, and marine applications.
NREL's integrated modeling and analysis tools help overcome technical barriers and accelerate the development of advanced transportation technologies and systems that maximize energy savings and on-road performance.
Transportation and mobility research areas
Commercial vehicle technologies
Transportation decarbonization
Electric vehicle grid integration
Energy storage
Fuels and combustion
Intelligent vehicle energy analysis
Mobility behavioral science
Power electronics and electric machines
Sustainable aviation
Sustainable mobility
Vehicle technology integration
Vehicle thermal management
See also
List of renewable energy organizations
Renewable energy
Renewable energy commercialization in the United States
Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS), software published by NREL
Notes
References
Further reading
External links
Energy research institutes
Renewable energy organizations based in the United States
United States Department of Energy national laboratories
Federally Funded Research and Development Centers
Buildings and structures in Golden, Colorado
Research institutes in Colorado
Battelle Memorial Institute
Golden, Colorado
1974 establishments in Colorado | National Renewable Energy Laboratory | [
"Engineering"
] | 2,845 | [
"Energy research institutes",
"Energy organizations"
] |
1,714,290 | https://en.wikipedia.org/wiki/Neighbourhood%20system | In topology and related areas of mathematics, the neighbourhood system, complete system of neighbourhoods, or neighbourhood filter $\mathcal{N}(x)$ for a point $x$ in a topological space is the collection of all neighbourhoods of $x$.
Definitions
Neighbourhood of a point or set
An open neighbourhood of a point (or subset) $x$ in a topological space $X$ is any open subset $U$ of $X$ that contains $x$.
A neighbourhood of $x$ in $X$ is any subset $N \subseteq X$ that contains some open neighbourhood of $x$;
explicitly, $N$ is a neighbourhood of $x$ in $X$ if and only if there exists some open subset $U$ with $x \in U \subseteq N$.
Equivalently, a neighborhood of $x$ is any set that contains $x$ in its topological interior.
Importantly, a "neighbourhood" does not have to be an open set; those neighbourhoods that also happen to be open sets are known as "open neighbourhoods."
Similarly, a neighbourhood that is also a closed (respectively, compact, connected, etc.) set is called a closed neighbourhood (respectively, a compact neighbourhood, a connected neighbourhood, etc.).
There are many other types of neighbourhoods that are used in topology and related fields like functional analysis.
The family of all neighbourhoods having a certain "useful" property often forms a neighbourhood basis, although many times, these neighbourhoods are not necessarily open. Locally compact spaces, for example, are those spaces that, at every point, have a neighbourhood basis consisting entirely of compact sets.
Neighbourhood filter
The neighbourhood system for a point (or non-empty subset) $x$ is a filter $\mathcal{N}(x)$ called the neighbourhood filter for $x$. The neighbourhood filter for a point $x \in X$ is the same as the neighbourhood filter of the singleton set $\{x\}$.
Neighbourhood basis
A neighbourhood basis or local basis (also neighbourhood base or local base) for a point $x$ is a filter base of the neighbourhood filter; this means that it is a subset $\mathcal{B} \subseteq \mathcal{N}(x)$
such that for all $V \in \mathcal{N}(x)$ there exists some $B \in \mathcal{B}$ such that $B \subseteq V$.
That is, for any neighbourhood $V$ we can find a neighbourhood $B$ in the neighbourhood basis that is contained in $V$.
Equivalently, $\mathcal{B}$ is a local basis at $x$ if and only if the neighbourhood filter $\mathcal{N}(x)$ can be recovered from $\mathcal{B}$ in the sense that the following equality holds:
$\mathcal{N}(x) = \left\{ V \subseteq X : B \subseteq V \text{ for some } B \in \mathcal{B} \right\}.$
A family $\mathcal{B} \subseteq \mathcal{N}(x)$ is a neighbourhood basis for $x$ if and only if $\mathcal{B}$ is a cofinal subset of $\left(\mathcal{N}(x), \supseteq\right)$ with respect to the partial order $\supseteq$ (importantly, this partial order is the superset relation and not the subset relation).
Neighbourhood subbasis
A neighbourhood subbasis at $x$ is a family $\mathcal{S}$ of subsets of $X$, each of which contains $x$, such that the collection of all possible finite intersections of elements of $\mathcal{S}$ forms a neighbourhood basis at $x$.
Examples
If $\mathbb{R}$ has its usual Euclidean topology then the neighborhoods of $0$ are all those subsets $N \subseteq \mathbb{R}$ for which there exists some real number $r > 0$ such that $(-r, r) \subseteq N$. For example, all of the following sets are neighborhoods of $0$ in $\mathbb{R}$:
$(-2, 2), \quad [-2, 2], \quad \mathbb{R}, \quad [-2, 2] \cup \{10\}, \quad [-2, 2] \cup (4, 5)$
but none of the following sets are neighborhoods of $0$:
$\varnothing, \quad [0, 2), \quad (0, 2), \quad (-2, 0) \cup (0, 2), \quad \mathbb{Q}$
where $\mathbb{Q}$ denotes the rational numbers.
If $U$ is an open subset of a topological space $X$ then for every $u \in U$, $U$ is a neighborhood of $u$ in $X$.
More generally, if $S \subseteq X$ is any set and $\operatorname{int}_X S$ denotes the topological interior of $S$ in $X$, then $S$ is a neighborhood (in $X$) of every point $x \in \operatorname{int}_X S$ and, moreover, $S$ is not a neighborhood of any other point.
Said differently, $S$ is a neighborhood of a point $x \in X$ if and only if $x \in \operatorname{int}_X S$.
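As a concrete illustration of the interior criterion, here is a small sketch of my own (not from the article) for subsets of the real line given as finite unions of intervals:

```python
# A subset of R is represented as a finite list of intervals
# (lo, hi, lo_closed, hi_closed), assumed pairwise disjoint and non-adjacent.
# Under that assumption the set is a neighborhood of p exactly when p lies
# in the open interior (lo, hi) of one of its intervals.
def is_neighborhood(intervals, p):
    return any(lo < p < hi for lo, hi, _, _ in intervals)

# Neighborhoods of 0 (each contains an open interval around 0):
assert is_neighborhood([(-2, 2, False, False)], 0)                      # (-2, 2)
assert is_neighborhood([(-2, 2, True, True), (4, 5, False, False)], 0)  # [-2, 2] U (4, 5)

# Not neighborhoods of 0 (0 is not in the interior):
assert not is_neighborhood([(0, 2, True, False)], 0)                    # [0, 2)
assert not is_neighborhood([(-2, 0, False, False), (0, 2, False, False)], 0)
```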
Neighbourhood bases
In any topological space, the neighbourhood system for a point is also a neighbourhood basis for the point. The set of all open neighbourhoods at a point forms a neighbourhood basis at that point.
For any point $x$ in a metric space, the open balls around $x$ with radii $1/n$ for $n = 1, 2, 3, \ldots$ form a countable neighbourhood basis at $x$. This means every metric space is first-countable.
Given a space $X$ with the indiscrete topology, the neighbourhood system for any point $x$ only contains the whole space: $\mathcal{N}(x) = \{ X \}$.
In the weak topology on the space of measures on a space $E$, a neighbourhood base about a measure $\mu$ is given by the sets
$\left\{ \nu : \left| \int f_i \, d\nu - \int f_i \, d\mu \right| < \varepsilon_i \text{ for } i = 1, \ldots, n \right\},$
where the $f_i$ are continuous bounded functions from $E$ to the real numbers and the $\varepsilon_i$ are positive real numbers.
Seminormed spaces and topological groups
In a seminormed space, that is a vector space with the topology induced by a seminorm, all neighbourhood systems can be constructed by translation of the neighbourhood system for the origin: $\mathcal{N}(x) = x + \mathcal{N}(0)$.
This is because, by assumption, vector addition is separately continuous in the induced topology. Therefore, the topology is determined by its neighbourhood system at the origin. More generally, this remains true whenever the space is a topological group or the topology is defined by a pseudometric.
Properties
Suppose $u \in U \subseteq X$ and let $\mathcal{B}$ be a neighbourhood basis for $u$ in $X$. Make $\mathcal{B}$ into a directed set by partially ordering it by superset inclusion $\supseteq$. Then $U$ is not a neighborhood of $u$ in $X$ if and only if there exists a $\mathcal{B}$-indexed net $(x_B)_{B \in \mathcal{B}}$ in $X \setminus U$ such that $x_B \in B \setminus U$ for every $B \in \mathcal{B}$ (which implies that $x_B \to u$ in $X$).
See also
References
Bibliography
General topology | Neighbourhood system | [
"Mathematics"
] | 845 | [
"General topology",
"Topology"
] |
1,714,439 | https://en.wikipedia.org/wiki/Samarium%E2%80%93cobalt%20magnet | A samarium–cobalt (SmCo) magnet, a type of rare-earth magnet, is a strong permanent magnet made of two basic elements: samarium and cobalt.
They were developed in the early 1960s based on work done by Karl Strnat at Wright-Patterson Air Force Base and Alden Ray at the University of Dayton. In particular, Strnat and Ray developed the first formulation of SmCo5.
Samarium–cobalt magnets are generally ranked similarly in strength to neodymium magnets, but have higher temperature ratings and higher coercivity.
Attributes
Some attributes of samarium–cobalt magnets are:
Samarium–cobalt magnets are extremely resistant to demagnetization.
These magnets have good temperature stability (maximum use temperatures between 250 °C and 550 °C; Curie temperatures from 700 °C to 800 °C).
They are expensive and subject to price fluctuations (cobalt is market price sensitive).
Samarium–cobalt magnets have strong resistance to corrosion and oxidation, usually do not need to be coated, and can be used widely in high-temperature and harsh working conditions.
They are brittle, and prone to cracking and chipping. Samarium–cobalt magnets have maximum energy products (BHmax) that range from 14 megagauss-oersteds (MG·Oe) to 33 MG·Oe, that is, approximately 112 kJ/m3 to 264 kJ/m3; their theoretical limit is 34 MG·Oe, about 272 kJ/m3.
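As a quick cross-check of the figures above, the unit conversion is 1 MG·Oe = 100/(4π) kJ/m3 ≈ 7.96 kJ/m3; a short Python sketch (the loop values are the ones quoted in the text):

```python
# Converting maximum energy products from MG·Oe to kJ/m3.
from math import pi

MGOE_TO_KJ_PER_M3 = 100 / (4 * pi)  # ≈ 7.9577 kJ/m3 per MG·Oe

for bh_max in (14, 33, 34):  # MG·Oe values quoted above
    print(f"{bh_max} MG·Oe ≈ {bh_max * MGOE_TO_KJ_PER_M3:.0f} kJ/m3")
# Yields ≈ 111, 263, and 271 kJ/m3; the article's 112/264/272 round the
# same quantities slightly differently.
```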
Sintered samarium–cobalt magnets exhibit magnetic anisotropy, meaning they can only be magnetized in the axis of their magnetic orientation. This is done by aligning the crystal structure of the material during the manufacturing process.
Series
Samarium–cobalt magnets are available in two "series", namely SmCo5 magnets and Sm2Co17 magnets.
Series 1:5
These samarium–cobalt magnet alloys (generally written as SmCo5, or SmCo Series 1:5) have one atom of rare-earth samarium per five atoms of cobalt. By weight, this magnet alloy will typically contain 36% samarium with the balance cobalt. The energy products of these samarium–cobalt alloys range from 16 MG·Oe to 25 MG·Oe, that is, approximately 128–200 kJ/m3. These samarium–cobalt magnets generally have a reversible temperature coefficient of -0.05%/°C. Saturation magnetization can be achieved with a moderate magnetizing field. This series of magnets is easier to calibrate to a specific magnetic field than the SmCo 2:17 series magnets.
In the presence of a moderately strong magnetic field, unmagnetized magnets of this series will try to align their orientation axis to the magnetic field, thus becoming slightly magnetized. This can be an issue if postprocessing requires that the magnet be plated or coated. The slight field that the magnet picks up can attract debris during the plating or coating process, causing coating failure or a mechanically out-of-tolerance condition.
Br drifts with temperature, and this drift is one of the important characteristics of magnet performance. Some applications, such as inertial gyroscopes and travelling wave tubes (TWTs), need a constant field over a wide temperature range. The reversible temperature coefficient (RTC) of Br is defined as
RTC = (ΔBr/Br) × (1/ΔT) × 100%.
To address these requirements, temperature compensated magnets were developed in the late 1970s. For conventional SmCo magnets, Br decreases as temperature increases. Conversely, for GdCo magnets, Br increases as temperature increases within certain temperature ranges. By combining samarium and gadolinium in the alloy, the temperature coefficient can be reduced to nearly zero.
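To make the RTC figure concrete, a small Python sketch estimates the reversible drift of Br with temperature. Only the -0.05%/°C coefficient comes from the text above; the Br value and temperatures are illustrative assumptions.

```python
# Reversible drift of remanence Br with temperature, using
# Br(T) ≈ Br(T_ref) * (1 + RTC/100 * (T - T_ref)).

def br_at(br_ref, rtc_percent_per_c, temp_c, ref_c=20.0):
    """Estimate Br (same unit as br_ref) at temp_c from its value at ref_c."""
    return br_ref * (1 + rtc_percent_per_c / 100 * (temp_c - ref_c))

br_20c = 1.0  # illustrative remanence at 20 °C, in tesla
for t in (20, 100, 200, 300):
    print(f"{t:3d} °C: Br ≈ {br_at(br_20c, -0.05, t):.3f} T")
# At 300 °C the -0.05 %/°C coefficient gives roughly a 14 % reversible drop.
```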
SmCo5 magnets have a very high coercivity (coercive force); that is, they are not easily demagnetized. They are fabricated by packing large-grain, single-domain magnetic powders. All of the magnetic domains are aligned with the easy-axis direction, so all of the domain walls are at 180 degrees. When there are no impurities, the reversal process of the bulk magnet is equivalent to that of single-domain particles, where coherent rotation is the dominant mechanism. However, due to imperfections in fabrication, impurities may be introduced into the magnets, where they form nuclei. In this case, because the impurities may have lower anisotropy or misaligned easy axes, their directions of magnetization are easier to rotate, which breaks the 180° domain wall configuration. In such materials, the coercivity is controlled by nucleation. To obtain high coercivity, impurity control is therefore critical in the fabrication process.
Series 2:17
These alloys (written as Sm2Co17, or SmCo Series 2:17) are age-hardened with a composition of two atoms of rare-earth samarium per 13–17 atoms of transition metals (TM). The TM content is rich in cobalt, but contains other elements such as iron and copper. Other elements, such as zirconium and hafnium, may be added in small quantities to achieve a better heat-treatment response. By weight, the alloy will generally contain 25% samarium. The maximum energy products of these alloys range from 20 to 32 MG·Oe, that is, about 160–260 kJ/m3. These alloys have the best reversible temperature coefficient of all rare-earth alloys, typically being -0.03%/°C. The "second generation" materials can also be used at higher temperatures.
In Sm2Co17 magnets, the coercivity mechanism is based on domain wall pinning. Impurities inside the magnets impede the domain wall motion and thereby resist the magnetization reversal process. To increase the coercivity, impurities are intentionally added during the fabrication process.
Production
Samarium–cobalt alloys are typically machined in the unmagnetized state. Samarium–cobalt should be ground using a wet grinding process (water-based coolants) and a diamond grinding wheel. The same type of process is required when drilling holes or machining other confined features. The grinding waste produced must not be allowed to dry completely, as samarium–cobalt has a low ignition point. A small spark, such as that produced by static electricity, can easily initiate combustion. The resulting fire can be extremely hot and difficult to control.
The reduction/melt method and reduction/diffusion method are used to manufacture samarium–cobalt magnets. The reduction/melt method will be described, since it is used for both SmCo5 and Sm2Co17 production. The raw materials are melted in an induction furnace filled with argon gas. The mixture is cast into a mold and cooled with water to form an ingot. The ingot is pulverized and the particles are milled to further reduce the particle size. The resulting powder is pressed in a die of the desired shape within a magnetic field that orients the magnetic domains of the particles. Sintering is applied at a temperature of 1100–1250 °C, followed by solution treatment at 1100–1200 °C; tempering is finally performed on the magnet at about 700–900 °C. The magnet is then ground and magnetized to increase its magnetic properties. The finished product is tested, inspected, and packed.
Samarium can be substituted by a portion of other rare-earth elements including praseodymium, cerium, and gadolinium; the cobalt can be substituted by a portion of other transition metals including iron, copper, and zirconium.
Uses
Fender used one of designer Bill Lawrence's Samarium Cobalt Noiseless series of electric guitar pickups in Fender's Vintage Hot Rod '57 Stratocaster. These pickups were used in American Deluxe Series Guitars and Basses from 2004 until early 2010.
Samarium-cobalt (SmCo) magnets are used in aerospace and defense due to their exceptional magnetic properties. They are utilized in high-performance motors and actuators, precision sensors and gyroscopes, and satellite systems where stability and reliability are essential. They are also used in medical technologies, including MRI machines, pacemakers, and medical pumps.
In the mid-1980s some expensive headphones such as the Ross RE-278 used samarium–cobalt "Super Magnet" transducers.
Other uses include:
High-end electric motors used in the more competitive classes in slotcar racing
Turbomachinery
Traveling-wave tube field magnets
Applications that will require the system to function at cryogenic temperatures or very hot temperatures (over 180 °C)
Applications in which performance is required to be consistent with temperature change
Benchtop NMR spectrometers
Rotary encoders where it performs the function of magnetic actuator
See also
References
Cobalt alloys
Ferromagnetic materials
Loudspeaker technology
Magnetic alloys
Samarium compounds | Samarium–cobalt magnet | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,830 | [
"Ferromagnetic materials",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic alloys",
"Materials",
"Alloys",
"Matter",
"Cobalt alloys"
] |
1,715,797 | https://en.wikipedia.org/wiki/Water%20splitting | Water splitting is the chemical reaction in which water is broken down into oxygen and hydrogen: 2 H2O → 2 H2 + O2.
Efficient and economical water splitting would be a technological breakthrough that could underpin a hydrogen economy. A version of water splitting occurs in photosynthesis, but hydrogen is not produced. The reverse of water splitting is the basis of the hydrogen fuel cell. Water splitting using solar radiation has not been commercialized.
Electrolysis
Electrolysis of water is the decomposition of water (H2O) into oxygen (O2) and hydrogen (H2): 2 H2O → 2 H2 + O2.
Production of hydrogen from water is energy intensive. Usually, the electricity consumed is more valuable than the hydrogen produced, so this method has not been widely used. In contrast with low-temperature electrolysis, high-temperature electrolysis (HTE) of water converts more of the initial heat energy into chemical energy (hydrogen), potentially doubling efficiency to about 50%. Because some of the energy in HTE is supplied in the form of heat, less of the energy must be converted twice (from heat to electricity, and then to chemical form), and so the process is more efficient.
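A back-of-the-envelope calculation shows where the HTE advantage comes from. Using the standard-condition values ΔH ≈ 285.8 kJ/mol and ΔG ≈ 237.1 kJ/mol for splitting liquid water at 25 °C (textbook figures assumed here, not taken from this article), only the ΔG part must be supplied as electricity; dividing by 2F gives the familiar cell voltages:

```python
# Minimum electrical vs. total energy for water electrolysis at 25 °C.
F = 96485.0   # Faraday constant, C/mol
dH = 285.8e3  # J per mol H2, total energy (higher heating value)
dG = 237.1e3  # J per mol H2, minimum electrical energy

print(f"reversible voltage:    {dG / (2 * F):.2f} V")   # ≈ 1.23 V
print(f"thermoneutral voltage: {dH / (2 * F):.2f} V")   # ≈ 1.48 V
print(f"share of dH suppliable as heat: {(dH - dG) / dH:.0%}")  # ≈ 17 %
```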
High-temperature electrolysis (also HTE or steam electrolysis) is a method for the production of hydrogen from water with oxygen as a by-product.
Water splitting in photosynthesis
A version of water splitting occurs in photosynthesis but the electrons are shunted, not to protons, but to the electron transport chain in photosystem II. The electrons are used to reduce carbon dioxide, which eventually becomes incorporated into sugars.
Photo-excitation of photosystem I initiates electron transfer to a series of electron acceptors, eventually reducing NADP+ to NADPH. The oxidized photosystem I captures electrons from photosystem II through a series of steps involving plastoquinone, cytochromes, and plastocyanin. Oxidized photosystem II oxidizes the oxygen-evolving complex (OEC), which converts water into O2 and protons. Since the active site of the OEC contains manganese, much research has aimed at synthetic Mn compounds as catalysts for water oxidation.
In biological hydrogen production, the electrons produced by the photosystem are shunted not to a chemical synthesis apparatus but to hydrogenases, resulting in formation of H2. This biohydrogen is produced in a bioreactor.
Photoelectrochemical water splitting
Using electricity produced by photovoltaic systems potentially offers the cleanest way to produce hydrogen, other than nuclear, wind, geothermal, and hydroelectric. Again, water is broken down into hydrogen and oxygen by electrolysis, but the electrical energy is obtained by a photoelectrochemical cell (PEC) process. The system is also named artificial photosynthesis.
Catalysis and proton-relay membranes are often the focus of development.
Photocatalytic water splitting
The conversion of solar energy into hydrogen by means of a water-splitting process can be more efficient if the process is assisted by photocatalysts suspended in water rather than by a photovoltaic or electrolytic system, so that the reaction takes place in a single step.
Radiolysis
Energetic nuclear radiation can break the chemical bonds of a water molecule. In the Mponeng gold mine, South Africa, researchers found in a naturally high radiation zone a community dominated by Desulforudis audaxviator, a new phylotype of Desulfotomaculum, feeding on primarily radiolytically produced H2.
Thermal decomposition of water
In thermolysis, water molecules split into hydrogen and oxygen. For example, at about 2200 °C roughly three percent of all H2O molecules are dissociated into various combinations of hydrogen and oxygen atoms, mostly H, H2, O, O2, and OH. Other reaction products like H2O2 or HO2 remain minor. At the very high temperature of about 3000 °C, more than half of the water molecules are decomposed. At ambient temperatures only one molecule in 100 trillion dissociates by the effect of heat. The high temperature requirements and material constraints have limited the applications of the thermal decomposition approach.
Other research includes thermolysis on defective carbon substrates, which makes hydrogen production possible at temperatures just under 1000 °C.
One side benefit of a nuclear reactor that produces both electricity and hydrogen is that it can shift production between the two. For instance, a nuclear plant might produce electricity during the day and hydrogen at night, matching its electrical generation profile to the daily variation in demand. If the hydrogen can be produced economically, this scheme would compete favorably with existing grid energy storage schemes. As of 2005, there was sufficient hydrogen demand in the United States that all daily peak generation could be handled by such plants.
The hybrid thermoelectric copper–chlorine cycle is a cogeneration system using the waste heat from nuclear reactors, specifically the CANDU supercritical water reactor.
Solar-thermal
Concentrated solar power can achieve the high temperatures necessary to split water. Hydrosol-2 is a 100-kilowatt pilot plant at the Plataforma Solar de Almería in Spain which uses sunlight to obtain the high temperatures required to split water. Hydrosol-2 has been in operation since 2008. The design of this 100-kilowatt pilot plant is based on a modular concept. As a result, it may be possible that this technology could be readily scaled up to the megawatt range by multiplying the available reactor units and by connecting the plant to heliostat fields (fields of sun-tracking mirrors) of a suitable size.
Material constraints due to the required high temperatures are reduced by the design of a membrane reactor with simultaneous extraction of hydrogen and oxygen that exploits a defined thermal gradient and the fast diffusion of hydrogen. With concentrated sunlight as heat source and only water in the reaction chamber, the produced gases are very clean, with the only possible contaminant being water. A "Solar Water Cracker" with a concentrator of about 100 m² can produce almost one kilogram of hydrogen per sunshine hour.
The sulfur–iodine cycle (S–I cycle) is a series of thermochemical processes used to produce hydrogen. The S–I cycle consists of three chemical reactions whose net reactant is water and whose net products are hydrogen and oxygen. All other chemicals are recycled. The S–I process requires an efficient source of heat.
More than 352 thermochemical cycles have been described for water splitting by thermolysis. These cycles promise to produce hydrogen and oxygen from water and heat without using electricity. Since all the input energy for such processes is heat, they can be more efficient than high-temperature electrolysis. This is because the efficiency of electricity production is inherently limited. Thermochemical production of hydrogen using chemical energy from coal or natural gas is generally not considered, because the direct chemical path is more efficient.
For all the thermochemical processes, the summary reaction is that of the decomposition of water:
2 H2O ⇌ 2 H2 + O2 (driven by heat)
References
External links
Le dégagement de l'hydrogène lors de l'électrolyse à cathode de Ni oxydée, JEAC
Environmental chemistry
Fuels
Hydrogen production
Industrial gases | Water splitting | [
"Chemistry",
"Environmental_science"
] | 1,482 | [
"Chemical energy sources",
"Environmental chemistry",
"Industrial gases",
"nan",
"Fuels",
"Chemical process engineering"
] |
1,715,834 | https://en.wikipedia.org/wiki/Water%E2%80%93gas%20shift%20reaction | The water–gas shift reaction (WGSR) describes the reaction of carbon monoxide and water vapor to form carbon dioxide and hydrogen:
CO + H2O ⇌ CO2 + H2
The water gas shift reaction was discovered by Italian physicist Felice Fontana in 1780. It was not until much later that the industrial value of this reaction was realized. Before the early 20th century, hydrogen was obtained by reacting steam under high pressure with iron to produce iron oxide and hydrogen. With the development of industrial processes that required hydrogen, such as the Haber–Bosch ammonia synthesis, a less expensive and more efficient method of hydrogen production was needed. As a resolution to this problem, the WGSR was combined with the gasification of coal to produce hydrogen.
Applications
The WGSR is a highly valuable industrial reaction that is used in the manufacture of ammonia, hydrocarbons, methanol, and hydrogen. Its most important application is in conjunction with the conversion of carbon monoxide from steam reforming of methane or other hydrocarbons in the production of hydrogen. In the Fischer–Tropsch process, the WGSR is one of the most important reactions used to balance the H2/CO ratio. It provides a source of hydrogen at the expense of carbon monoxide, which is important for the production of high purity hydrogen for use in ammonia synthesis.
The water–gas shift reaction may be an undesired side reaction in processes involving water and carbon monoxide, e.g. the rhodium-based Monsanto process. The iridium-based Cativa process uses less water, which suppresses this reaction.
Fuel cells
The WGSR can aid in the efficiency of fuel cells by increasing hydrogen production. The WGSR is considered a critical component in the reduction of carbon monoxide concentrations in cells that are susceptible to carbon monoxide poisoning such as the proton-exchange membrane (PEM) fuel cell. The benefits of this application are two-fold: not only would the water gas shift reaction effectively reduce the concentration of carbon monoxide, but it would also increase the efficiency of the fuel cells by increasing hydrogen production. Unfortunately, current commercial catalysts that are used in industrial water gas shift processes are not compatible with fuel cell applications. With the high demand for clean fuel and the critical role of the water gas shift reaction in hydrogen fuel cells, the development of water gas shift catalysts for the application in fuel cell technology is an area of current research interest.
Catalysts for fuel cell application would need to operate at low temperatures. Since the WGSR is slow at lower temperatures where equilibrium favors hydrogen production, WGS reactors require large amounts of catalysts, which increases their cost and size beyond practical application. The commercial LTS catalyst used in large scale industrial plants is also pyrophoric in its inactive state and therefore presents safety concerns for consumer applications. Developing a catalyst that can overcome these limitations is relevant to implementation of a hydrogen economy.
Sorption enhanced water gas shift
The WGS reaction is used in combination with the solid adsorption of CO2 in the sorption enhanced water gas shift (SEWGS) in order to produce a high pressure hydrogen stream from syngas.
Reaction conditions
The equilibrium of this reaction shows a significant temperature dependence and the equilibrium constant decreases with an increase in temperature, that is, higher hydrogen formation is observed at lower temperatures.
Temperature dependence
With increasing temperature, the reaction rate increases, but hydrogen production becomes less favorable thermodynamically since the water gas shift reaction is moderately exothermic; this shift in chemical equilibrium can be explained according to Le Chatelier's principle. Over the temperature range of 600–2000 K, the equilibrium constant for the WGSR is well approximated by the relationship $K_{eq} = \exp\!\left(\frac{4577.8}{T} - 4.33\right)$, with $T$ in kelvin.
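A short Python sketch of this temperature dependence, using the empirical fit quoted above (the numeric correlation is a common textbook approximation and should be treated as an assumption of this example):

```python
# Equilibrium constant of CO + H2O <-> CO2 + H2 over 600-2000 K,
# from the empirical fit K_eq = exp(4577.8/T - 4.33).
from math import exp

def k_eq(temp_k):
    """WGSR equilibrium constant at temp_k (kelvin)."""
    return exp(4577.8 / temp_k - 4.33)

for t in (600, 800, 1000, 1500, 2000):
    print(f"T = {t:4d} K: K_eq ≈ {k_eq(t):6.2f}")
# K_eq falls from ~27 at 600 K to ~0.13 at 2000 K: hydrogen formation is
# favoured at low temperature, as stated above.
```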
Practical concerns
In order to take advantage of both the thermodynamics and kinetics of the reaction, the industrial scale water gas shift reaction is conducted in multiple adiabatic stages consisting of a high temperature shift (HTS) followed by a low temperature shift (LTS) with intersystem cooling. The initial HTS takes advantage of the high reaction rates, but results in incomplete conversion of carbon monoxide. A subsequent low temperature shift reactor lowers the carbon monoxide content to <1%. Commercial HTS catalysts are based on iron oxide–chromium oxide, and the LTS catalyst is copper-based. The copper catalyst is susceptible to poisoning by sulfur. Sulfur compounds are removed prior to the LTS reactor by a guard bed. An important limitation for the HTS is the H2O/CO ratio, where low ratios may lead to side reactions such as the formation of metallic iron, methanation, carbon deposition, and the Fischer–Tropsch reaction.
High temperature shift catalysis
The typical composition of commercial HTS catalyst has been reported as 74.2% Fe2O3, 10.0% Cr2O3, 0.2% MgO (the remaining percentage attributed to volatile components). The chromium acts to stabilize the iron oxide and prevents sintering. The operation of HTS catalysts occurs within the temperature range of 310 °C to 450 °C. The temperature increases along the length of the reactor due to the exothermic nature of the reaction. As such, the inlet temperature is maintained at 350 °C to prevent the exit temperature from exceeding 550 °C. Industrial reactors operate at a range from atmospheric pressure to 8375 kPa (82.7 atm). The search for high performance HT WGS catalysts remains an intensive topic of research in the fields of chemistry and materials science. Activation energy is a key criterion for the assessment of catalytic performance in WGS reactions. To date, some of the lowest activation energy values have been found for catalysts consisting of copper nanoparticles on ceria support materials, with values as low as Ea = 34 kJ/mol reported for hydrogen generation.
Low temperature shift catalysis
Catalysts for the lower temperature WGS reaction are commonly based on copper or copper oxide loaded onto ceramic phases. While the most common supports include alumina or alumina with zinc oxide, other supports may include rare-earth oxides, spinels, or perovskites. A typical composition of a commercial LTS catalyst has been reported as 32–33% CuO, 34–53% ZnO, 15–33% Al2O3. The active catalytic species is CuO. The function of ZnO is to provide structural support as well as prevent the poisoning of copper by sulfur. The Al2O3 prevents dispersion and pellet shrinkage. The LTS shift reactor operates at a range of 200–250 °C. The upper temperature limit is due to the susceptibility of copper to thermal sintering. These lower temperatures also reduce the occurrence of side reactions that are observed in the case of the HTS. Noble metals such as platinum, supported on ceria, have also been used for LTS.
Mechanism
The WGSR has been extensively studied for over a hundred years. The kinetically relevant mechanism depends on the catalyst composition and the temperature. Two mechanisms have been proposed: an associative Langmuir–Hinshelwood mechanism and a redox mechanism. The redox mechanism is generally regarded as kinetically relevant during the high-temperature WGSR (> 350 °C) over the industrial iron-chromia catalyst. Historically, there has been much more controversy surrounding the mechanism at low temperatures. Recent experimental studies confirm that the associative carboxyl mechanism is the predominant low temperature pathway on metal-oxide-supported transition metal catalysts.
Associative mechanism
In 1920 Armstrong and Hilditch first proposed the associative mechanism. In this mechanism CO and H2O are adsorbed onto the surface of the catalyst, followed by formation of an intermediate and the desorption of H2 and CO2. In general, H2O dissociates onto the catalyst to yield adsorbed OH and H. The dissociated water reacts with CO to form a carboxyl or formate intermediate. The intermediate subsequently dehydrogenates to yield CO2 and adsorbed H. Two adsorbed H atoms recombine to form H2.
There has been significant controversy surrounding the kinetically relevant intermediate during the associative mechanism. Experimental studies indicate that both intermediates contribute to the reaction rate over metal oxide supported transition metal catalysts. However, the carboxyl pathway accounts for about 90% of the total rate owing to the thermodynamic stability of adsorbed formate on the oxide support. The active site for carboxyl formation consists of a metal atom adjacent to an adsorbed hydroxyl. This ensemble is readily formed at the metal-oxide interface and explains the much higher activity of oxide-supported transition metals relative to extended metal surfaces. The turn-over-frequency for the WGSR is proportional to the equilibrium constant of hydroxyl formation, which rationalizes why reducible oxide supports (e.g. CeO2) are more active than irreducible supports (e.g. SiO2) and extended metal surfaces (e.g. Pt). In contrast to the active site for carboxyl formation, formate formation occurs on extended metal surfaces. The formate intermediate can be eliminated during the WGSR by using oxide-supported atomically dispersed transition metal catalysts, further confirming the kinetic dominance of the carboxyl pathway.
Redox mechanism
The redox mechanism involves a change in the oxidation state of the catalytic material. In this mechanism, CO is oxidized by an O-atom intrinsically belonging to the catalytic material to form CO2. A water molecule undergoes dissociative adsorption at the newly formed O-vacancy to yield two hydroxyls. The hydroxyls disproportionate to yield H2 and return the catalytic surface back to its pre-reaction state.
Homogeneous models
The mechanism entails nucleophilic attack of water or hydroxide on an M–CO center, generating a metallacarboxylic acid.
Thermodynamics
The WGSR is exergonic, with the following thermodynamic parameters at room temperature (298 K):
Free energy: ΔG⊖ = –28.6 kJ/mol
Enthalpy: ΔH⊖ = –41.2 kJ/mol
Entropy: ΔS⊖ = –41.84 J/(K·mol)
In aqueous solution, the reaction is less exergonic.
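The tabulated free energy fixes the room-temperature equilibrium constant through ΔG⊖ = −RT ln K; a one-line Python check:

```python
# Room-temperature equilibrium constant from the tabulated free energy.
from math import exp

R, T = 8.314, 298.0  # J/(K·mol), K
dG = -28.6e3         # J/mol, from the table above

print(f"K(298 K) = exp(-dG/RT) ≈ {exp(-dG / (R * T)):.2e}")  # ≈ 1e5
# A large K: at room temperature the forward reaction is strongly favoured,
# consistent with the reaction being exergonic.
```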
Reverse water–gas shift
In the conversion of carbon dioxide to useful materials, the water–gas shift reaction is used to produce carbon monoxide from hydrogen and carbon dioxide. This is sometimes called the reverse water–gas shift reaction.
Water gas is defined as a fuel gas consisting mainly of carbon monoxide (CO) and hydrogen (H2). The term 'shift' in water–gas shift means changing the water gas composition (CO:H2) ratio. The ratio can be increased by adding CO2 or reduced by adding steam to the reactor.
See also
In situ resource utilization
Lane hydrogen producer
PROX
Industrial catalysts
Sorption enhanced water gas shift
Syngas
References
Inorganic reactions
Chemical processes
Hydrogen production
Industrial gases | Water–gas shift reaction | [
"Chemistry"
] | 2,315 | [
"Inorganic reactions",
"Chemical processes",
"Industrial gases",
"nan",
"Chemical process engineering"
] |
15,089,522 | https://en.wikipedia.org/wiki/Category%20of%20topological%20vector%20spaces | In mathematics, the category of topological vector spaces is the category whose objects are topological vector spaces and whose morphisms are continuous linear maps between them. This is a category because the composition of two continuous linear maps is again a continuous linear map. The category is often denoted TVect or TVS.
Fixing a topological field K, one can also consider the subcategory TVectK of topological vector spaces over K with continuous K-linear maps as the morphisms.
TVect is a concrete category
Like many categories, the category TVect is a concrete category, meaning its objects are sets with additional structure (i.e. a vector space structure and a topology) and its morphisms are functions preserving this structure. There are obvious forgetful functors into the category of topological spaces, the category of vector spaces and the category of sets.
TVect is a topological category
The category is topological, which means loosely speaking that it relates to its "underlying category", the category of vector spaces, in the same way that Top relates to Set. Formally, for every $K$-vector space $V$ and every family $((V_i, \tau_i))_{i \in I}$ of topological $K$-vector spaces together with $K$-linear maps $f_i : V \to V_i$, there exists a vector space topology $\tau$ on $V$ so that the following property is fulfilled:
Whenever $g : Z \to V$ is a $K$-linear map from a topological $K$-vector space $(Z, \sigma)$, it holds that
$g : (Z, \sigma) \to (V, \tau)$ is continuous $\iff$ for every $i \in I$, the map $f_i \circ g : (Z, \sigma) \to (V_i, \tau_i)$ is continuous.
The topological vector space $(V, \tau)$ is called "initial object" or "initial structure" with respect to the given data.
If one replaces "vector space" by "set" and "linear map" by "map", one gets a characterisation of the usual initial topologies in Top. This is the reason why categories with this property are called "topological".
There are numerous consequences of this property. For example:
"Discrete" and "indiscrete" objects exist. A topological vector space is indiscrete iff it is the initial structure with respect to the empty family. A topological vector space is discrete iff it is the initial structure with respect to the family of all possible linear maps into all topological vector spaces. (This family is a proper class, but that does not matter: Initial structures with respect to all classes exists iff they exists with respect to all sets)
Final structures (the similar defined analogue to final topologies) exist. But there is a catch: While the initial structure of the above property is in fact the usual initial topology on with respect to , the final structures do not need to be final with respect to given maps in the sense of Top. For example: The discrete objects (= final with respect to the empty family) in do not carry the discrete topology.
Since the diagram of forgetful functors TVect → Vect → Set and TVect → Top → Set commutes,
and the forgetful functor from Vect to Set is right adjoint, the forgetful functor from TVect to Top is right adjoint too (and the corresponding left adjoints fit in an analogous commutative diagram). This left adjoint defines "free topological vector spaces". Explicitly these are free $K$-vector spaces equipped with a certain initial topology.
Since Vect is (co)complete, TVect is (co)complete too.
See also
References
Topological vector spaces
Topological vector spaces | Category of topological vector spaces | [
"Mathematics"
] | 649 | [
"Mathematical structures",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Category theory",
"Categories in category theory"
] |
15,094,186 | https://en.wikipedia.org/wiki/Graph%20automorphism | In the mathematical field of graph theory, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge–vertex connectivity.
Formally, an automorphism of a graph $G = (V, E)$ is a permutation $\sigma$ of the vertex set $V$, such that the pair of vertices $(u, v)$ forms an edge if and only if the pair $(\sigma(u), \sigma(v))$ also forms an edge. That is, it is a graph isomorphism from $G$ to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs.
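The definition translates directly into a brute-force test: enumerate vertex permutations and keep those that map edges to edges and non-edges to non-edges. The Python sketch below is exponential in the number of vertices and is meant only to illustrate the definition; the function name is ours.

```python
# Brute-force enumeration of graph automorphisms (tiny graphs only).
from itertools import permutations

def automorphisms(vertices, edges):
    """Yield every automorphism of an undirected graph as a vertex mapping."""
    edge_set = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        sigma = dict(zip(vertices, perm))
        if all((frozenset({sigma[u], sigma[v]}) in edge_set)
               == (frozenset({u, v}) in edge_set)
               for i, u in enumerate(vertices) for v in vertices[i + 1:]):
            yield sigma

# The path graph 1-2-3 has exactly two automorphisms: identity and reversal.
for sigma in automorphisms([1, 2, 3], [(1, 2), (2, 3)]):
    print(sigma)
```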
The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group, the automorphism group of the graph. In the opposite direction, by Frucht's theorem, all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph.
Computational complexity
Constructing the automorphism group of a graph, in the form of a list of generators, is polynomial-time equivalent to the graph isomorphism problem, and therefore solvable in quasi-polynomial time, that is, with running time $2^{O((\log n)^{c})}$ for some fixed $c$.
Consequently, like the graph isomorphism problem, the problem of finding a graph's automorphism group is known to belong to the complexity class NP, but not known to be in P nor to be NP-complete, and therefore may be NP-intermediate.
The easier problem of testing whether a graph has any symmetries (nontrivial automorphisms), known as the graph automorphism problem, also has no known polynomial time solution.
There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant.
The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown. By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is ♯P-complete.
Algorithms, software and applications
While no worst-case polynomial-time algorithms are known for the general graph automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY, BLISS and SAUCY. SAUCY and BLISS are particularly efficient for sparse graphs; e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce a canonical labeling, whereas SAUCY is currently optimized for solving graph automorphism. An important observation is that for a graph on $n$ vertices, the automorphism group can be specified by no more than $n - 1$ generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of $n$, which is important in runtime analysis of these algorithms. However, this has not been established as fact, as of March 2012.
Practical applications of graph automorphism include graph drawing and other visualization tasks, and solving structured instances of Boolean satisfiability arising in the context of formal verification and logistics. Molecular symmetry can predict or explain chemical properties.
Symmetry display
Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible, or by explicitly identifying symmetries and using them to guide vertex placement in the drawing. It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized.
Graph families defined by their automorphisms
Several families of graphs are defined by having certain types of automorphisms:
An asymmetric graph is an undirected graph with only the trivial automorphism.
A vertex-transitive graph is an undirected graph in which every vertex may be mapped by an automorphism into any other vertex.
An edge-transitive graph is an undirected graph in which every edge may be mapped by an automorphism into any other edge.
A symmetric graph is a graph such that every pair of adjacent vertices may be mapped by an automorphism into any other pair of adjacent vertices.
A distance-transitive graph is a graph such that every pair of vertices may be mapped by an automorphism into any other pair of vertices that are the same distance apart.
A semi-symmetric graph is a graph that is edge-transitive but not vertex-transitive.
A half-transitive graph is a graph that is vertex-transitive and edge-transitive but not symmetric.
A skew-symmetric graph is a directed graph together with a permutation σ on the vertices that maps edges to edges but reverses the direction of each edge. Additionally, σ is required to be an involution.
Inclusion relationships between these families are indicated by the following table:
See also
Algebraic graph theory
Distinguishing coloring
References
External links
Algebraic graph theory
de:Automorphismus#Graphen | Graph automorphism | [
"Mathematics"
] | 1,139 | [
"Mathematical relations",
"Graph theory",
"Algebra",
"Algebraic graph theory"
] |
15,096,444 | https://en.wikipedia.org/wiki/Paper%20model | Paper models, also called card models or papercraft, are models constructed mainly from sheets of heavy paper, paperboard, card stock, or foam.
Details
This may be considered a broad category that contains origami and card modeling. Origami is the process of making a paper model by folding a single piece of paper without glue or cutting, while the variation kirigami permits cutting. Card modeling is making scale models from sheets of cardstock on which the parts were printed, usually in full color. These pieces would be cut out, folded, scored, and glued together. Papercraft is the art of combining these model types to build complex creations such as wearable suits of armor, life-size characters, and accurate weapon models.
Sometimes the model pieces can be punched out. More frequently the printed parts must be cut out. Edges may be scored to aid folding. The parts are usually glued together with polyvinyl acetate glue ("white glue", "PVA"). In this kind of modeling, the sections are usually pre-painted, so there is no need to paint the model after completion. Some enthusiasts may enhance the model by painting and detailing. Due to the nature of the paper medium, the model may be sealed with varnish or filled with spray foam to last longer. Some enthusiasts also use papercraft to build durable life-size props, first assembling the craft and then covering it with resin and painting it. Some also print on photo paper and heat-laminate the parts, which keeps the printed side's colors from wearing out and improves the realism of certain kinds of models (ships, cars, buses, trains, etc.). Paper crafts can also be used as references for building props in other materials.
History
The first paper models appeared in Europe in the 17th century, with the earliest commercial models appearing in French toy catalogs in 1800. Printed card became common in magazines in the early part of the 20th century. The popularity of card modeling boomed during World War II, when paper was one of the few items whose use and production was not heavily regulated.
Micromodels, designed and published in England from 1941, were very popular, with 100 different models, including architecture, ships, and aircraft. But as plastic model kits became more commonly available, interest in paper models decreased.
Availability
The Robert Freidus Collection, held at the V&A Museum of Childhood, has over 14,000 card models, exclusively in the category of architectural paper models. Since paper model patterns can be easily printed and assembled, the Internet has become a popular means of exchanging them. Commercial corporations have recently begun using downloadable paper models for their marketing (examples are Yamaha and Canon).
The availability of numerous models on the Internet at little or no cost, which can then be downloaded and printed on inexpensive inkjet printers has caused its popularity again to increase worldwide. Home printing also allows models to be scaled up or down easily (for example, in order to make two models from different authors, in different scales, match each other in size), although the paper weight might need to be adjusted in the same ratio.
Inexpensive kits are available from dedicated publishers (mostly based in Eastern Europe; examples include Halinski, JSC Models, and Maly Modelarz), portions of whose catalogs date back to 1950.
Experienced hobbyists often scratchbuild models, either by first hand drawing or using software such as Adobe Illustrator and Inkscape. An historical example of highly specialized software is Designer Castles for BBC Micro and Acorn Archimedes platforms, which was developed as a tool for creation of card model castles. CAD and CG software, such as Rhino 3D, 3DS Max, Blender, and specialist software, like Pepakura Designer from Tama Software, Dunreeb Cutout or Ultimate Papercraft 3D, may be employed to convert 3D computer models into two-dimensional printable templates for assembly.
3D models to paper
The use of 3D models greatly assists in the construction of paper models, with video game models being the most prevalent source. The video game or source in question will have to be loaded into the computer. Various methods of extracting the model exist, including using a model viewer and exporting it into a workable file type, or capturing the model from the emulation directly. The methods of capturing the model are often unique to the subject and the tools available. Readability of file-formats including proprietary ones could mean that a model viewer and exporter is unavailable outside of the developer. Using other tools that capture rendered 3D models and textures is often the only way to obtain them. In this case, the designer may have to arrange the textures and the wireframe model on a 3D program, such as SketchUp, 3DS MAX, Metasequoia, or Blender before exporting it to a papercraft creating program, such as Dunreeb Cutout or Pepakura Designer by Tama Software. From there the model is typically refined to give a proper layout and construction tabs that will affect the overall appearance and difficulty in constructing the model.
Subjects
Because people can create their own patterns, paper models are limited only by their designers' imaginations and ability to manipulate paper into forms. Vehicles of all forms, from cars and cargo trucks to space shuttles, are a frequent subject of paper models, some using photo-realistic textures from their real-life counterparts for extremely fine details. Architecture models can be very simple and crude forms to very detailed models with thousands of pieces to assemble. The most prevalent designs are from video games, due to their popularity and ease of producing paper models.
On the Web, enthusiasts can find hundreds of models from different designers across a wide range of subjects. The models include very difficult and ambitious paper projects, such as life-sized and complex creations. Architectural paper models are popular with model railway enthusiasts.
Various models are used in tabletop gaming, primarily wargaming. Scale paper models allow for easy production of armies and buildings for use in gaming and that can be scaled up or down readily or produced as desired. Whether they be three-dimensional models or two-dimensional icons, players are able to personalize and modify the models to bear unique unit designations and insignias for gaming.
See also
Net
Cardboard modeling
Paper Aeroplane
Origamic architecture
Superquick
Leo Monahan
Omocha-e
References
External links
Software for creating paper models
Pepakura Designer
Dunreeb Cutout
Ultimate Papercraft 3D
PaperMaker
Paper products
Scale modeling
Paper toys | Paper model | [
"Physics"
] | 1,316 | [
"Scale modeling"
] |
2,390,407 | https://en.wikipedia.org/wiki/Dogic | The Dogic is an icosahedron-shaped puzzle like the Rubik's Cube. The 5 triangles meeting at its tips may be rotated, or 5 entire faces (including the triangles) around the tip may be rotated. It has a total of 80 movable pieces to rearrange, compared to the 20 pieces in the Rubik's Cube.
History
The Dogic was patented by Zsolt and Robert Vecsei in Hungary on 20 October 1993. The patent was granted 28 July 1998 (HU214709). It was originally sold by VECSO in two variants under the names "Dogic" and "Dogic 2", but was only produced in quantities far short of the demand.
In 2004, Uwe Mèffert acquired the plastic molds from its original manufacturer at the request of puzzle fans and collectors worldwide, and made another production run of the Dogics. These Dogics were first shipped in January 2005, and are now being sold by Meffert in his puzzle shop, Meffert's until September 2010 when the lack of interest for Meffert's Dogics made Uwe Meffert stop his Dogic production run.
According to Uwe Mèffert, 2000 units have been produced by him.
Description
The basic design of the Dogic is an icosahedron cut into 60 triangular pieces around its 12 tips and 20 face centers. All 80 pieces can move relative to each other. There are also a good number of internal moving pieces inside the puzzle, which are necessary to keep it in one piece as its surface pieces are rearranged. There are two types of twists that it can undergo: a shallow twist which rotates the 5 triangles around a single tip, and a deep twist which rotates 5 entire faces (including the triangles around the tip) around the tip. The shallow twist moves the triangles between faces but keeps them around the same tip; the deeper twist moves the triangles between the 5 tips lying at the base of the rotated faces but keeps them on the same faces. Each triangle has a single color, while the face centers may have up to 3 colors, depending on the particular coloring scheme employed.
Solutions
The solutions for the different versions of the Dogic differ.
The 12-color Dogic is the more challenging version, where the face centers must be rearranged to match the colors of the face centers in adjacent faces. The triangles must then match the corresponding colors in the face centers. The face centers are mathematically equivalent to the corner pieces of the Megaminx, and so the same algorithms may be used for solving either. The triangles are relatively easy to solve once the face centers are in place, because the 5 triangles per tip are identical in color and may be freely interchanged.
The 10-color Dogic is slightly less challenging, since there is no unique solved state: the face centers may be randomly placed relative to each other, and the result would still look 'solved'. However, it may still be desirable to put them in aesthetically pleasing arrangements, such as pairing up faces of the same color, as depicted in the second photograph. The triangles are slightly more tricky to solve than in the 12-color Dogic, because adjacent triangles in the solved state are not the same color and so cannot be freely interchanged.
The 5-color and 2-color Dogics are even less of a challenge, since there is a large number of identical pieces. These simpler versions cater to those puzzle fans who are not yet at the skill level to manage the full complexity of the 12-color Dogic.
Number of combinations
Due to different numbers of visually identical pieces in the two versions of the puzzle, they each have a different number of possible combinations. There are 60 tip pieces and 20 centres with 3 orientations, giving a theoretical maximum of 60!·20!·3^20 positions. This limit is not reached on either puzzle, due to reducing factors detailed below.
12-color Dogic
Only even permutations of centres are possible (2)
The orientation of the first 19 centres determines the orientation of the last centre. (3)
Some tip pieces are indistinguishable ((5!)^12)
The orientation of the puzzle does not matter (60): all 60 possible positions and orientations of the first center are equivalent because of the lack of fixed reference points.
This leaves 60!·20!·3^20 / (2·3·(5!)^12·60) positions for the 12-color Dogic.
The precise figure is 21 991 107 793 244 335 592 538 616 581 443 187 569 604 232 889 165 919 156 829 382 848 981 603 083 878 400 000 (roughly 22 sesvigintillion on the short scale or 22 tredecilliard on the long scale).
10-color Dogic
Only even permutations of the centres are possible (2)
Centre orientation does not matter (3^20)
Ten of the centres are visually identical to the other ten (2^10)
Some tip pieces are indistinguishable ((6!)^10)
The orientation of the puzzle does not matter (60)
This leaves 60!·20!·3^20 / (2·3^20·2^10·(6!)^10·60) = 60!·20! / (2·2^10·(6!)^10·60) positions for the 10-color Dogic.
The precise figure is 4 400 411 583 858 825 100 777 127 453 704 140 502 784 413 155 112 522 644 357 120 000 000 (roughly 4.4 unvigintillion on the short scale or 4.4 undecillion on the long scale).
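Both counts follow mechanically from the reducing factors listed above; a short Python check reproduces the precise figures quoted in this section:

```python
# Verifying the Dogic combination counts from the reducing factors above.
from math import factorial

raw = factorial(60) * factorial(20) * 3**20  # theoretical maximum

twelve_color = raw // (2 * 3 * factorial(5)**12 * 60)
ten_color = raw // (2 * 3**20 * 2**10 * factorial(6)**10 * 60)

print(twelve_color)  # the 83-digit figure quoted above (≈ 2.2 × 10^82)
print(ten_color)     # the 67-digit figure quoted above (≈ 4.4 × 10^66)
```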
See also
Rubik's Cube
Pyraminx
Skewb Diamond
Impossiball
Megaminx
Combination puzzles
Mechanical puzzles
References
Jaap's Dogic page, which includes solutions and some brief historical data.
The Magic Polyhedra Patent Page
Mechanical puzzles
Combination puzzles | Dogic | [
"Mathematics"
] | 1,174 | [
"Recreational mathematics",
"Mechanical puzzles"
] |
2,390,915 | https://en.wikipedia.org/wiki/Electron%20spectrometer | An electron spectrometer is a device used to perform different forms of electron spectroscopy and electron microscopy. This requires analyzing the energy of an incoming beam of electrons. Most electron spectrometers use a hemispherical electron energy analyzer in which the beam of electrons is bent with electric or magnetic fields. Higher energy electrons will be bent less by the field; this produces a spatially distributed range of energies.
Electron spectrometers are used on a range of scientific equipment, including particle accelerators, transmission electron microscopes, and astronomical satellites.
Types
Electron spectrometers may determine electron energy based on time of flight, retarding potential (effectively a high-pass filter), resonant collision or curvature in a deflecting field (magnetic or electric).
An electrostatic electron spectrometer uses an electric field, which causes electrons to move along field gradients, whereas magnetic devices cause electrons to move at right angles to the field. Magnetic fields act in a direction perpendicular to the electron propagation, thereby conserving speed, whereas electrostatic fields cause electrons to move along the field gradient, which may change electron energies if the direction of propagation and the field gradient are not perpendicular. Owing to these effects, sector-based designs are commonly used in electron spectrometers.
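For the hemispherical case, the ideal-analyzer relation between pass energy and the voltage across the hemispheres, eΔV = E_pass·(R2/R1 − R1/R2), makes the energy selection concrete. A minimal Python sketch, with illustrative radii and energies (not taken from any particular instrument):

```python
# Hemisphere voltage needed to pass electrons of a given kinetic energy
# through an ideal hemispherical deflection analyzer.

def deflection_voltage(e_pass_ev, r1, r2):
    """Voltage (V) between hemispheres of radii r1 < r2 for pass energy e_pass_ev (eV)."""
    return e_pass_ev * (r2 / r1 - r1 / r2)

r1, r2 = 0.075, 0.125  # metres; an illustrative 100 mm mean-radius analyzer
for e_pass in (5, 10, 20, 50):  # pass energies in eV
    print(f"E_pass = {e_pass:2d} eV -> dV ≈ {deflection_voltage(e_pass, r1, r2):6.2f} V")
# Lower pass energy needs a lower deflection voltage and gives better
# absolute energy resolution at a fixed slit width.
```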
Construction
The effective potential in the solution of motion in a magnetic or electric system with rotational symmetry leads to radial focusing onto a mean radius. By superposition of a quadrupole field, axial focusing becomes possible while the radial focusing is weakened, until the astigmatism vanishes. By breaking the rotational symmetry slightly and varying the electrostatic potential along the mean path, the spherical aberration is minimized.
All the electrons from an isotopic source may be extracted and focused into a directed beam (much like in an electron gun), which can then be analyzed. The spectrometer can use entrance and exit slits, or use a small source that only emits into a specific angle and a small detector. Photoelectron spectra from single crystals exhibit a dependency on the emission angle, and the entrance slit is needed at the entrance of the hemispherical electron analyzer in angle-resolved photoemission spectroscopy and related techniques. There, a position-sensitive detector records the energy along one direction and, depending on the additional optics, lateral position or emission angle along the other direction.
Electrostatic spectrometers preserve the spin, which can be resolved afterwards.
See also
Angle-resolved photoemission spectroscopy, for electronic band structure determination
Auger electron spectroscopy, field of analyzing material surfaces
Electron energy loss spectroscopy
PEEM
Energy filtered transmission electron microscopy
Mass spectrometry
Time-of-flight mass spectrometry
References
Electron beam
Spectrometers
Electron spectroscopy
Microscope components | Electron spectrometer | [
"Physics",
"Chemistry",
"Astronomy"
] | 566 | [
"Spectroscopy stubs",
"Electron",
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Electron beam",
"Astronomy stubs",
"Spectrometers",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
2,392,005 | https://en.wikipedia.org/wiki/Blum%20axioms | In computational complexity theory the Blum axioms or Blum complexity axioms are axioms that specify desirable properties of complexity measures on the set of computable functions. The axioms were first defined by Manuel Blum in 1967.
Importantly, Blum's speedup theorem and the Gap theorem hold for any complexity measure satisfying these axioms. The most well-known measures satisfying these axioms are those of time (i.e., running time) and space (i.e., memory usage).
Definitions
A Blum complexity measure is a pair $(\varphi, \Phi)$ with $\varphi$ a numbering of the partial computable functions and $\Phi$ a computable function
which satisfies the following Blum axioms. We write $\varphi_i$ for the i-th partial computable function under the Gödel numbering $\varphi$, and $\Phi_i$ for the partial computable function $\Phi(i)$.
the domains of $\varphi_i$ and $\Phi_i$ are identical.
the set $\{(i, x, t) : \Phi_i(x) = t\}$ is recursive.
Examples
$(\varphi, \Phi)$ is a complexity measure, if $\Phi$ is either the time or the memory (or some suitable combination thereof) required for the computation coded by $i$.
$(\varphi, \varphi)$ is not a complexity measure, since it fails the second axiom.
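A toy model can make the axioms concrete. In the Python sketch below, "programs" are generators that yield once per computation step, and Φ is the step count; this is an informal illustration under our own conventions, not a formalization. The second axiom corresponds to deciding Φ_i(x) = t by simulating at most t + 1 steps:

```python
# A toy step-counting "complexity measure": programs are Python generators
# that yield once per step; steps(program, x) plays the role of Phi_i(x).
# Axiom 1 holds trivially (the step count is defined exactly when the
# program halts); axiom 2 is mirrored by halts_in_exactly below.

def program_double(x):  # a toy program: takes x steps, then returns 2x
    for _ in range(x):
        yield
    return 2 * x

def steps(program, x, limit):
    """Number of steps of program on input x, or None if it exceeds limit."""
    gen, count = program(x), 0
    try:
        while True:
            next(gen)
            count += 1
            if count > limit:
                return None
    except StopIteration:
        return count

def halts_in_exactly(program, x, t):
    """Decide 'Phi(program, x) = t' by running at most t + 1 steps."""
    return steps(program, x, limit=t) == t

print(steps(program_double, 5, limit=100))     # 5
print(halts_in_exactly(program_double, 5, 5))  # True
print(halts_in_exactly(program_double, 5, 4))  # False
```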
Complexity classes
For a total computable function $f$, complexity classes of computable functions can be defined as
$C(f) = \{\varphi_i : \varphi_i \text{ is total and } \Phi_i(x) \le f(x) \text{ for all sufficiently large } x\}$ and $C^0(f) = \{h \in C(f) : h \text{ is boolean-valued}\}$. $C(f)$ is the set of all computable functions with a complexity less than $f$. $C^0(f)$ is the set of all boolean-valued functions with a complexity less than $f$. If we consider those functions as indicator functions on sets, $C^0(f)$ can be thought of as a complexity class of sets.
References
Structural complexity theory
Mathematical axioms | Blum axioms | [
"Mathematics"
] | 307 | [
"Mathematical logic",
"Mathematical axioms"
] |
2,392,113 | https://en.wikipedia.org/wiki/Stretched%20tuning | Stretched tuning is a detail of musical tuning, applied to wire-stringed musical instruments, older, non-digital electric pianos (such as the Fender Rhodes piano and Wurlitzer electric piano), and some sample-based synthesizers based on these instruments, to accommodate the natural inharmonicity of their vibrating elements. In stretched tuning, two notes an octave apart, whose fundamental frequencies theoretically have an exact 2:1 ratio, are tuned slightly farther apart (a stretched octave). If the frequency ratios of octaves are greater than a factor of 2, the tuning is stretched; if smaller than a factor of 2, it is compressed.
Melodic stretch refers to tunings with fundamentals stretched relative to each other, while harmonic stretch refers to tunings with harmonics stretched relative to fundamentals which are not stretched. For example, the piano features both stretched harmonics and, to accommodate those, stretched fundamentals.
Fundamentals and harmonics
In most musical instruments, the tone-generating component (a string or resonant column of air) vibrates at many frequencies simultaneously: a fundamental frequency that is usually perceived as the pitch of the note, and harmonics or overtones that are multiples of the fundamental frequency and whose wavelengths therefore divide the tone-generating region into simple fractional segments (1/2, 1/3, 1/4, etc.). (See harmonic series.) The fundamental note and its harmonics sound together, and the amplitude relationships among them strongly affect the perceived tone or timbre of the instrument.
In the acoustic piano, harpsichord, and clavichord, the vibrating element is a metal wire or string; in many non-digital electric pianos, it is a tapered metal tine (Rhodes piano) or reed (Wurlitzer electric piano) with one end clamped and the other free to vibrate. Each note on the keyboard has its own separate vibrating element whose tension and/or length and weight determines its fundamental frequency or pitch. In electric pianos, the motion of the vibrating element is sensed by an electromagnetic pickup and amplified electronically.
Intervals and inharmonicity
In tuning, the relationship between two notes (known musically as an interval) is determined by evaluating their common harmonics. For example, we say two notes are an octave apart when the fundamental frequency of the upper note exactly matches the second harmonic of the lower note. Theoretically, this means the fundamental frequency of the upper note is exactly twice that of the lower note, and we would assume that the second harmonic of the upper note will exactly match the fourth harmonic of the lower note.
On instruments strung with metal wire, however, neither of these assumptions is valid, and inharmonicity is the reason.
Inharmonicity refers to the difference between the theoretical and actual frequencies of the harmonics or overtones of a vibrating tine or string. The theoretical frequency of the second harmonic is twice the fundamental frequency, and of the third harmonic is three times the fundamental frequency, and so on. But on metal strings, tines, and reeds, the measured frequencies of those harmonics are slightly higher, and proportionately more so in the higher than in the lower harmonics. A digital emulation of these instruments must recreate this inharmonicity if it is to sound convincing.
The theory of temperaments in musical tuning does not normally take inharmonicity into account. Inharmonicity varies from instrument to instrument (and from string to string), so in practice the amount present in a particular instrument modifies the theoretical temperament being applied to it.
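As an added illustration (not from the article), the following Python sketch uses the standard stiff-string approximation f_n ≈ n·f₁·√(1 + B·n²) for the frequency of the nth partial; the inharmonicity coefficient B and the fundamental chosen below are assumed, illustrative values, since real values vary by string and instrument.

```python
import math

def partial_freq(f1, n, B):
    """nth partial of a stiff string with fundamental f1, using the
    common approximation f_n = n * f1 * sqrt(1 + B * n**2)."""
    return n * f1 * math.sqrt(1 + B * n * n)

def cents(ratio):
    """Interval size in cents for a frequency ratio."""
    return 1200 * math.log2(ratio)

f1 = 220.0   # lower note's fundamental in Hz (illustrative)
B = 0.0004   # assumed inharmonicity coefficient (varies in practice)

# A tuner matches the upper note's fundamental to the lower note's
# 2nd partial, which stiffness has pushed sharp of the exact 2 * f1.
second_partial = partial_freq(f1, 2, B)
print(f"2nd partial: {second_partial:.2f} Hz; octave stretched by "
      f"{cents(second_partial / (2 * f1)):.2f} cents")
```

With these assumed numbers a single octave comes out stretched by roughly a cent; accumulated over the whole compass, per-octave stretches of this order are consistent with the tens-of-cents totals cited below.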
Vibration of wire strings
When a stretched wire string is excited into motion by plucking or striking, a complex wave travels outward to the ends of the string. As it travels outward, this initial impulse forces the wire out of its resting position all along its length. After the impulse has passed, each part of the wire immediately begins to return toward (and overshoot) its resting position, which means vibration has been induced. Meanwhile, the initial impulse is reflected at both ends of the string and travels back toward the center. On the way, it interacts with the various vibrations it induced on the initial pass, and these interactions reduce or cancel some components of the impulse wave and reinforce others. When the reflected impulses encounter each other, their interaction again cancels some components and reinforces others.
Within a few transits of the string, all these cancellations and reinforcements sort the vibration into an orderly set of waves that vibrate over 1/1, 1/2, 1/3, 1/4, 1/5, 1/6, etc. of the length of the string. These are the harmonics. As a rule, the amplitude of vibration is lower for higher harmonics than for lower ones, meaning that higher harmonics are softer—though the details of this differ from instrument to instrument. The exact combination of different harmonics and their amplitudes is a primary factor affecting the timbre or tone quality of a particular musical tone.
In an ideal plain string, vibration over half the string's length will be twice as fast as its fundamental vibration, vibration over a third will be three times as fast, and so on. In this kind of string, the only force acting to return any part of it to its resting position is the tension between the string's ends. Strings for low and mid-range tones, however, typically consist of a core that is wound with another, thinner piece of wire. This makes them naturally resistant to being bent, adding to the effect of string tension in returning a given part of the string toward its resting position; the result is a comparatively higher frequency of vibration of wound strings. Since rigidity is constant, its effect is greater for shorter wavelengths, i.e. in higher harmonics.
Tines and reeds
Tines and reeds differ from strings in that they are held at one end and free to vibrate at the other. The frequencies of their fundamental and harmonic vibrations are subject to the same inharmonicity as strings. However, because of the comparative thickness of the bars that terminate the tines in an electric piano, the larger (and stronger) vibrations tend to "see" termination points slightly deeper in the bar than do smaller, weaker vibrations. This enhances inharmonicity in tines.
Effects on tuning
Inharmonicity alters harmonics beyond their theoretical frequencies. As the overtone series progresses, each partial becomes proportionally sharper. Thus, in our example of an octave, exactly matching the lowest common harmonic causes a slight amount of stretch; matching the next higher common harmonic causes a greater amount of stretch; and so on. If the interval is two octaves plus a fifth (the favored means of cross-checking the stretch of the upper treble of the piano), exactly matching the upper note to the sixth harmonic of the lowest requires great sophistication of octave stretch to make the lower individual octaves, its double and triple octaves, and their other intervallic relationships sound pure and balanced.
Solving such dilemmas is at the heart of precise tuning by ear, and all solutions involve some stretching of the higher notes upward and the lower notes downward from their theoretical frequencies. In shorter strings (such as on spinet pianos), the wire stiffness in the tenor and bass registers is proportionately high; the resulting higher inharmonicity and octave stretch lead to a generally poorer timbre and force significant compromises in what is considered acceptable tuning. On longer strings, such as on concert grand or even moderately sized grand pianos, this effect is greatly reduced. Online sources suggest that the total amount of "stretch" over the full range of a piano may be on the order of ±35 cents; this also appears in the empirical Railsback curve.
See also
Electronic tuner
Piano acoustics
Piano tuning
References
Further information
Five lectures on the acoustics of the piano
External links
"Octave Types", BillBremmer.com.
Acoustics
Musical tuning | Stretched tuning | [
"Physics"
] | 1,647 | [
"Classical mechanics",
"Acoustics"
] |
2,392,912 | https://en.wikipedia.org/wiki/Peek%27s%20law | In physics, Peek's law defines the electric potential gap necessary for triggering a corona discharge between two wires:

$e_v = m_v g_v r \ln \left( \frac{S}{r} \right)$
ev is the "visual critical corona voltage" or "corona inception voltage" (CIV), the voltage required to initiate a visible corona discharge between the wires. It is named after Frank William Peek (1881–1933).
mv is an irregularity factor to account for the condition of the wires. For smooth, polished wires, mv = 1. For roughened, dirty or weathered wires, mv ranges from 0.98 to 0.93, and for cables from 0.87 to 0.83; surface irregularities thus lower the corona threshold voltage.
r is the radius of the wires in cm.
S is the distance between the centers of the wires.
gv is the "visual critical" electric field, and is given by:

$g_v = g_0 \delta \left( 1 + \frac{c}{\sqrt{\delta r}} \right)$
δ is the air density factor with respect to SATP (25°C and 76 cmHg):

$\delta = \frac{b}{76} \cdot \frac{273 + 25}{273 + t} = \frac{3.92\,b}{273 + t}$

where b is the barometric pressure in cmHg and t is the temperature in °C.
g0 is the "disruptive electric field."
c is an empirical dimensional constant.
The values for the last two parameters are usually considered to be about 30-32 kV/cm (in air) and 0.301 cm½ respectively. This latter law can be considered to hold also in different setups, where the corresponding voltage is different due to geometric reasons.
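A minimal numerical sketch (added, not from the reference) of the formulas as reconstructed above; the function name and default parameter values are illustrative assumptions.

```python
import math

def corona_inception_voltage(r_cm, S_cm, m_v=1.0, b_cmHg=76.0, t_C=25.0,
                             g0=30.0, c=0.301):
    """Visual critical corona voltage (kV) for two parallel wires of
    radius r_cm separated (center to center) by S_cm, per Peek's law."""
    delta = (b_cmHg / 76.0) * (273.0 + 25.0) / (273.0 + t_C)  # air density factor
    g_v = g0 * delta * (1.0 + c / math.sqrt(delta * r_cm))    # visual critical field, kV/cm
    return m_v * g_v * r_cm * math.log(S_cm / r_cm)

# Polished wires (m_v = 1), radius 0.5 cm, spacing 100 cm, at SATP:
print(f"{corona_inception_voltage(0.5, 100.0):.1f} kV")
```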
References
High Voltage Engineering Fundamentals, E.Kuffel and WS Zaengl, Pergamon Press, p366
Plasma physics equations | Peek's law | [
"Physics"
] | 300 | [
"Equations of physics",
"Plasma physics equations"
] |
2,393,371 | https://en.wikipedia.org/wiki/Quazepam | Quazepam, sold under the brand name Doral among others, is a relatively long-acting benzodiazepine derivative drug developed by the Schering Corporation in the 1970s. Quazepam is used for the treatment of insomnia, including sleep induction and sleep maintenance. Quazepam induces impairment of motor function and has relatively (and uniquely) selective hypnotic and anticonvulsant properties with considerably less overdose potential than other benzodiazepines (due to its novel receptor-subtype selectivity). Quazepam is an effective hypnotic which induces and maintains sleep without disruption of the sleep architecture.
It was patented in 1970 and came into medical use in 1985.
Medical uses
Quazepam is used for short-term treatment of insomnia related to sleep induction or sleep maintenance problems and has demonstrated superiority over other benzodiazepines, such as temazepam. It has a lower incidence of side effects than temazepam, including less sedation, amnesia, and motor impairment. The usual dosage is 7.5 to 15 mg orally at bedtime.
Quazepam is effective as a premedication prior to surgery.
Side effects
Quazepam has fewer side effects than other benzodiazepines and less potential to induce tolerance and rebound effects. There is significantly less potential for quazepam to induce respiratory depression or to adversely affect motor coordination than other benzodiazepines. The different side effect profile of quazepam may be due to its more selective binding profile to type 1 benzodiazepine receptors.
Ataxia
Daytime somnolence
Hypokinesia
Cognitive and performance impairments
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Tolerance and dependence
Tolerance may occur to quazepam, but more slowly than seen with other benzodiazepines such as triazolam. Quazepam causes significantly less drug tolerance and fewer withdrawal symptoms, including less rebound insomnia upon discontinuation, compared to other benzodiazepines. Quazepam may cause fewer rebound effects than other type 1 benzodiazepine receptor-selective nonbenzodiazepine drugs due to its longer half-life. Short-acting hypnotics often cause next-day rebound anxiety. Quazepam, due to its pharmacological profile, does not cause next-day rebound withdrawal effects during treatment.
No firm conclusions can be drawn, however, about whether long-term use of quazepam does not produce tolerance, as few, if any, long-term clinical trials extending beyond 4 weeks of chronic use have been conducted. Quazepam should be withdrawn gradually if used beyond 4 weeks of use to avoid the risk of a severe benzodiazepine withdrawal syndrome developing. Very high dosage administration over prolonged periods of time, up to 52 weeks, of quazepam in animal studies provoked severe withdrawal symptoms upon abrupt discontinuation, including excitability, hyperactivity, convulsions, and the death of two of the monkeys due to withdrawal-related convulsions. More deaths occurred, however, among the diazepam-treated monkeys. In addition, it has now been documented in the medical literature that one of the major metabolites of quazepam, N-desalkyl-2-oxoquazepam (N-desalkylflurazepam), which is long-acting and prone to accumulation, binds unselectively to benzodiazepine receptors; thus, quazepam may not differ all that much pharmacologically from other benzodiazepines.
Special precautions
Benzodiazepines require special precaution if used during pregnancy, in children, in alcohol- or drug-dependent individuals, and in individuals with comorbid psychiatric disorders.
Quazepam and its active metabolites are excreted into breast milk.
Accumulation of one of the active metabolites of quazepam, N-desalkylflurazepam, may occur in the elderly. A lower dose may be required for the elderly.
Elderly
Quazepam is more tolerable for elderly patients compared to flurazepam due to its reduced next-day impairments. However, another study showed marked next-day impairments after repeated administration due to the accumulation of quazepam and its long-acting metabolites. Thus, the medical literature shows conflicts on quazepam's side effect profile. A further study showed significant balance impairments combined with an unstable posture after administration of quazepam in test subjects.
An extensive review of the medical literature regarding the management of insomnia and the elderly found that there is considerable evidence of the effectiveness and durability of non-drug treatments for insomnia in adults of all ages and that these interventions are underutilized. Compared with the benzodiazepines, including quazepam, the nonbenzodiazepine sedative/hypnotics appeared to offer few, if any, significant clinical advantages in efficacy or tolerability in elderly persons. It was found that newer agents with novel mechanisms of action and improved safety profiles, such as melatonin agonists, hold promise for the management of chronic insomnia in elderly people. Long-term use of sedative/hypnotics for insomnia lacks an evidence base and has traditionally been discouraged for reasons that include concerns about such potential adverse drug effects as cognitive impairment (anterograde amnesia), daytime sedation, motor incoordination, and increased risk of motor vehicle accidents and falls. In addition, the effectiveness and safety of long-term use of these agents remain to be determined. It was concluded that more research is needed to evaluate the long-term effects of treatment and the most appropriate management strategy for elderly people with chronic insomnia.
Interactions
The absorption rate is likely to be significantly reduced if quazepam is taken in a fasted state, reducing the hypnotic effect of quazepam. If 3 or more hours have passed since eating food, then some food should be eaten before taking quazepam.
Pharmacology
Quazepam is a trifluoroalkyl type of benzodiazepine. Quazepam is unique amongst benzodiazepines in that it selectively targets the GABAA α1 subunit receptors, which are responsible for inducing sleep. Its mechanism of action is very similar to that of zolpidem and zaleplon, and it can successfully substitute for zolpidem and zaleplon in animal studies.
Quazepam is selective for type 1 benzodiazepine receptors containing the α1 subunit, similar to other drugs such as zaleplon and zolpidem. As a result, quazepam has little or no muscle-relaxant properties. Most other benzodiazepines are unselective and bind to type 1 GABAA receptors and type 2 GABAA receptors. Type 1 GABAA receptors include the α1 subunit containing GABAA receptors, which are responsible for the hypnotic properties of the drug. Type 2 receptors include the α2, α3 and α5 subunits, which are responsible for anxiolytic action, amnesia, and muscle relaxant properties. Thus, quazepam may have fewer side effects than other benzodiazepines, but it has a very long half-life of 25 hours, which reduces its benefits as a hypnotic due to likely next day sedation. It also has two active metabolites with half-lives of 28 and 79 hours. Quazepam may also cause less drug tolerance than other benzodiazepines such as temazepam and triazolam, perhaps due to its subtype selectivity. The longer half-life of quazepam may have the advantage, however, of causing less rebound insomnia than shorter-acting subtype-selective nonbenzodiazepines. However, one of the major metabolites of quazepam, the N-desmethyl-2-oxoquazepam (aka N-desalkylflurazepam), binds unselectively to both type 1 and type 2 GABAA receptors. The N-desmethyl-2-oxoquazepam metabolite also has a very long half-life and likely contributes to the pharmacological effects of quazepam.
Pharmacokinetics
Quazepam has an absorption half-life of 0.4 hours with a peak in plasma levels after 1.75 hours. It is eliminated both renally and through feces. The active metabolites of quazepam are 2-oxoquazepam and N-desalkyl-2-oxoquazepam. The N-desalkyl-2-oxoquazepam metabolite has only limited pharmacological activity compared to the parent compound quazepam and the active metabolite 2-oxoquazepam. Quazepam and its major active metabolite 2-oxoquazepam both show high selectivity for the type 1 GABAA receptors. The elimination half-life range of quazepam is between 27 and 41 hours.
Mechanism of action
Quazepam modulates specific GABAA receptors via the benzodiazepine site on the GABAA receptor. This modulation enhances the actions of GABA, causing an increase in the opening frequency of the chloride ion channel, which results in an increased influx of chloride ions into the GABAA receptors. Quazepam, unique amongst benzodiazepine drugs, selectively targets type 1 benzodiazepine receptors, which results in reduced sleep latency and promotion of sleep. Quazepam also has some anticonvulsant properties.
EEG and sleep
Quazepam has potent sleep-inducing and sleep-maintaining properties. Studies in both animals and humans have demonstrated that EEG changes induced by quazepam resemble normal sleep patterns, whereas other benzodiazepines disrupt normal sleep. Quazepam promotes slow-wave sleep. This positive effect of quazepam on sleep architecture may be due to its high selectivity for type 1 benzodiazepine receptors, as demonstrated in animal and human studies. This makes quazepam unique in the benzodiazepine family of drugs.
Drug misuse
Quazepam is a drug with the potential for misuse. Two types of drug misuse can occur: either recreational misuse, where the drug is taken to achieve a high, or when the drug is continued long term against medical advice.
References
Benzodiazepines
Sedatives
Hypnotics
Anticonvulsants
Muscle relaxants
Chloroarenes
2-Fluorophenyl compounds
Thioamides
Trifluoromethyl compounds | Quazepam | [
"Chemistry",
"Biology"
] | 2,343 | [
"Hypnotics",
"Behavior",
"Sleep",
"Functional groups",
"Thioamides"
] |
19,033,652 | https://en.wikipedia.org/wiki/CA%2015-3 | CA 15-3, for Carcinoma Antigen 15-3, is a tumor marker for many types of cancer, most notably breast cancer.
It is derived from MUC1. CA 15-3 and associated CA 27-29 are different epitopes on the same protein antigen product of the breast cancer-associated MUC1 gene.
Elevated CA 15-3, in conjunction with alkaline phosphatase (ALP), was found to be associated with an increased chance of early recurrence in breast cancer.
Both CA 15-3 and CA 27-29 may be elevated in patients with benign ovarian cysts, benign breast disease, and benign liver disease. Elevations may also be seen in cirrhosis, sarcoidosis and lupus.
References
Tumor markers | CA 15-3 | [
"Chemistry",
"Biology"
] | 163 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
11,415,750 | https://en.wikipedia.org/wiki/Diethylaminosulfur%20trifluoride | Diethylaminosulfur trifluoride (DAST) is the organosulfur compound with the formula Et2NSF3. This liquid is a fluorinating reagent used for the synthesis of organofluorine compounds. The compound is colourless; older samples assume an orange colour.
Use in organic synthesis
DAST converts alcohols to the corresponding alkyl fluorides, and converts aldehydes and unhindered ketones to geminal difluorides. Carboxylic acids react no further than the acyl fluoride. Sulfur tetrafluoride, SF4, effects the same transformations but will also convert the acyl fluoride to the trifluoromethyl derivative. For laboratory-scale operations, DAST is used in preference to SF4, which is far less expensive but less easily handled. Morpho-DAST is a slightly more thermally stable analogue. Acid-labile substrates are less likely to undergo rearrangement and elimination since DAST is less prone to contamination with acids. Reaction temperatures are milder as well – alcohols typically react at −78 °C and ketones around 0 °C.
Synthesis
DAST is prepared by the reaction of diethylaminotrimethylsilane and sulfur tetrafluoride:
Et2NSiMe3 + SF4 → Et2NSF3 + Me3SiF
The original paper calls for trichlorofluoromethane (Freon-11) as a solvent. Diethyl ether is a green alternative that can be used with no decrease in yield. Because of the dangers involved in the preparation of DAST (glass etching, possibility of exothermic events), it is often purchased from a commercial source. At one time Carbolabs was one of the few suppliers of the chemical but a number of companies now sell DAST. Carbolabs was acquired by Sigma-Aldrich in 1998.
Safety and alternative reagents
Upon heating, DAST converts to the highly explosive (NEt2)2SF2 with expulsion of sulfur tetrafluoride. To minimize accidents, samples are maintained below 50 °C. Bis-(2-methoxyethyl)aminosulfur trifluoride (trade name: Deoxo-Fluor) and difluoro(morpholino)sulfonium tetrafluoroborate (trade name: XtalFluor-M) are reagents related to DAST with less explosive potential.
XtalFluor-E was jointly developed by OmegaChem Inc. and Manchester Organics Ltd. in 2009–2010.
See also
Ishikawa reagent
Fluorination with aminosulfuranes
References
Organofluorides
Reagents for organic chemistry
Fluorinating agents
Diethylamino compounds
Organosulfur compounds
Sulfur–nitrogen compounds | Diethylaminosulfur trifluoride | [
"Chemistry"
] | 600 | [
"Organic compounds",
"Fluorinating agents",
"Organosulfur compounds",
"Reagents for organic chemistry"
] |
11,415,890 | https://en.wikipedia.org/wiki/Edmonds%20matrix | In graph theory, the Edmonds matrix $A$ of a balanced bipartite graph $G = (U, V, E)$, with vertex sets $U = \{u_1, u_2, \dots, u_n\}$ and $V = \{v_1, v_2, \dots, v_n\}$, is defined by

$A_{ij} = \begin{cases} x_{ij} & \text{if } (u_i, v_j) \in E \\ 0 & \text{otherwise} \end{cases}$

where the $x_{ij}$ are indeterminates. One application of the Edmonds matrix of a bipartite graph is that the graph admits a perfect matching if and only if the polynomial $\det(A)$ in the $x_{ij}$ is not identically zero. Furthermore, the number of perfect matchings is equal to the number of monomials in the polynomial $\det(A)$, and is also equal to the permanent of $A$. In addition, the rank of $A$ is equal to the maximum matching size of $G$.
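The following added sketch (not from the article) shows the standard randomized use of this fact: substitute random residues modulo a prime for the indeterminates and test whether the determinant vanishes; by the Schwartz–Zippel lemma, a nonzero determinant certifies a perfect matching with high probability. Function names are illustrative.

```python
import random

def det_mod_p(M, p):
    """Determinant of a square matrix over GF(p) by Gaussian elimination."""
    n = len(M)
    M = [row[:] for row in M]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)  # modular inverse, p prime
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - f * M[col][c]) % p
    return det % p

def likely_has_perfect_matching(n, edges, p=(1 << 31) - 1, trials=5):
    """Edges are pairs (i, j), 0 <= i, j < n, joining u_i to v_j.
    One-sided error: True is always correct; False is wrong only with
    probability at most (n / p) ** trials by Schwartz-Zippel."""
    for _ in range(trials):
        A = [[0] * n for _ in range(n)]
        for i, j in edges:
            A[i][j] = random.randrange(1, p)  # random value for x_ij
        if det_mod_p(A, p):
            return True
    return False

# 3-by-3 example with the perfect matching (0,0), (1,1), (2,2):
print(likely_has_perfect_matching(3, [(0, 0), (0, 1), (1, 1), (2, 2)]))
```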
The Edmonds matrix is named after Jack Edmonds. The Tutte matrix is a generalisation to non-bipartite graphs.
References
Algebraic graph theory | Edmonds matrix | [
"Mathematics"
] | 157 | [
"Graph theory stubs",
"Graph theory",
"Mathematical relations",
"Algebra",
"Algebraic graph theory"
] |
11,419,890 | https://en.wikipedia.org/wiki/PYTHIA | PYTHIA is a computer simulation program for predicting events at very high energies in particle accelerators.
History
PYTHIA was originally written in FORTRAN 77; with the 2007 release of PYTHIA 8.1 it was rewritten in C++. Both the Fortran and C++ versions were maintained until 2012 because not all components had been merged into the 8.1 version. The latest C++ version, however, includes new features not available in the Fortran release. PYTHIA is developed and maintained by an international collaboration of physicists, consisting of Christian Bierlich, Nishita Desai, Leif Gellersen, Ilkka Helenius, Philip Ilten, Leif Lönnblad, Stephen Mrenna, Stefan Prestel, Christian Preuss, Torbjörn Sjöstrand, Peter Skands, Marius Utheim and Rob Verheyen.
Features
The following is a list of some of the features PYTHIA is capable of simulating:
Hard and soft interactions
Parton distributions
Initial/final-state parton showers
Multiparton interactions
Fragmentation and decay
See also
Particle physics
Particle decay
References
Further reading
External links
The official PYTHIA page
Monte Carlo particle physics software
Physics software
Software that was rewritten in C++ | PYTHIA | [
"Physics"
] | 259 | [
"Computational physics",
"Particle physics",
"Particle physics stubs",
"Computational physics stubs",
"Physics software"
] |
11,420,641 | https://en.wikipedia.org/wiki/5.8S%20ribosomal%20RNA | In molecular biology, the 5.8S ribosomal RNA (5.8S rRNA) is a non-coding RNA component of the large subunit of the eukaryotic ribosome and so plays an important role in protein translation. It is transcribed by RNA polymerase I as part of the 45S precursor that also contains 18S and 28S rRNA. Its function is thought to be in ribosome translocation. It is also known to form covalent linkage to the p53 tumour suppressor protein. 5.8S rRNA can be used as a reference gene for miRNA detection. The 5.8S ribosomal RNA is used to better understand other rRNA processes and pathways in the cell.
The 5.8S rRNA is homologous to the 5' end of non-eukaryotic LSU rRNA. In eukaryotes, the insertion of ITS2 breaks LSU rRNA into 5.8S and 28S rRNAs. Some flies have their 5.8S rRNA further split into two pieces.
Structure
The 5.8S rRNA is approximately 150 nucleotides in size and consists of many folded strands, some of which are presumed to be single-stranded.
This ribosomal RNA, along with the 28S and 5S rRNA as well as 46 ribosomal proteins, forms the ribosomal large subunit (LSU).
The 5.8S rRNA is initially transcribed along with the 18S and 28S rRNA in the 45S preribosomal RNA, along with the ITS 1 and ITS 2 (Internal transcribed spacer) and a 5’ and 3’ ETS (External transcribed spacer). The 5.8S rRNA is located between the two ITS regions, with ITS1 separating it from the 18S rRNA in the 5' direction, and ITS2 separating it from the 28S rRNA in the 3' direction. The ITS and ETS are cleaved away during rRNA maturation. This is accomplished through a continuous cleavage pathway performed by both endonuclease and exonuclease enzymes, cutting the spacers at specific locations.
References
External links
Arabidopsis 5.8S rRNA sequence
Rice 5.8S rRNA sequence
Ribosomal RNA | 5.8S ribosomal RNA | [
"Chemistry"
] | 473 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,420,921 | https://en.wikipedia.org/wiki/Hepatitis%20delta%20virus%20ribozyme | The hepatitis delta virus (HDV) ribozyme is a non-coding RNA found in the hepatitis delta virus that is necessary for viral replication. Hepatitis delta virus is the only known human virus that utilizes ribozyme activity to infect its host. The ribozyme acts to process the RNA transcripts to unit lengths in a self-cleavage reaction during replication of the hepatitis delta virus, which is thought to propagate by a double rolling circle mechanism. The ribozyme is active in vivo in the absence of any protein factors and was the fastest known naturally occurring self-cleaving RNA at the time of its discovery.
The crystal structure of this ribozyme has been solved using X-ray crystallography and shows five helical segments connected by a double pseudoknot.
In addition to the sense (genomic version), all HDV viruses also have an antigenomic version of the HDV ribozyme. This version is not the exact complementary sequence but adopts the same structure as the sense (genomic) strand. The only "significant" differences between the two are a small bulge in P4 stem and a shorter J4/2 junction. Both the genomic and antigenomic ribozymes are necessary for replication.
HDV-like ribozymes
The HDV ribozyme is structurally and biochemically related to many other self-cleaving ribozymes. These other ribozymes are often referred to as examples of HDV ribozymes, because of these similarities, even though they are not found in hepatitis delta viruses. They can also be referred to as "HDV-like" to indicate this fact.
HDV-like ribozymes include the mammalian CPEB3 ribozyme, theta ribozymes in bacteriophages, retrotransposon members (e.g. in the R2 RNA element in insects and in the L1Tc and probably other retrotransposons in trypanosomatids) and sequences from bacteria. The grouping is probably a result of convergent evolution: Deltavirus found outside of humans also possess a DV ribozyme, and no proposed horizontal gene transfer scenario can yet explain this.
Mechanism of catalysis
The HDV ribozyme catalyzes cleavage of the phosphodiester bond between the substrate nucleotide or oligonucleotide and the 5′-hydroxyl of the ribozyme. In the hepatitis delta virus, this substrate nucleotide sequence begins with uridine and is known as U(-1); however, the identity of the -1 nucleotide does not significantly change the rate of catalysis. The only requirement concerns its chemical nature: as shown by Perrotta and Been, substitution of the U(-1) ribose with deoxyribose abolishes the reaction, consistent with the prediction that the 2′-hydroxyl is the nucleophile in the chemical reaction. Hence, unlike many other ribozymes, such as the hammerhead ribozyme, the HDV ribozyme has no upstream requirements for catalysis and requires only a single -1 ribonucleotide as a substrate to efficiently react.
Initially, it was believed that the 75th nucleotide in the ribozyme, a cytosine known as C75, was able to act as a general base with the N3 of C75 abstracting a proton from the 2′-hydroxyl of the U(-1) nucleotide to facilitate nucleophilic attack on the phosphodiester bond. However, although it is well established that the N3 of C75 has a pKa perturbed from its normal value of 4.45 and is closer to about 6.15 or 6.40, it is not neutral enough to act as a general base catalyst. Instead, the N3 of C75 is believed to act as a Lewis acid to stabilize the leaving 5′-hydroxyl of the ribozyme; this is supported by its proximity to the 5′-hydroxyl in the crystal structure. Substitution of the C75 nucleotide with any other nucleotide abolishes or substantially impairs ribozyme activity, although this activity can be partially restored with imidazole, further implicating C75 in catalytic activity.
The C75 in the HDV ribozyme has been the subject of several studies because of its peculiar pKa. The typical pKa values for the free nucleoside are around 3.5 to 4.2; values this acidic make it unlikely that the base would become basic. However, it is likely that the structural environment within the ribozyme, which includes a desolvated active site cleft, provides negative electrostatic potential that could perturb the pKa of cytosine enough to act as a Lewis acid. In addition to Lewis acid stabilization of the 5′-hydroxyl leaving group, it is also now accepted that the HDV ribozyme can use a metal ion to assist in activation of the 2′-hydroxyl for attack on the U(-1) nucleotide. A magnesium ion in the active site of the ribozyme is coordinated to the 2′-hydroxyl nucleophile and an oxygen of the scissile phosphate, and may act as a Lewis acid to activate the 2′-hydroxyl. In addition, it is possible that the phosphate of U23 can act as a general base to accept a proton from the 2′-hydroxyl, with the magnesium serving as a coordinating ion. Because the HDV ribozyme does not require metal ions to have activity, it is not an obligate metalloenzyme, but the presence of magnesium in the active site significantly improves the cleavage reaction. The HDV ribozyme does seem to have a nonspecific requirement for low amounts of divalent cations to fold, being active in Mg2+, Ca2+, Mn2+, and Sr2+. In the absence of metal ions, it seems likely that water can replace the role of magnesium as a Lewis acid.
Regulation by upstream RNA
Because of the rapid self-cleaving nature of the HDV ribozyme, earlier ribonuclease experiments were performed on the 3′ product of self-cleavage rather than the precursor. However, flanking sequence is known to participate in regulating the self-cleavage activity of the HDV ribozyme. Therefore, the upstream sequence 5′ to the self-cleavage site has been incorporated to study the resultant self-cleavage activity of the HDV ribozyme. Two alternative structures have been identified.
The first inhibitory structure is folded by an extended transcript (i.e. the -30/99 transcript; coordinates are referenced against the self-cleavage site) spanning from 30 nt upstream of the cleavage site to 15 nt downstream of the 3′-end. The flanking sequence sequesters the ribozyme in a kinetic trap during transcription and results in an extremely diminished self-cleavage rate. This self-cleavage-preventing structure includes 3 alternative stems: Alt1, Alt2 and Alt3, which disrupt the active conformation. Alt1 is a 10-bp long-range interaction formed by an inhibitory upstream stretch (-25/-15 nt) and the downstream stretch (76/86 nt). Alt1 disrupts stem P2 in the active conformation, wherein P2 is proposed to have an activating role for both the genomic and antigenomic ribozymes. Alt2 is an interaction between upstream flanking sequence and the ribozyme, and Alt3 is a nonnative ribozyme-ribozyme interaction.
The secondary structure of this inhibitory conformation is supported by various experimental approaches. First, direct probing via ribonucleases was performed, and subsequent modeling via mfold 3.0 using constraints from the probing results agrees with the proposed structure. Second, a series of DNA oligomers complementary to different regions of AS1/2 were used to rescue the ribozyme activity; the results confirm the inhibitory roles of AS1/2. Third, mutational analysis introduced single/double mutations outside the ribozyme to ensure the observed ribozyme activity is directly associated with the stability of Alt1. The stability of AS1 is found to be inversely related to the self-cleavage activity.
The second permissive structure enables the HDV ribozyme to self-cleave co-transcriptionally and this structure further includes the -54/-18 nt portion of the RNA transcript. The upstream inhibitory -24/-15 stretch from the aforementioned inhibitory conformation is now sequestered in a hairpin P(-1) located upstream of the cleavage site. The P(-1) motif, however, is only found in the genomic sequence, which may be correlated with the phenomenon that genomic HDV RNA copies are more abundant in the infected liver cells. Experimental evidence also supports this alternative structure. First, structural mapping via ribonuclease is used to probe the -54/-1 fragment instead of the whole precursor transcript due to the fast-cleaving nature of this structure, which agrees with the local hairpin P(-1) (between -54/-40 and -18/-30 nt). Secondly, evolutionary conservation is found in P(-1) and the linking region between P(-1) and P1 among 21 genomic HDV RNA isolates.
Use in RNA transcript preparation
The special properties of the HDV ribozyme's cleavage reaction make it a useful tool for preparing RNA transcripts with homogeneous 3′ ends, as an alternative to direct transcription with T7 RNA polymerase, which can often produce heterogeneous ends or undesired additions. The cDNA version of the ribozyme may be placed adjacent to the cDNA of the target RNA sequence and RNA prepared by transcription with T7 RNA polymerase. The ribozyme sequence will efficiently cleave itself with no downstream requirements, as the -1 nucleotide is invariant, leaving a 2′–3′ cyclic phosphate that can easily be removed by treatment with a phosphatase or T4 polynucleotide kinase. The target RNA can then be purified by gel purification.
References
External links
Subviral RNA database entry for HDV ribozyme
Non-coding RNA
Ribozymes
RNA splicing | Hepatitis delta virus ribozyme | [
"Chemistry"
] | 2,180 | [
"Catalysis",
"Ribozymes"
] |
11,420,925 | https://en.wikipedia.org/wiki/Hepatitis%20E%20virus%20cis-reactive%20element | The hepatitis E virus cis-reactive element is an RNA element that is believed to be essential for "some step in gene expression". The mutation of this element resulted in hepatitis E strains which were unable to infect rhesus macaques (Macaca mulatta).
References
External links
Cis-regulatory RNA elements
Hepeviridae | Hepatitis E virus cis-reactive element | [
"Chemistry"
] | 71 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,042 | https://en.wikipedia.org/wiki/Pyrococcus%20C/D%20box%20small%20nucleolar%20RNA | In molecular biology, Pyrococcus C/D box small nucleolar RNAs are non-coding RNA (ncRNA) molecules identified in the archaeal genus Pyrococcus which function in the modification of ribosomal RNA (rRNA) and transfer RNA (tRNA). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of ribosomal RNA and snRNA biogenesis, but there is no corresponding visible structure in archaeal cells. This group of ncRNAs are known as small nucleolar RNAs (snoRNA) and are also often referred to as guide RNAs because they direct associated protein enzymes to add a modification to specific nucleotides in target RNAs. C/D box RNAs guide the addition of a methyl group (-CH3) to the 2'-O position in the RNA backbone.
Computational screens of archaeal genomes have identified C/D box snoRNAs in a number of genomes. In particular 46 small RNAs were identified to be conserved in the genomes of three hyperthermophile Pyrococcus species.
References
External links
snoRNAdb
Small nuclear RNA | Pyrococcus C/D box small nucleolar RNA | [
"Chemistry"
] | 254 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,126 | https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%20II%20IRES | The insulin-like growth factor II (IGF-II) internal ribosome entry site IRES is found in the 5' UTR of IGF-II leader 2 mRNA. This RNA element allows cap-independent translation of the mRNA and it is thought that this family may facilitate a continuous IGF-II production in rapidly dividing cells during development. Ribosomal scanning on human insulin-like growth factor II (IGF-II) mRNA is hard to account for, owing to an open reading frame in the leader and the transcript's ability to fold into a stable structure.
References
External links
IRESite page for IGF2 leader2
Cis-regulatory RNA elements | Insulin-like growth factor II IRES | [
"Chemistry"
] | 132 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,139 | https://en.wikipedia.org/wiki/IS102%20RNA | The IS102 RNA is a non-coding RNA that is found in bacteria such as Shigella flexneri and Escherichia coli. The RNA is 208 nucleotides in length and found between the yeeP and flu genes. This RNA was identified in a computational screen of E. coli. The function of this RNA is unknown.
See also
IS061 RNA
IS128 RNA
References
External links
Non-coding RNA | IS102 RNA | [
"Chemistry"
] | 92 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,151 | https://en.wikipedia.org/wiki/Kaposi%27s%20sarcoma-associated%20herpesvirus%20internal%20ribosome%20entry%20site%20%28IRES%29 | This family represents the Kaposi's sarcoma-associated herpesvirus (KSHV) internal ribosome entry site (IRES) present in the vCyclin gene. The vCyclin and vFLIP coding sequences are present on a bicistronic transcript and it is thought the IRES may initiate translation of vFLIP from this bicistronic transcript.
References
External links
Cis-regulatory RNA elements | Kaposi's sarcoma-associated herpesvirus internal ribosome entry site (IRES) | [
"Chemistry"
] | 91 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,162 | https://en.wikipedia.org/wiki/Lysine%20riboswitch | The Lysine riboswitch is a metabolite-binding RNA element found within certain messenger RNAs that serves as a precision sensor for the amino acid lysine. Allosteric rearrangement of mRNA structure is mediated by ligand binding, and this results in modulation of gene expression. Lysine riboswitches are most abundant in Bacillota and Gammaproteobacteria, where they are found upstream of a number of genes involved in lysine biosynthesis, transport and catabolism. The lysine riboswitch has also been identified independently and called the L box.
The lysine riboswitch controls metabolic pathways of lysine biosynthesis. In particular, the metabolic flux of the tricarboxylic acid (TCA) cycle is effectively controlled by the riboswitch. Controlling metabolic flux is imperative for the growth of microorganisms, and the use of the lysine riboswitch in its host bacterium allows more effective control strategies than various expensive and difficult methods such as gene knockouts. With lysine as an intracellular signal, the riboswitch regulates gene expression in response to specific metabolites. The lysine riboswitch was first investigated in Bacillus subtilis, located at the 5’UTR of the lysC gene coding for aspartokinase. It has since been found in E. coli (ECRS), where it inhibits translation of aspartokinase III and accelerates mRNA decay. In both E. coli and Bacillus subtilis, the lysine riboswitch controls the production of citrate synthase, and therefore metabolic flux in the TCA cycle, as decreases in citrate synthase activity contribute to increases in lysine production. Control of TCA cycle activity thus affects the biosynthesis of lysine, indicating a higher metabolic flux into the lysine synthesis pathway. It has no inhibitory effect on transcription except in Bacillus subtilis. The ligand-binding domain of the riboswitch binds L-lysine.
Structure
The structure of the lysine riboswitch has recently been determined. The lysine amino acid is bound in the pocket formed by the 5-way junction. The structure is composed of a three helical bundle and a two helical bundle joined by the 5-way junction. Helices 1 and 2 are stacked in a colinear fashion as are helices 4 and 5.
See also
Glycine riboswitch
Glutamine riboswitch
References
External links
PDB entry for the lysine riboswitch tertiary structure
Cis-regulatory RNA elements
Riboswitch | Lysine riboswitch | [
"Chemistry"
] | 591 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,326 | https://en.wikipedia.org/wiki/Mnt%20IRES | The Mnt internal ribosome entry site (IRES) is an RNA element. Mnt is a transcriptional repressor related to the Myc/Mad family of transcription factors. It is thought that this IRES allows efficient Mnt synthesis when cap-dependent translation initiation is reduced.
See also
N-myc IRES
Tobamovirus IRES
TrkB IRES
References
External links
Cis-regulatory RNA elements | Mnt IRES | [
"Chemistry"
] | 89 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,332 | https://en.wikipedia.org/wiki/N-myc%20internal%20ribosome%20entry%20site%20%28IRES%29 | The N-myc internal ribosome entry site (IRES) is an RNA element found in the n-myc gene. The myc family of genes when expressed are known to be involved in the control of cell growth, differentiation and apoptosis. n-myc mRNA has an alternative method of translation via an internal ribosome entry site where ribosomes are recruited to the IRES located in the 5' UTR thus bypassing the typical eukaryotic cap-dependent translation pathway.
See also
Mnt IRES
Tobamovirus IRES
TrkB IRES
References
External links
Cis-regulatory RNA elements | N-myc internal ribosome entry site (IRES) | [
"Chemistry"
] | 132 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,344 | https://en.wikipedia.org/wiki/Pestivirus%20internal%20ribosome%20entry%20site%20%28IRES%29 | This family represents the internal ribosome entry site (IRES) of the pestiviruses. The pestivirus IRES allows cap- and end-independent translation of mRNA in the host cell. The IRES achieves this by mediating internal initiation of translation: it recruits a ribosomal 43S pre-initiation complex directly to the initiation codon, eliminating the requirement for the eukaryotic initiation factor eIF4F. The classical swine fever virus UTR described appears to be longer at the 5' end than other pestivirus UTRs. This family represents the conserved core.
References
External links
Cis-regulatory RNA elements
Internal ribosome entry site | Pestivirus internal ribosome entry site (IRES) | [
"Chemistry"
] | 140 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,347 | https://en.wikipedia.org/wiki/Picornavirus%20internal%20ribosome%20entry%20site%20%28IRES%29 | This family represents the Picornavirus internal ribosome entry site (IRES) element present in their 5' untranslated region. These elements were discovered in picornaviruses. They are cis-acting RNA sequences that adopt diverse three-dimensional structures, recruit the translation machinery and often operate in association with specific RNA-binding proteins. IRES elements allow cap- and end-independent translation of mRNA in the host cell. It has been found that La autoantigen (La) is required for Coxsackievirus B3 (CVB3) IRES-mediated translation, and it has been suggested that La may be required for the efficient translation of the viral RNA in the pancreas. Based on their secondary structure, picornavirus IRES are grouped into four main types. Type I comprises entero- and rhinovirus IRES and type II, those of cardio- and aphthovirus, among others. Type III is used to name the hepatitis A IRES.
References
External links
http://rfam.xfam.org/family/RF00229 Picornavirus internal ribosome entry site (IRES)
Cis-regulatory RNA elements
Internal ribosome entry site
Picornaviridae | Picornavirus internal ribosome entry site (IRES) | [
"Chemistry"
] | 258 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,411 | https://en.wikipedia.org/wiki/Ribosomal%20protein%20L10%20leader | This family is a putative ribosomal protein leader autoregulatory structure found in B. subtilis and other low-GC Gram-positive bacteria. It is located in the 5′ untranslated regions of mRNAs encoding ribosomal proteins L10 and L12 (rplJ-rplL). A Rho-independent transcription terminator structure that is probably involved in regulation is included at the 3′ end.
Other ribosomal protein leaders identified in the same study include those of L13, L19, L20 and L21.
References
External links
Ribosomal protein leader | Ribosomal protein L10 leader | [
"Chemistry"
] | 124 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,418 | https://en.wikipedia.org/wiki/Ribosomal%20protein%20L19%20leader | L19 ribosomal protein leaders take part in ribosome biogenesis. They act as an autoregulatory mechanism to control the concentration of ribosomal protein L19, and are located in the 5′ untranslated regions of mRNAs encoding ribosomal protein L19 (rplS).
L19 ribosomal protein leaders have been bioinformatically predicted in B. subtilis and other low-GC Gram-positive bacteria in the phylum Bacillota.
More examples that share a similar structure were predicted in Flavobacteria, also using bioinformatic approaches.
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | Ribosomal protein L19 leader | [
"Chemistry"
] | 143 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,431 | https://en.wikipedia.org/wiki/RNAI | RNAI is a non-coding RNA that is an antisense repressor of the replication of some E. coli plasmids, including ColE1. Plasmid replication is usually initiated by RNAII, which acts as a primer by binding to its template DNA. The complementary RNAI binds RNAII, preventing it from initiating replication. The rate of degradation of RNAI is therefore a major factor in the control of plasmid replication. This rate of degradation is aided by the pcnB (plasmid copy number B) gene product, which polyadenylates the 3' end of RNAI, targeting it for degradation by PNPase.
References
Further reading
External links
Antisense RNA | RNAI | [
"Chemistry"
] | 149 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,446 | https://en.wikipedia.org/wiki/RNase%20E%205%E2%80%B2%20UTR%20element | In molecular biology, the RNase E 5′ UTR element is a cis-acting element located in the 5′ UTR of ribonuclease (RNase) E messenger RNA (mRNA).
RNase E is a key regulatory enzyme in the pathway of mRNA degradation in Escherichia coli. It is able to auto-regulate the degradation of its own mRNA in response to changes in RNase E activity. This rne 5′ UTR element acts as a sensor of cellular RNase E concentration enabling tight regulation of RNase E concentration and synthesis.
See also
Degradosome
References
External links
Cis-regulatory RNA elements | RNase E 5′ UTR element | [
"Chemistry"
] | 132 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,453 | https://en.wikipedia.org/wiki/Rotavirus%20cis-acting%20replication%20element | This family represents a rotavirus cis-acting replication element (CRE) found at the 3'-end of rotavirus mRNAs. The family is thought to promote the synthesis of minus strand RNA to form viral dsRNA.
References
External links
Cis-regulatory RNA elements | Rotavirus cis-acting replication element | [
"Chemistry"
] | 58 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,460 | https://en.wikipedia.org/wiki/RsmY%20RNA%20family | The rsmY RNA family is a set of related non-coding RNA genes, that like RsmZ, is regulated by the GacS/GacA signal transduction system in the plant-beneficial soil bacterium and biocontrol model organism Pseudomonas fluorescens CHA0. GacA/GacS target genes are translationally repressed by the small RNA binding protein RsmA. RsmY and RsmZ RNAs bind RsmA to relieve this repression and so enhance secondary metabolism and biocontrol traits.
Studies in Legionella pneumophila have shown that the ncRNAs RsmY and RsmZ together with the proteins LetA and CsrA are involved in a regulatory cascade. Also, it appears that these ncRNAs are regulated by RpoS sigma-factor.
See also
CsrB/RsmB RNA family
CsrC RNA family
PrrB/RsmZ RNA family
RsmX
RsmW sRNA
CsrA protein
References
Further reading
External links
Pfam page for the CsrA protein family
Non-coding RNA | RsmY RNA family | [
"Chemistry"
] | 215 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,475 | https://en.wikipedia.org/wiki/RydC%20RNA | RydC is a bacterial non-coding RNA. RydC is thought to regulate a mRNA, yejABEF, which encodes an ABC transporter protein. RydC is known to bind the Hfq protein, which causes a conformational change in the RNA molecule. The Hfq/RydC complex is then thought to bind to the target mRNA and induce its degradation.
See also
RyfA RNA
RydB RNA
RybB RNA
References
Further reading
External links
Non-coding RNA | RydC RNA | [
"Chemistry"
] | 101 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,508 | https://en.wikipedia.org/wiki/SgrS%20RNA | SgrS (sugar transport-related sRNA, previously named ryaA) is a 227 nucleotide small RNA that is activated by SgrR in Escherichia coli during glucose-phosphate stress. The nature of glucose-phosphate stress is not fully understood, but is correlated with intracellular accumulation of glucose-6-phosphate. SgrS helps cells recover from glucose-phosphate stress by base pairing with ptsG mRNA (encoding the glucose transporter) and causing its degradation in an RNase E dependent manner. Base pairing between SgrS and ptsG mRNA also requires Hfq, an RNA chaperone frequently required by small RNAs that affect their targets through base pairing. The inability of cells expressing sgrS to create new glucose transporters leads to less glucose uptake and reduced levels of glucose-6-phosphate. SgrS is an unusual small RNA in that it also encodes a 43 amino acid functional polypeptide, SgrT, which helps cells recover from glucose-phosphate stress by preventing glucose uptake. The activity of SgrT does not affect the levels of ptsG mRNA or PtsG protein. It has been proposed that SgrT exerts its effects through regulation of the glucose transporter, PtsG.
SgrS was originally discovered in E. coli but homologues have since been identified in other Gammaproteobacteria such as Salmonella enterica and members of the genus Citrobacter. A comparative genomics based target prediction approach that employs these homologs, has been developed and was used to predict the SgrS target, ptsI (b2416), which was subsequently verified experimentally.
References
Further reading
External links
sRNAmap page for SgrS RNA
Vanderpool Lab
Aiba Lab
Non-coding RNA | SgrS RNA | [
"Chemistry"
] | 359 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
11,421,646 | https://en.wikipedia.org/wiki/Prime%20k-tuple | In number theory, a prime k-tuple is a finite collection of values (a_1, a_2, …, a_k) representing a repeatable pattern of differences between prime numbers. For a k-tuple (a_1, a_2, …, a_k), the positions where the k-tuple matches a pattern in the prime numbers are given by the set of integers n such that all of the values n + a_1, n + a_2, …, n + a_k are prime. Typically the first value in the k-tuple is 0 and the rest are distinct positive even numbers.
Named patterns
Several of the shortest k-tuples are known by other common names:
OEIS sequence covers 7-tuples (prime septuplets) and contains an overview of related sequences, e.g. the three sequences corresponding to the three admissible 8-tuples (prime octuplets), and the union of all 8-tuples. The first term in these sequences corresponds to the first prime in the smallest prime constellation shown below.
Admissibility
In order for a k-tuple to have infinitely many positions at which all of its values are prime, there cannot exist a prime p such that the tuple includes every different possible value modulo p. If such a prime p existed, then no matter which value of n was chosen, one of the values formed by adding n to the tuple would be divisible by p, so the only possible placements would have to include p itself, and there are at most k of those. For example, the numbers in a k-tuple cannot take on all three values 0, 1, and 2 modulo 3; otherwise the resulting numbers would always include a multiple of 3 and therefore could not all be prime unless one of the numbers is 3 itself.
A k-tuple that includes every possible residue modulo p is said to be inadmissible modulo p. It should be obvious that this is only possible when p ≤ k. A tuple which is not inadmissible modulo any prime is called admissible.
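A short added sketch (not part of the article) that applies the definition directly; only primes p ≤ k need checking, since a k-tuple cannot cover all residues of a larger prime.

```python
def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_admissible(tup):
    """A k-tuple is admissible iff, for every prime p <= k, its values
    miss at least one residue class modulo p."""
    for p in primes_upto(len(tup)):
        if len({a % p for a in tup}) == p:
            return False  # inadmissible modulo p
    return True

print(is_admissible((0, 2)))        # True  (twin primes)
print(is_admissible((0, 2, 4)))     # False (covers 0, 1, 2 modulo 3)
print(is_admissible((0, 2, 6, 8)))  # True  (prime quadruplets)
```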
It is conjectured that every admissible -tuple matches infinitely many positions in the sequence of prime numbers. However, there is no tuple for which this has been proven except the trivial 1-tuple (0). In that case, the conjecture is equivalent to the statement that there are infinitely many primes. Nevertheless, Yitang Zhang proved in 2013 that there exists at least one 2-tuple which matches infinitely many positions; subsequent work showed that such a 2-tuple exists with values differing by 246 or less that matches infinitely many positions.
Positions matched by inadmissible patterns
Although (0, 2, 4) is inadmissible modulo 3, it does produce the single set of primes (3, 5, 7).
A match of a tuple that is inadmissible modulo a prime p must place p itself among its values, so such a tuple can match in at most a handful of positions. For example, if the tuple contains both even and odd numbers (i.e. is inadmissible modulo 2), any match must include the prime 2 itself; if the tuple contains only even numbers but is inadmissible modulo 3, any match must include the prime 3 itself.
Inadmissible k-tuples can have more than one all-prime solution if they are admissible modulo 2 and 3, and inadmissible modulo a larger prime p. This of course implies that there must be at least five numbers in the tuple. The shortest inadmissible tuple with more than one solution is the 5-tuple (0, 2, 8, 14, 26), which has two solutions: (3, 5, 11, 17, 29) and (5, 7, 13, 19, 31), where all values mod 5 are included in both cases. Examples with three or more solutions also exist.
Prime constellations
The diameter of a k-tuple is the difference of its largest and smallest elements. An admissible prime k-tuple with the smallest possible diameter d(k) (among all admissible k-tuples) is a prime constellation. For sufficiently large n, a match of a prime constellation will always consist of consecutive primes. (Recall that the matches are the integers n for which all of the values n + a_1, …, n + a_k are prime.)
This means that, for large n:

$p_{n+k-1} - p_n \ge d(k)$

where $p_n$ is the nth prime number.
The first few prime constellations are:

k = 2: (0, 2) (diameter 2)
k = 3: (0, 2, 6) and (0, 4, 6) (diameter 6)
k = 4: (0, 2, 6, 8) (diameter 8)
k = 5: (0, 2, 6, 8, 12) and (0, 4, 6, 10, 12) (diameter 12)
k = 6: (0, 4, 6, 10, 12, 16) (diameter 16)
The diameter d(k) as a function of k is sequence A008407 in the OEIS.
A prime constellation is sometimes referred to as a prime -tuplet, but some authors reserve that term for instances that are not part of longer -tuplets.
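Another added sketch: listing the positions at which a tuple matches, reusing primes_upto from the admissibility snippet above. The printed values for the quadruplet constellation (0, 2, 6, 8) can be checked by hand.

```python
def matches(tup, limit):
    """Positions n <= limit at which n + a is prime for every a in tup."""
    prime_set = set(primes_upto(limit + max(tup)))
    return [n for n in range(2, limit + 1)
            if all(n + a in prime_set for a in tup)]

print(matches((0, 2, 6, 8), 200))  # [5, 11, 101, 191]
```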
The first Hardy–Littlewood conjecture predicts that the asymptotic frequency of any prime constellation can be calculated. While the conjecture is unproven it is considered likely to be true. If that is the case, it implies that the second Hardy–Littlewood conjecture, in contrast, is false.
Prime arithmetic progressions
A prime k-tuple of the form (0, n, 2n, 3n, …, (k − 1)n) is said to be a prime arithmetic progression. In order for such a k-tuple to meet the admissibility test, n must be a multiple of the primorial of k.
Skewes numbers
The Skewes numbers for prime k-tuples are an extension of the definition of Skewes' number to prime k-tuples, based on the first Hardy–Littlewood conjecture. Let P = (p, p + i_1, p + i_2, …, p + i_k) denote a prime (k + 1)-tuple, let π_P(x) denote the number of primes p below x such that p, p + i_1, …, p + i_k are all prime, let li_P(x) = ∫_2^x dt / (ln t)^(k+1), and let C_P denote its Hardy–Littlewood constant (see first Hardy–Littlewood conjecture). Then the first prime p that violates the Hardy–Littlewood inequality for the (k + 1)-tuple P, i.e., the first prime p such that
π_P(p) > C_P li_P(p)
(if such a prime exists) is the Skewes number for P.
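A numerical sketch of the quantities involved, for the twin-prime tuple (p, p + 2): π_P counts twin-prime pairs, li_P is integrated numerically, and C_P is the twin-prime constant 2Π₂ ≈ 1.3203. All helper names are illustrative, and the crossover (reported in the literature at p = 1369391 for this tuple) lies far beyond the toy range scanned here.

```python
from math import log

def isprime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

C2 = 1.3203236316          # twin-prime Hardy-Littlewood constant, 2 * Pi_2

def li2(x, step=1.0):
    """Midpoint-rule estimate of li_2(x) = integral from 2 to x of dt / (ln t)^2."""
    total, t = 0.0, 2.0
    while t < x:
        h = min(step, x - t)
        total += h / log(t + h / 2) ** 2
        t += h
    return total

x = 100_000
pi2 = sum(1 for n in range(2, x) if isprime(n) and isprime(n + 2))
print(pi2, C2 * li2(x))    # observed twin-prime count vs. prediction
```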
The table below shows the currently known Skewes numbers for prime k-tuples:
The Skewes number (if it exists) for sexy primes is still unknown.
References
Prime numbers | Prime k-tuple | [
"Mathematics"
] | 1,106 | [
"Prime numbers",
"Mathematical objects",
"Numbers",
"Number theory"
] |
16,806,575 | https://en.wikipedia.org/wiki/Lactonase | Lactonase (EC 3.1.1.81, acyl-homoserine lactonase; systematic name N-acyl-L-homoserine-lactone lactonohydrolase) is a metalloenzyme, produced by certain species of bacteria, which targets and inactivates acylated homoserine lactones (AHLs). It catalyzes the reaction
an N-acyl-L-homoserine lactone + H2O ⇌ an N-acyl-L-homoserine
Many species of α-, β-, and γ-proteobacteria produce acylated homoserine lactones, small hormone-like molecules commonly used as communication signals between bacterial cells in a population to regulate certain gene expression and phenotypic behaviours. This type of gene regulation is known as quorum sensing.
Other names for these types of enzymes are Quorum-quenching N-acyl-homoserine lactonase, acyl homoserine degrading enzyme, acyl-homoserine lactone acylase, AHL lactonase, AHL-degrading enzyme, AHL-inactivating enzyme, AHLase, AhlD, AhlK, AiiA, AiiA lactonase, AiiA-like protein, AiiB, AiiC, AttM, delactonase, lactonase-like enzyme, N-acyl homoserine lactonase, N-acyl homoserine lactone hydrolase, N-acyl-homoserine lactone lactonase, N-acyl-L-homoserine lactone hydrolase, quorum-quenching lactonase, quorum-quenching N-acyl homoserine lactone hydrolase.
Enzyme mechanism
Lactonase hydrolyzes the ester bond of the homoserine lactone ring of acylated homoserine lactones. In hydrolysing the lactone bond, lactonase prevents these signaling molecules from binding to their target transcriptional regulators, thus inhibiting quorum sensing.
Enzyme structure
A dinuclear zinc binding site is conserved in all known lactonases and essential for enzyme activity and protein folding.
Zn1 is tetracoordinated by His104, His106, His169, and the bridging hydroxide ion. Zn2 has five ligands: Asp191, His235, His109, Asp108, and the bridging hydroxide ion. The metal ions assist in polarizing the lactone bond, increasing the electrophilicity of the lactone ring's carbonyl carbon. Isotopic labeling studies indicate that ring opening occurs via an addition–elimination reaction with water.
Biological function
Lactonases are able to interfere with AHL-mediated quorum sensing. Some examples of these lactonases are AiiA produced by Bacillus species, AttM and AiiB produced by Agrobacterium tumefaciens, and QIcA produced by Hyphomicrobiales species.
Lactonases have been reported for Bacillus, Agrobacterium, Rhodococcus, Streptomyces, Arthrobacter, Pseudomonas, and Klebsiella. The Bacillus cereus group (consisting of B. cereus, B. thuringiensis, B. mycoides, and B. anthracis) was found to contain nine genes homologous to the AiiA gene that encode AHL-inactivating enzymes, with the catalytic zinc-binding motif conserved in all cases.
In the phytopathogen A. tumefaciens, AiiB lactonase acts as a fine modulator that essentially delays the release of lactone OC8-HSL and the resultant number of tumors produced by the pathogen. AttM lactonase mediates the degradation of the lactone OC8-HSL in wounded plant tissues.
The primary activity of the anti-atherosclerotic paraoxonase (PON) enzymes is as lactonase. Oxidized polyunsaturated fatty acids (notably in oxidized low-density lipoprotein) form lactone-like structures that are PON substrates.
Ecology
The ecological effects of lactonase remain unclear, but it has been proposed that, since bacteria mostly coexist with other microorganisms in the environment, some bacterial strains could have evolved feeding strategies that utilize AHLs as their main source of energy and nitrogen.
Applications
Understanding the mechanisms and purposes of lactonase activity could lead to applied roles for these enzymes in controlling bacterial infections by inhibiting quorum-sensing activity, with potentially profound effects on human health and the environment. However, both chemical and enzymatic lactonolysis are reversible, which complicates direct therapeutic application of lactonases.
Pseudomonas aeruginosa is an AHL-producing bacterium and an opportunistic pathogen that infects immunocompromised patients and is found in lung infections of cystic fibrosis patients. P. aeruginosa relies on quorum sensing via production of the lactones N-butanoyl-L-homoserine lactone (C4-HSL) and N-(3-oxododecanoyl)-L-HSL (3-oxo-C12-HSL) to regulate swarming, toxin and protease production, and proper biofilm formation. The absence of one or more components of the quorum-sensing system results in a significant reduction in virulence of the pathogen.
Erwinia carotovora is a plant pathogen that causes soft rot in a number of crops such as potatoes and carrots by using N-hexanoyl-l-HSL (C6-HSL) quorum sensing to evade the plant's defense systems and coordinate its production of pectate lyase during the infection process.
Plants expressing AHL-lactonase were shown to demonstrate enhanced resistance to infection by the pathogen Erwinia carotovora. Expression of virulence genes in E. carotovora is regulated by N-(3-oxohexanoyl)-L-homoserine lactone (OHHL). Presumably, OHHL hydrolysis via lactonase reduced OHHL levels, inhibiting the quorum-sensing systems driving virulence gene expression.
See also
1,4-lactonase
2-pyrone-4,6-dicarboxylate lactonase
3-oxoadipate enol-lactonase
Actinomycin lactonase
Deoxylimonate A-ring-lactonase
Gluconolactonase
L-rhamnono-1,4-lactonase
Limonin-D-ring-lactonase
Steroid-lactonase
Triacetate-lactonase
Xylono-1,4-lactonase
References
Biomolecules
Enzymes | Lactonase | [
"Chemistry",
"Biology"
] | 1,484 | [
"Natural products",
"Organic compounds",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
16,808,988 | https://en.wikipedia.org/wiki/ERDLator | The ERDLator was a field water treatment device developed during World War II at the U.S. Army's United States Army Engineer Research and Development Laboratory (ERDL) at Ft. Belvoir, Virginia.
Technically named the "Water Purification Unit, Van-Type, Body Mounted, Electric Motor Driven", the laboratory's acronym was incorporated into the name of the purification device itself, creating the name by which it was most widely known, ERDLator, pronounced "erda-later".
The device was introduced into the field as a van-type body-mounted mobile unit, and proved vitally important to the operational effectiveness of deployed units under harsh field conditions, providing not only the water needed for survival, but clean potable water for staying healthy. This passage from a description of the United States Marine Corps Engineer Battalion illustrates the ERDLator's significance:
"One of the most important units was the water supply platoon. This platoon operated...water purification plants called Erdalators that could remove silt and suspended matter, filter, and purify even contaminated stream water. Producing from 1-3,000 gallons (about 4,000 to 12,000 liters) per day -- the larger number was achieved using separate large rubberized settling tanks -- one unit could adequately supply an infantry battalion under adverse conditions"
ERDLator units of several production capacities were used for field water purification during the Vietnam War.
The unit was replaced in 1979 by the Reverse Osmosis Water Purification Unit (ROWPU) ("row-pew"), also developed by a U.S. Army laboratory. Developing water purification systems for military use has led to technological breakthroughs which have benefited the civilian community; the ERDLator was evaluated for technology transfer to the civilian sector for the decontamination of water polluted with asbestos fibers.
References
Military technology
Water treatment | ERDLator | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 386 | [
"Water treatment",
"Environmental engineering",
"Water technology",
"Water pollution"
] |
16,809,626 | https://en.wikipedia.org/wiki/Glucosidases | Glucosidases are the glycoside hydrolase enzymes categorized under the EC number 3.2.1.
Function
Alpha-glucosidases are enzymes involved in breaking down complex carbohydrates such as starch and glycogen into their monomers.
They catalyze the cleavage of individual glucosyl residues from various glycoconjugates, including alpha- or beta-linked polymers of glucose. These enzymes convert complex sugars into simpler ones.
Members
Different sources include different members in this class. Members marked with a "#" are considered by MeSH to be glucosidases.
Clinical significance
Alpha-glucosidases are targeted by alpha-glucosidase inhibitors such as acarbose and miglitol to control diabetes mellitus type 2.
See also
DNA glycosylases
Mucopolysaccharidoses
References
External links
Hydrolases
Carbohydrates
EC 3.2 | Glucosidases | [
"Chemistry"
] | 203 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Carbohydrates",
"Carbohydrate chemistry"
] |
16,809,925 | https://en.wikipedia.org/wiki/Hydraulically%20activated%20pipeline%20pigging | Hydraulically activated pipeline pigging (HAPP) is a pigging technology applied for pipeline cleaning. The basic principle is that a pressure drop is created over a by-passable pig held back against a pipeline's fluid flow. The pipeline fluid passing through the pig's cleaning head is accelerated by this pressure drop, forming strong cleaning jets. These jets are directed onto the inner wall in front of the pig, removing all kinds of deposits.
Introduction to pipeline pigging
Pipeline pigs are devices that are inserted into and travel throughout the length of a pipeline driven by the product flow. They were originally developed to remove deposits which could obstruct or retard flow through a pipeline (Fig. 1). Today pigs are used during all phases in the life of a pipeline for many different reasons.
Pigs used today can be divided into three categories (Fig. 2):
Utility Pigs, are used to perform functions such as cleaning, separating, or dewatering.
In Line Inspection (ILI) Tools, provide information on the condition of the line, as well as the extent and location of any problems.
Gel Pigs, are used in conjunction with conventional pigs to optimize pipeline dewatering, cleaning, and drying tasks.
Generally for cleaning pigs, the cleaning force applied is the mechanical force between the pipe inner wall and the cleaning pig itself. This force is determined by the pig travel speed as well as by the hardness and shape of the cleaning edge: The faster the pig, the higher the cleaning impact on the deposits, but at the same time only the surface of the debris is scratched away. Therefore, several, sometimes many, pig runs are required to clean a pipeline.
Hydraulically activated pigs apply high-pressure liquid jets that are either supplied through high-pressure hoses (dependent) or generated from the energy locally available in the pipeline flow (independent). Dependent hydraulically activated pigs are limited in reach because the hose, which guides the cleaning head, must be inserted into the pipeline.
HAPP principle
A hydraulically activated pig consists of three units (Fig. 3): a brake unit, a seal unit and the cleaning head.
All units have openings that allow the entire fluid flow through the pipeline to bypass. The brake unit ensures that a hydraulically activated pig is held back against the fluid flow in the pipeline. The fluid pushes against the following seal unit, which channels it into the openings of the cleaning head. The seal unit and cleaning head restrict the flow, resulting in a pressure difference across the pig. Thus, the fluid is accelerated in the cleaning head's nozzles, creating extremely powerful liquid jets. These jets are directed onto the pipe inner wall to remove any kind of deposits.
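The relation between the pressure drop across the pig and the jet strength can be illustrated with the ideal Bernoulli estimate v = sqrt(2Δp/ρ); a minimal sketch with illustrative numbers that are not taken from the cited papers:

```python
from math import sqrt

def jet_velocity(delta_p_bar, rho=1000.0):
    """Ideal nozzle exit velocity from the pressure drop across the pig
    (Bernoulli, losses ignored). delta_p_bar in bar, rho in kg/m^3 (water)."""
    delta_p = delta_p_bar * 1e5          # bar -> Pa
    return sqrt(2 * delta_p / rho)       # m/s

for dp in (5, 10, 20):                   # illustrative pressure drops
    print(f"{dp} bar -> {jet_velocity(dp):.0f} m/s")
# 5 bar -> 32 m/s, 10 bar -> 45 m/s, 20 bar -> 63 m/s
```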
The brake unit ensures that the travel speed of the pig is much slower than the fluid velocity, thus allowing it to entirely remove deposits from the pipe wall before it travels across the cleaned surface. The deposits removed are immediately flushed down the pipeline with the main jet of the cleaning head which is directed to the middle of the pipeline. With all deposits removed from the pipe wall and transported downstream by the fluid flow there remains no risk of the pig getting stuck in debris accumulated in front of it.
References
Pigging Technology Review, page 36, World Pipelines Volume 9, Number 1 - January 2009
STOLTZE, Björn - Jet Power, World Pipelines Volume 8, Number 6 - June 2008
STOLTZE, Björn - A new pipeline cleaning technology: the hydraulically-activated power pig, Global Pipeline Monthly Volume 3, issue #11 - December 2007
STOLTZE, Björn - A new pipeline cleaning technology: Hydraulically activated power pigging (HAPP), presented at the Pigging Products & Services Association (PPSA) Seminar in Aberdeen, Scotland - November 2007
External links
Official provider website
Pigging Products & Services Association PPSA website
PPSA Seminar Paper
Global Pipeline Monthly GPM Volume 3, issue #11 - December 2007
Petroleum production
Pipeline transport
Pigging | Hydraulically activated pipeline pigging | [
"Engineering"
] | 794 | [
"Pigging",
"Petroleum engineering"
] |
16,811,582 | https://en.wikipedia.org/wiki/Roll%20program | A roll program or tilt maneuver is an aerodynamic maneuver that alters the attitude of a vertically launched space launch vehicle. It consists of a partial rotation around the vehicle's vertical axis, allowing the vehicle to then pitch to follow the proper azimuth toward orbit.
A roll program is usually completed after the vehicle clears the tower. In the case of many NASA crewed launches, the commander reports the roll to the mission control center which is then acknowledged by the capsule communicator.
Saturn V
The Saturn V's roll program was initiated shortly after launch and was handled by the first stage. It was open-loop: the commands were pre-programmed to occur at a specific time after lift-off, and no closed loop control was used. This made the program simpler to design at the expense of not being able to correct for unforeseen conditions such as high winds. The rocket simply initiated its roll program at the appropriate time after launch, and rolled until an adequate amount of time had passed to ensure that the desired roll angle was achieved.
Roll on the Saturn V was initiated by tilting the engines simultaneously using the roll and pitch servomechanisms, which served to initiate a rolling torque on the vehicle.
Space Shuttle
During the launch of a Space Shuttle, the roll program was simultaneously accompanied by a pitch maneuver and yaw maneuver.
The roll program occurred during a Shuttle launch for the following reasons:
To place the shuttle in a heads down position
Increasing the mass that can be carried into orbit (this was the initial reason: roughly a 20% payload increase due to more efficient aerodynamics and moment balancing between the boosters and main engines)
Increasing the orbital altitude
Simplifying the trajectory of a possible Return to Launch site abort maneuver
Improving radio line-of-sight propagation
Orienting the shuttle more parallel toward the ground with the nose to the east
The RAGMOP computer program (Northrop) in 1971–72 discovered a ~20% payload increase by rolling upside down. It went from ~40,000 lb to ~48,000 lb to a 150 NM equatorial orbit without violating any constraints (max Q, 3 G limit, etc.). So the incentive to roll was initially for the payload increase by minimizing drag losses and moment balancing losses by keeping the main engine thrust vectors more parallel to the SRBs.
References
Spaceflight
Rocketry
Aerodynamics | Roll program | [
"Chemistry",
"Astronomy",
"Engineering"
] | 480 | [
"Outer space",
"Rocketry stubs",
"Astronomy stubs",
"Aerodynamics",
"Rocketry",
"Aerospace engineering",
"Spaceflight",
"Fluid dynamics"
] |
7,713,778 | https://en.wikipedia.org/wiki/Settlement%20%28structural%29 | Settlement is the downward movement or the sinking of a structure's foundation. It is mostly caused by changes in the underlying soil, such as drying and shrinking, wetting and softening, or compression due to the soil being poorly compacted when construction started.
Some settlement is quite normal after construction has been completed.
Unequal settlement or differential settlement is non-uniform settlement. It may cause significant problems for buildings. Distortion or disruption of parts of a building may occur due to
unequal compression of its foundations;
shrinkage, such as that which occurs in timber-framed buildings as the frame adjusts its moisture content; or
undue loads being applied to the building after its initial construction.
Settlement should not be confused with subsidence, which results from the load-bearing ground upon which a building sits reducing in level, for instance in areas of mine workings where shafts collapse underground.
Traditional green oak-framed buildings are designed to settle with time as the oak seasons and warps, lime mortar rather than Portland cement is used for its elastic properties and glazing will often employ small leaded lights which can accept movement more readily than larger panes.
Measurement of settlements
The magnitude of settlements can be measured using different techniques such as:
Surveying
Settlement cells
Tiltmeters
Borehole extensometers
Full-profile gauge
Settlement platforms
See also
Soil consolidation
References
Building defects
Foundations (buildings and structures)
Structural engineering | Settlement (structural) | [
"Materials_science",
"Engineering"
] | 281 | [
"Structural engineering",
"Foundations (buildings and structures)",
"Construction",
"Civil engineering",
"Building defects",
"Mechanical failure"
] |
7,714,070 | https://en.wikipedia.org/wiki/Acoustic%20transmission | Acoustic transmission is the transmission of sounds through and between materials, including air, wall, and musical instruments.
The degree to which sound is transferred between two materials depends on how well their acoustical impedances match.
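For normal incidence at a planar boundary between two media, the transmitted fraction of sound intensity is T = 4·Z1·Z2/(Z1 + Z2)², which equals 1 only when the characteristic impedances match. A small sketch with rough textbook impedance values (assumed figures, not from this article):

```python
def intensity_transmission(z1, z2):
    """Normal-incidence intensity transmission coefficient between media
    of characteristic acoustic impedance z1 and z2 (rayl)."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

AIR, WATER, STEEL = 415, 1.48e6, 4.6e7    # approximate impedances, rayl
print(intensity_transmission(AIR, WATER))    # ~0.001: severe mismatch
print(intensity_transmission(WATER, STEEL))  # ~0.12: much better match
print(intensity_transmission(AIR, AIR))      # 1.0: perfect match
```

This is why the impedance chain described below works: each step, such as string to bridge to soundboard to air, bridges a smaller impedance ratio than a direct jump would.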
In musical instrument design
Musical instruments are generally designed to radiate sound effectively. A high-impedance part of the instrument, such as a string, transmits vibrations through a bridge (intermediate impedance) to a sound board (lower impedance). The soundboard then moves the still lower-impedance air. Without bridge and soundboard, the instrument does not transmit enough sound to the air, and is too quiet to be performed with. An electric guitar has no soundboard; it uses an electric pickup and artificial amplification. Without amplification, electric guitars are very quiet.
Stethoscope
Stethoscopes roughly match the acoustical impedance of the human body, so they transmit sounds from a patient's chest to the doctor's ear much more effectively than the air does. Putting an ear to someone's chest would have a similar effect.
Building acoustics
Acoustic transmission in building design refers to a number of processes by which sound can be transferred from one part of a building to another. Typically these are:
Airborne transmission - a noise source in one room sends air pressure waves which induce vibration to one side of a wall or element of structure setting it moving such that the other face of the wall vibrates in an adjacent room. Structural isolation therefore becomes an important consideration in the acoustic design of buildings. Highly sensitive areas of buildings, for example recording studios, may be almost entirely isolated from the rest of a structure by constructing the studios as effective boxes supported by springs. Air tightness also becomes an important control technique. A tightly sealed door might have reasonable sound reduction properties, but if it is left open only a few millimeters its effectiveness is reduced to practically nothing. The most important acoustic control method is adding mass into the structure, such as a heavy dividing wall, which will usually reduce airborne sound transmission better than a light one.
Impact transmission - a noise source in one room results from an impact of an object onto a separating surface, such as a floor and transmits the sound to an adjacent room. A typical example would be the sound of footsteps in a room being heard in a room below. Acoustic control measures usually include attempts to isolate the source of the impact, or cushioning it. For example, carpets will perform significantly better than hard floors.
Flanking transmission - a more complex form of noise transmission, where the resultant vibrations from a noise source are transmitted to other rooms of the building usually by elements of structure within the building. For example, in a steel framed building, once the frame itself is set into motion the effective transmission can be pronounced.
References
Carl Hopkins. Sound insulation. Elsevier. Imprint: Butterworth-Heinemann. 2007.
Tomas Ficker. Handbook of building thermal technology, acoustics and daylighting. CERM. 2004
See also
Acoustic absorption
Acoustic attenuation
Architectural acoustics
Attenuation coefficient
Noise pollution
Soundproofing
Sound pressure
Sound reflection
Sound transmission class
Building engineering
Acoustics
Music technology
Sound | Acoustic transmission | [
"Physics",
"Engineering"
] | 645 | [
"Building engineering",
"Classical mechanics",
"Acoustics",
"Civil engineering",
"Architecture"
] |
7,714,496 | https://en.wikipedia.org/wiki/Diffuse%20element%20method | In numerical analysis the diffuse element method (DEM) or simply diffuse approximation is a meshfree method.
The diffuse element method was developed by B. Nayroles, G. Touzot and Pierre Villon at the Université de Technologie de Compiègne in 1992.
It is in concept rather similar to the much older smoothed particle hydrodynamics. In the paper they describe a "diffuse approximation method", a method for function approximation from a given set of points.
In fact the method boils down to the well-known moving least squares for the particular case of a global approximation (using all available data points). Using this function approximation method, partial differential equations and thus fluid dynamic problems can be solved. For this, they coined the term diffuse element method (DEM).
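As a concrete illustration of that connection, here is a minimal moving least squares approximation in one dimension (a sketch only: the Gaussian weight, its width, and the polynomial degree are arbitrary choices, not those of Nayroles, Touzot and Villon):

```python
import numpy as np

def mls_fit(x_eval, xs, ys, h=0.3, degree=2):
    """Moving least squares estimate of y(x_eval) from samples (xs, ys).
    Gaussian weights of width h localize a polynomial fit around x_eval."""
    w = np.exp(-((xs - x_eval) / h) ** 2)      # weights, large near x_eval
    A = np.vander(xs - x_eval, degree + 1)     # local polynomial basis
    W = np.diag(w)
    # Solve the weighted normal equations (A^T W A) c = A^T W y.
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)
    # coeffs[-2] would estimate the derivative at x_eval, which is one
    # reason the diffuse approximation evaluates derivatives precisely.
    return coeffs[-1]                          # polynomial value at x_eval

xs = np.linspace(0, np.pi, 15)
ys = np.sin(xs)
print(mls_fit(1.0, xs, ys))   # close to sin(1.0) ~ 0.841
```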
Advantages over finite element methods are that DEM does not rely on a grid, and is more precise in the evaluation of the derivatives of the reconstructed functions.
See also
Computational fluid dynamics
References
B. Nayroles, G. Touzot, P. Villon, Generalizing the finite element method: diffuse approximation and diffuse elements, Computational Mechanics, Volume 10, pp. 307–318, 1992
Numerical differential equations
Computational fluid dynamics | Diffuse element method | [
"Physics",
"Chemistry"
] | 245 | [
"Computational physics stubs",
"Computational fluid dynamics",
"Computational physics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
7,715,335 | https://en.wikipedia.org/wiki/Harpoon%20reaction | A harpoon reaction is a type of chemical reaction, first proposed by Michael Polanyi in 1920, whose mechanism (also called the harpooning mechanism) involves two neutral reactants undergoing an electron transfer over a relatively long distance to form ions that then attract each other closer together. For example, a metal atom and a halogen might react to form a cation and anion, respectively, leading to a combined metal halide.
The main feature of these redox reactions is that, unlike most reactions, they have steric factors greater than unity; that is, they take place faster than predicted by collision theory. This is explained by the fact that the colliding particles have greater cross sections than the pure geometrical ones calculated from their radii, because when the particles are close enough, an electron "jumps" (therefore the name) from one of the particles to the other one, forming an anion and a cation which subsequently attract each other. Harpoon reactions usually take place in the gas phase, but they are also possible in condensed media.
The predicted rate constant can be improved by using a better estimation of the steric factor. A rough approximation is that the largest separation R_x at which charge transfer can take place on energetic grounds can be estimated from the solution of the equation e²/(4πε₀R_x) = ΔE, which determines the largest distance at which the Coulombic attraction between the two oppositely charged ions is sufficient to provide the energy ΔE.
With ΔE = I − E_ea, where I is the ionization potential of the metal and E_ea is the electron affinity of the halogen.
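Plugging in numbers: with energies in electron-volts and distances in nanometres, e²/4πε₀ ≈ 1.44 eV·nm, so R_x ≈ 1.44/(I − E_ea) nm, and the steric factor can be estimated as P ≈ (R_x/d)² for a hard-sphere collision diameter d. A sketch for the textbook K + Br₂ case (all values approximate):

```python
E2_OVER_4PI_EPS0 = 1.44          # e^2 / (4 pi eps0) in eV * nm

def harpoon_distance(ionization_eV, affinity_eV):
    """Largest separation (nm) at which Coulomb attraction pays for the
    electron transfer: e^2 / (4 pi eps0 R_x) = I - E_ea."""
    return E2_OVER_4PI_EPS0 / (ionization_eV - affinity_eV)

I_K, Eea_Br2 = 4.34, 2.55        # approximate values, eV
rx = harpoon_distance(I_K, Eea_Br2)
d_collision = 0.40               # rough hard-sphere collision diameter, nm
print(rx, (rx / d_collision) ** 2)   # R_x ~ 0.80 nm, steric factor ~ 4
```

A steric factor of about 4, well above unity, is consistent with the enlarged effective cross section described above.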
Examples of harpoon reactions
Generically: Rg + X2 + hν → RgX + X, where Rg is a rare gas and X is a halogen
Ba...FCH3 + hν → BaF(*) + CH3
K + CH3I → KI + CH3
References
Chemical kinetics
Chemical reactions
Reaction mechanisms | Harpoon reaction | [
"Chemistry"
] | 388 | [
"Reaction mechanisms",
"Chemical reaction engineering",
"nan",
"Physical organic chemistry",
"Chemical kinetics"
] |
7,716,099 | https://en.wikipedia.org/wiki/Alkaline%20tide | Alkaline tide (mal del puerco) refers to a condition, normally encountered after eating a meal, where during the production of hydrochloric acid by the parietal cells in the stomach, the parietal cells secrete bicarbonate ions across their basolateral membranes and into the blood, causing a temporary increase in blood pH.
During hydrochloric acid secretion in the stomach, the gastric parietal cells extract chloride anions, carbon dioxide, water and sodium cations from the blood plasma and in turn release bicarbonate back into the plasma after forming it from carbon dioxide and water constituents. This is to maintain the plasma's electrical balance, as the chloride anions have been extracted. The bicarbonate content causes the venous blood leaving the stomach to be more alkaline than the arterial blood delivered to it.
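The size of the resulting pH shift can be illustrated with the Henderson–Hasselbalch equation for the bicarbonate buffer, pH = 6.1 + log10([HCO3−]/(0.03 × pCO2)); the bicarbonate increment used below is purely illustrative:

```python
from math import log10

def blood_ph(hco3_mmol_per_L, pco2_mmHg=40.0):
    """Henderson-Hasselbalch relation for the bicarbonate buffer system."""
    return 6.1 + log10(hco3_mmol_per_L / (0.03 * pco2_mmHg))

print(blood_ph(24.0))   # ~7.40, typical plasma bicarbonate
print(blood_ph(26.0))   # ~7.43, after a modest postprandial HCO3- rise
```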
The alkaline tide is neutralised by a secretion of H+ into the blood during HCO3− secretion in the pancreas.
Postprandial (i.e., after a meal) alkaline tide lasts until the acids in food absorbed in the small intestine reunite with the bicarbonate that was produced when the food was in the stomach. Thus, alkaline tide is self-limited and normally lasts less than two hours.
Postprandial alkaline tide has also been shown to be a causative agent of calcium oxalate urinary stones in cats, and potentially in other species.
A more pronounced alkaline tide results from vomiting, which stimulates hyperactivity of gastric parietal cells to replace lost stomach acid. Thus, protracted vomiting can result in metabolic alkalosis.
References
Digestive system
Metabolism
Blood | Alkaline tide | [
"Chemistry",
"Biology"
] | 362 | [
"Digestive system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |