id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
14,161,673 | https://en.wikipedia.org/wiki/Harvestman%20phylogeny | Harvestmen (Opiliones) are an order of arachnids often confused with spiders, though the two orders are not closely related. Research on harvestman phylogeny (that is, the phylogenetic tree of the group) is in a state of flux. While some families are clearly monophyletic, that is, they share a common ancestor, others are not, and the relationships between families are often not well understood.
Position in Arachnida
The relationship of harvestmen with other arachnid orders is still not sufficiently resolved.
Up until the 1980s they were thought to be closely related to mites (Acari). In 1990, Shultz proposed grouping them with scorpions, pseudoscorpions and Solifugae ("camel spiders"); he named this clade Dromopoda. This view is currently widely accepted. However, the relationships of the orders within Dromopoda are not yet sufficiently resolved. Analyses of recent taxa suggested that the harvestmen are the sister group of the other three, collectively called Novogenuata. An analysis also considering fossil taxa concluded that the harvestmen are sister to Haplocnemata (pseudoscorpions and Solifugae), with scorpions being the sister group of those three combined. Recent analyses have also recovered the Opiliones as sister group to the extinct phalangiotarbids, although this has low support, or as sister group to a pseudoscorpion and scorpion clade.
Relationship of suborders
In 1796, Pierre André Latreille erected the family "Phalangida" for the then known harvestmen, but included the genus Galeodes (Solifugae). Tord Tamerlan Teodor Thorell (1892) recognized the suborders Palpatores, Laniatores, Cyphophthalmi (called Anepignathi), but also included the Ricinulei as a harvestman suborder. The latter were removed from the Opiliones by Hansen and William Sørensen (1904), rendering the harvestmen monophyletic.
According to more recent theories, the Cyphophthalmi, the most basal suborder, are the sister group to all other harvestmen, which according to this system are called Phalangida. The Phalangida consist of three suborders, the Eupnoi, Dyspnoi and Laniatores. While these three are each monophyletic, it is not clear how exactly they are related. In 2002, Giribet et al. came to the conclusion that Dyspnoi and Laniatores are sister groups, naming the combined clade Dyspnolaniatores, which is sister to the Eupnoi. This is in contrast to the classical hypothesis that Dyspnoi and Eupnoi form a clade called Palpatores. Dyspnolaniatores was also recovered in a 2011 study.
In 2014, a new analysis by Garwood et al. examined 158 morphological traits across 272 species. In Garwood's phylogenetic tree, the basal Opiliones split into the Phalangida and stem Cyphophthalmi. The Cyphophthalmi stem then diversified into the Cyphophthalmi proper and the newly identified Tetrophthalmi, while the Phalangida split into Laniatores and the "Palpatores". Finally, the Palpatores diversified into Eupnoi and Dyspnoi. The analysis moves the divergence of the extant suborders from the Devonian Period to the Carboniferous. The Opiliones' own divergence is dated to 414 million years ago, while arachnids are estimated to have originated during the late Cambrian to early Ordovician.
Genetic analysis performed on a modern Phalangium opilio specimen found a suppressed gene that, if active, would generate a second pair of eyes at the lateral position, providing independent evidence that four eyes are the ancestral condition. Garwood et al. also argue that the Carboniferous harvestmen diversification is more consistent with changes observed in other terrestrial arthropods, which have been linked to high oxygen levels during that period.
Relationship within suborders
Cyphophthalmi
The Cyphophthalmi have been divided into two infraorders, Temperophthalmi (including the superfamily Sironoidea, with the families Sironidae, Troglosironidae and Pettalidae) and Tropicophthalmi (with the superfamilies Stylocelloidea and its single family Stylocellidae, and Ogoveoidea, including Ogoveidae and Neogoveidae); however, recent studies suggest that the Sironidae, Neogoveidae and Ogoveidae are not monophyletic, while the Pettalidae and Stylocellidae are. The division into Temperophthalmi and Tropicophthalmi is not supported, with Troglosironidae and Neogoveidae probably forming a monophyletic group. The Pettalidae are possibly the sister group to all other Cyphophthalmi.
While most Cyphophthalmi are blind, eyes do occur in several groups. Many Stylocellidae, and some Pettalidae bear eyes near or at the base of the ozophores, as opposed to most harvestmen, which have eyes located on top. The eyes of Stylocellidae could have evolved from the lateral eyes of other arachnids, which have been lost in all other harvestmen. Regardless of their origin, it is thought that eyes were lost several times in Cyphophthalmi.
Spermatophores, which normally occur in several other arachnids but not in harvestmen, are present in some Sironidae and Stylocellidae.
Eupnoi
The Eupnoi are divided into two superfamilies, the Caddoidea and Phalangioidea. The Phalangioidea are assumed to be monophyletic, although only the families Phalangiidae and Sclerosomatidae have been studied; the Caddoidea have not been studied at all in this regard. The limits of families and subfamilies in Eupnoi are uncertain in many cases, and are in urgent need of further study.
Dyspnoi
The Dyspnoi are probably the best studied harvestman group regarding phylogeny. They are clearly monophyletic, and divided into two superfamilies. The relationship of the superfamily Ischyropsalidoidea, comprising the families Ceratolasmatidae, Ischyropsalididae and Sabaconidae, has been investigated in detail. It is not clear whether Ceratolasmatidae and Sabaconidae are each monophyletic, as the ceratolasmatid Hesperonemastoma groups with the sabaconid Taracus in molecular analyses. All other families are grouped under Troguloidea.
Laniatores
There is not yet a proposed phylogeny for the whole group of Laniatores, although some families have been researched in this regard. The Laniatores are divided into two infraorders, the "Insidiatores" Loman, 1900 and the Grassatores Kury, 2002. However, Insidiatores is probably paraphyletic. It consists of the two superfamilies Travunioidea and Triaenonychoidea, with the latter closer to the Grassatores. Alternatively, the Pentanychidae, which reside in Travunioidea, could be the sister group to all other Laniatores.
The Grassatores are traditionally divided into the Samooidea, Assamioidea, Gonyleptoidea, Phalangodoidea and Zalmoxoidea. Several of these groups are not monophyletic. Molecular analyses relying on nuclear ribosomal genes support monophyly of Gonyleptidae, Cosmetidae (both Gonyleptoidea), Stygnopsidae (currently Assamioidea) and Phalangodidae. The Phalangodidae and Oncopodidae may not form a monophyletic group, thus rendering the Phalangodoidea obsolete. The families of the obsolete Assamioidea have been moved to other groups: Assamiidae and Stygnopsidae are now Gonyleptoidea, Epedanidae reside within their own superfamily Epedanoidea, and the "Pyramidopidae" are possibly related to Phalangodidae.
References
External links
Phylogenetics | Harvestman phylogeny | [
"Biology"
] | 1,829 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
3,342,004 | https://en.wikipedia.org/wiki/Glory%20hole | A glory hole (also spelled gloryhole and glory-hole) is a hole in a wall or partition, often between public lavatory cubicles or sex video arcade booths and lounges, for people to engage in sexual activity or to observe the person on the opposite side.
Glory holes are especially associated with gay male culture and anal or oral sex. They are not exclusively favoured by gay people, and have become more commonly acknowledged as a fetish for heterosexual and bisexual individuals.
In more recent years, public glory holes have faded in popularity in many countries, though some gay websites offer directories of remaining ones. Glory holes are sometimes a topic of erotic literature, and pornographic films have been devoted to their use.
Motivations
Numerous motivations can be ascribed to the use and eroticism of glory holes. As a wall separates the two participants, they have no contact except for a penis and a mouth, hand, anus, or vagina. Almost total anonymity is maintained, as no other attributes are taken into consideration. The glory hole is seen as an erotic oasis in gay subcultures around the world; people's motives, experiences and attributions of value in its use are varied.
In light of the ongoing HIV pandemic, many gay men reevaluated their sexual and erotic desires and practices. Queer theorist Tim Dean has suggested that glory holes provide a physical barrier that may be an extension of psychological barriers such as internalized homophobia (a result of many societies' reluctance to discuss LGBT practices and people). For some gay men, a glory hole depersonalizes their partner altogether as a disembodied object of sexual desire.
History
The first documented instance of a glory hole was in a 1707 court case known as the "Tryals of Thomas Vaughan and Thomas Davis" in London, which involved the extortion of a man known in the documents only as Mr Guillam. At the time, gay sex in public places could lead to arrests by members of the Society for the Reformation of Manners. The Lincoln's Inn bog house in London was one place where the authorities would often wait to catch people.
The courts heard that a man (Mr Guillam) had visited a lavatory stall to relieve himself, when another male put his penis through a hole in the wall ("a Boy in the adjoyning Vault put his Privy-member through a Hole"). Mr Guillam, surprised by the action, fled the lavatory, only to be followed by the male who cried out that he would have had sex with him. Mr Guillam was then confronted by Mr Vaughan who, knowing Mr Guillam's innocence, threatened to turn him in to the police and reveal him to his wife if he did not pay him a sum of money.
During the mid-1900s, police often used bathroom glory holes as an entrapment method for gay men, often recording the incidents as evidence to prosecute. Such incidents were recorded in California and Ohio in the 1950s and 1960s, with archival police footage of "tearooms" appearing on pornography websites such as Pornhub.
According to the Routledge Dictionary of Modern American Slang, "glory hole" first appeared in print in 1949, when an anonymously published glossary, Swasarnt Nerf's Gay Girl's Guide, defined it as "[a] phallic size hole in partition between toilet booths. Sometimes used also for a mere peep-hole."
Another reference to glory holes appeared in Tearoom Trade: Impersonal Sex in Public Places, a controversial book published by sociologist Laud Humphreys in 1970, where he suggests the "tearoom", or bathroom stall, as a prime space for men to congregate for sexual fulfilment. It also appeared later in the 1977 book The Joy of Gay Sex.
Public glory holes started to fade in popularity as the decriminalization of homosexuality was introduced in many countries, and concerns over HIV/AIDS changed gay culture. A 2001 study in the Journal of Homosexuality found that public glory holes remained popular among many gay men "simply because they find [them] exciting and/or convenient."
Despite the fading prominence of glory holes in public, some gay bath houses and sex clubs maintain the presence of glory holes in their establishments, and some people have acknowledged installing private glory-hole walls in their own homes. Bathroom sex remains a fetish for a subset of gay men in particular, who engage in similarly anonymous acts below a bathroom stall separator rather than through a hole.
In 2018, the Western Australian Museum added a "historic glory hole" to its collection. It had been situated in the toilet stall of the Albany Highway-side of the Gosnells train station, but was removed and saved in 1997 before the toilet was demolished.
The Leather Archives & Museum was loaned a glory hole from Man’s Country in Chicago in June 2019.
A 2020 BuzzFeed article collected anecdotes from gay, straight and bisexual readers recounting their experiences with glory holes at swinger parties.
Legal and health concerns
Public sex of any kind is illegal in many parts of the world, and police undercover operations continue to be used to enforce such laws. Adverse personal consequences to participants in glory hole activity have included police surveillance and public humiliation in the press, often with marital and employment consequences, and imprisonment following a criminal conviction. Gay bashing, mugging and bodily injury are further potential risks. For reasons of personal safety, as well as etiquette, men typically wait for a signal from the receptive partner before inserting their genitals through a glory hole.
Potential health advantage
In June 2020, a New York Health Department COVID-19 advisory suggested sex through "physical barriers, like walls", but did not specifically reference glory holes, as part of broader measures on dating and sex during the pandemic.
About a month later, the British Columbia Centre for Disease Control went a step further with its COVID-19 precautionary recommendations by suggesting using "barriers, like walls (e.g., glory holes), that allow for sexual contact but prevent close face-to-face contact" as one way to lower the risk of exposure to the virus.
In popular culture
Glory holes are a recurring theme in pornography. Straight porn often features scenarios involving them; in some instances, it involves kink mistresses, who see it as a form of women's sexual agency and mastery.
The early 20th-century pornographic cartoon Eveready Harton in Buried Treasure depicts the use of an improvised glory hole for zoophilic purposes.
Jackass Number Two features a stunt where cast member Chris Pontius dresses his penis in a mouse costume and inserts it into a glory hole that feeds into a snake's cage.
In The Illuminatus! Trilogy a glory hole, in the form of a giant golden apple with an opening in it, is used as part of the Discordian initiation ritual, causing the main character to wonder who or what is on the other side.
American glam metal band Steel Panther's album All You Can Eat features a song entitled "Gloryhole", about the narrator's frequent visits to a local gloryhole.
In the "Mac and Charlie Die (Part 1)" episode of the sitcom It's Always Sunny in Philadelphia, the gang discovers a glory hole has been added to the men's bathroom in their bar.
In 2024, comedy duo Rhett and Link gamified the glory hole for their annual Good Mythical Evening livestream. The game included a "Gory Hole" as the event was Halloween themed. The participant was instructed to guess what inanimate object was poking out of the hole while blindfolded and unable to use their hands.
The Lonely Island returned with the SNL Digital Short "Sushi Glory Hole" on the October 6, 2024 episode of Saturday Night Live.
See also
Cottaging – term referring to anonymous male–male sex in a public lavatory
Gay bathhouse
Gay beat
Gay cruising in England and Wales
Polari
Troll (gay slang)
References
Further reading
"The Little Black Book: This one can keep you out of trouble" (Lambda Legal Defense and Education Fund
"Gloryholes" essay at rotten.com
External links
A Sex Stop on the Way Home by Corey Kilgannon, New York Times, September 21, 2005
The Little Black Book: This one can keep you out of trouble, Lambda Legal Defense and Education Fund (archived copy, PDF format). An article regarding legal issues of sex in public restrooms.
Sexuality and society
Casual sex
Pornography terminology
LGBTQ slang
Toilets
Sexual slang
Discordianism | Glory hole | [
"Biology"
] | 1,803 | [
"Excretion",
"Toilets"
] |
3,343,173 | https://en.wikipedia.org/wiki/Aza-Baylis%E2%80%93Hillman%20reaction | The aza-Baylis–Hillman reaction or aza-BH reaction in organic chemistry is a variation of the Baylis–Hillman reaction and describes the reaction of an electron-deficient alkene, usually an α,β-unsaturated carbonyl compound, with an imine in the presence of a nucleophile. The reaction product is an allylic amine. The reaction can be carried out with enantiomeric excesses of up to 90% with the aid of bifunctional chiral BINOL and phosphinyl-BINOL compounds, for example in the reaction of N-(4-chlorobenzylidene)benzenesulfonamide with methyl vinyl ketone (MVK) in cyclopentyl methyl ether and toluene at −15 °C.
In one study, a reaction mechanism was proposed for a specific aza-BH reaction. Under the reaction conditions examined, the rate-determining step is first-order in each of the triphenylphosphine nucleophile, MVK, and the tosylimine in the presence of a Brønsted acid such as phenol or benzoic acid, implying an overall rate law of the form rate = k[PPh₃][MVK][imine]. The presence of an acid facilitates the elimination step in the zwitterion by proton transfer, which then becomes much faster and is no longer rate-determining. A six-membered cyclic transition state is proposed for this reaction step. Because this step is also reversible, the presence of acid causes racemisation simply on mixing the chiral aza-BH adduct, phosphine and acid.
Asymmetric aza-BH
Aza-BH reactions are known in asymmetric synthesis by making use of chiral ligands. In one study, for the first time, successful use was made of a chiral solvent based on an ionic liquid (IL).
This solvent is a condensation product of L-(−)-malic acid (available from the chiral pool) and boric acid, with the condensation catalyzed by sodium hydroxide. When the sodium counter-ion is replaced by a bulky ammonium cation, the resulting ionic liquid has a melting point of −32 °C.
This IL serves as the chiral solvent for the aza-BH reaction between N-(4-bromobenzylidene)-4-toluenesulfonamide and methyl vinyl ketone, catalyzed by triphenylphosphine, with a chemical yield of 34–39% and an enantiomeric excess of 71–84%.
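As a quick aid to interpreting these figures: enantiomeric excess relates to the enantiomer ratio by ee = (major − minor)/(major + minor), so an ee of 84% corresponds to a 92:8 ratio of enantiomers (92 − 8 = 84), while an ee of 71% corresponds to roughly 85.5:14.5.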
References
External links
https://www.organic-chemistry.org/Highlights/2006/30JanuaryA.shtm
Addition reactions
Carbon-carbon bond forming reactions
Name reactions | Aza-Baylis–Hillman reaction | [
"Chemistry"
] | 567 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
3,345,023 | https://en.wikipedia.org/wiki/Ballistic%20conduction | In mesoscopic physics, ballistic conduction (ballistic transport) is the unimpeded flow (or transport) of charge carriers (usually electrons), or energy-carrying particles, over relatively long distances in a material. In general, the resistivity of a material exists because an electron, while moving inside a medium, is scattered by impurities, defects, thermal fluctuations of ions in a crystalline solid, or, generally, by any freely-moving atom/molecule composing a gas or liquid. Without scattering, electrons simply obey Newton's second law of motion at non-relativistic speeds.
The mean free path of a particle can be described as the average length that the particle can travel freely, i.e., before a collision, which could change its momentum. The mean free path can be increased by reducing the number of impurities in a crystal or by lowering its temperature. Ballistic transport is observed when the mean free path of the particle is (much) longer than the dimension of the medium through which the particle travels. The particle alters its motion only upon collision with the walls. In the case of a wire suspended in air/vacuum the surface of the wire plays the role of the box reflecting the electrons and preventing them from exiting toward the empty space/open air. This is because there is an energy to be paid to extract the electron from the medium (work function).
Ballistic conduction is typically observed in quasi-1D structures, such as carbon nanotubes or silicon nanowires, because of extreme size quantization effects in these materials. Ballistic conduction is not limited to electrons (or holes) but can also apply to phonons. It is theoretically possible for ballistic conduction to be extended to other quasi-particles, but this has not been experimentally verified. For a specific example, ballistic transport can be observed in a metal nanowire: the wire is small (on the nanometer, or 10⁻⁹ m, scale), while the mean free path of electrons in the metal can be longer than the wire's dimensions.
Ballistic conduction differs from superconductivity due to 1) a finite, non-zero resistance and 2) the absence of the Meissner effect in the material. The presence of resistance implies that the heat is dissipated in the leads outside of the "ballistic" conductor, where inelastic scattering effects can take place.
Theory
Scattering mechanisms
In general, carriers will exhibit ballistic conduction when L ≤ λ_MFP, where L is the length of the active part of the device (e.g., a channel in a MOSFET) and λ_MFP is the mean free path for the carrier, which can be given by Matthiessen's rule, written here for electrons:

1/λ_MFP = 1/λ_el-el + 1/λ_APh + 1/λ_OPh,ems + 1/λ_OPh,abs + 1/λ_impurity + 1/λ_defect + 1/λ_boundary

where
λ_el-el is the electron-electron scattering length,
λ_APh is the acoustic phonon (emission and absorption) scattering length,
λ_OPh,ems is the optical phonon emission scattering length,
λ_OPh,abs is the optical phonon absorption scattering length,
λ_impurity is the electron-impurity scattering length,
λ_defect is the electron-defect scattering length,
and λ_boundary is the electron scattering length with the boundary.
In terms of scattering mechanisms, optical phonon emission normally dominates, depending on the material and transport conditions. There are also other scattering mechanisms which apply to different carriers that are not considered here (e.g. remote interface phonon scattering, Umklapp scattering). To get these characteristic scattering rates, one would need to derive a Hamiltonian and solve Fermi's golden rule for the system in question.
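The rule is simple to apply numerically; a minimal sketch in Python, in which all scattering lengths are illustrative assumed values rather than measured data:

```python
# Matthiessen's rule: 1/lambda_MFP = sum of the reciprocal scattering lengths.
# All lengths below are illustrative assumptions, in nanometers.
scattering_lengths_nm = {
    "electron-electron": 1000.0,
    "acoustic phonon": 300.0,
    "optical phonon emission": 60.0,   # typically dominant at room temperature
    "optical phonon absorption": 800.0,
    "impurity": 500.0,
    "defect": 2000.0,
    "boundary": 400.0,
}

lambda_mfp = 1.0 / sum(1.0 / l for l in scattering_lengths_nm.values())
print(f"effective mean free path: {lambda_mfp:.1f} nm")  # ~36.7 nm here

L = 30.0  # assumed active channel length, nm
print("quasi-ballistic" if L <= lambda_mfp else "diffusive")
```

Note how the shortest individual length (optical phonon emission in this example) dominates the sum, mirroring the remark above.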
Landauer–Büttiker formalism
In 1957, Rolf Landauer proposed that conduction in a 1D system could be viewed as a transmission problem. For a 1D graphene nanoribbon field-effect transistor (GNR-FET) whose channel is assumed to be ballistic, the current from A to B, given by the Boltzmann transport equation, is

I = (gs·e/h) ∫ M(E) f′(E) T(E) dE,

where gs = 2, due to spin degeneracy, e is the electron charge, h is the Planck constant, the integral runs over the energies between μA and μB, the Fermi levels of A and B, M(E) is the number of propagating modes in the channel, f′(E) is the deviation from the equilibrium electron distribution (perturbation), and T(E) is the transmission probability (T = 1 for ballistic). Based on the definition of conductance

G = I/V,

and the voltage separation between the Fermi levels, which is approximately eV = μA − μB, it follows that

G = G0·M·T, with G0 = 2e²/h,

where M is the number of modes in the transmission channel and spin is included. G0 is known as the conductance quantum. The contacts have a multiplicity of modes due to their larger size in comparison to the channel. Conversely, the quantum confinement in the 1D GNR channel constricts the number of modes to carrier degeneracy and restrictions from the energy dispersion relationship and the Brillouin zone. For example, electrons in carbon nanotubes have two intervalley modes and two spin modes. Since the contacts and the GNR channel are connected by leads, the transmission probability is smaller at contacts A and B than within the channel.
Thus the quantum conductance is approximately the same if measured at A and B or C and D.
The Landauer–Büttiker formalism holds as long as the carriers are coherent (which means the length of the active channel is less than the phase-breaking mean free path) and the transmission functions can be calculated from Schrödinger's equation or approximated by semiclassical approximations, like the WKB approximation. Therefore, even in the case of a perfect ballistic transport, there is a fundamental ballistic conductance which saturates the current of the device with a resistance of approximately 12.9 kΩ per mode (spin degeneracy included). There is, however, a generalization of the Landauer–Büttiker formalism of transport applicable to time-dependent problems in the presence of dissipation.
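A short numerical sketch of the Landauer result, computing the conductance quantum from physical constants; the mode counts and transmission values are illustrative assumptions:

```python
# Landauer conductance: G = G0 * M * T, with G0 = 2e^2/h (spin included).
E_CHARGE = 1.602176634e-19  # elementary charge, C
H_PLANCK = 6.62607015e-34   # Planck constant, J*s

G0 = 2 * E_CHARGE**2 / H_PLANCK
print(f"G0 = {G0 * 1e6:.2f} uS, 1/G0 = {1 / G0 / 1e3:.2f} kOhm")  # ~12.9 kOhm

# Illustrative cases: one mode, two modes (e.g., the two intervalley modes
# of a carbon nanotube; spin is already counted inside G0), and lossy contacts.
for M, T in [(1, 1.0), (2, 1.0), (2, 0.5)]:
    G = G0 * M * T
    print(f"M={M}, T={T}: G = {G * 1e6:.2f} uS, R = {1 / G / 1e3:.2f} kOhm")
```

For M = 1 and T = 1 this reproduces the approximately 12.9 kΩ per-mode resistance quoted above.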
Importance
Ballistic conduction enables use of quantum mechanical properties of electron wave functions. Ballistic transport is coherent in wave mechanics terms. Phenomena like double-slit interference, spatial resonance (and other optical or microwave-like effects) could be exploited in electronic systems at nanoscale in systems including nanowires and nanotubes.
The widely encountered phenomenon of electrical contact resistance, or ECR, arises as an electric current flowing through a rough interface is restricted to a limited number of contact spots. The size and distribution of these contact spots is governed by the topological structures of the contacting surfaces forming the electrical contact. In particular, for surfaces with high fractal dimension, contact spots may be very small. In such cases, when the radius a of the contact spot is smaller than the mean free path of electrons λ, the resistance is dominated by the Sharvin mechanism, in which electrons travel ballistically through these micro-contacts with a resistance of approximately

R_Sharvin ≈ (4λ/3πa²)·(ρ1 + ρ2)/2,

where ρ1 and ρ2 are the specific resistivities of the two contacting surfaces. This term is known as the Sharvin resistance. Electrical contacts resulting in ballistic electron conduction are known as Sharvin contacts. When the radius of a contact spot is larger than the mean free path of electrons, the contact resistance can be treated classically.
Optical analogies
A comparison with light provides an analogy between ballistic and non-ballistic conduction.
Ballistic electrons behave like light in a waveguide or a high-quality optical assembly. Non-ballistic electrons behave like light diffused in milk or reflected off a white wall or a piece of paper.
Electrons can be scattered several ways in a conductor. Electrons have several properties: wavelength (energy), direction, phase, and spin orientation. Different materials have different scattering probabilities which cause different incoherence rates (stochasticity). Some kinds of scattering can only cause a change in electron direction, others can cause energy loss.
Consider a coherent source of electrons connected to a conductor. Over a limited distance, the electron wave function will remain coherent. You still can deterministically predict its behavior (and use it for computation theoretically). After some greater distance, scattering causes each electron to have a slightly different phase and/or direction. But there is still almost no energy loss. Like monochromatic light passing through milk, electrons undergo elastic interactions. Information about the state of the electrons at the input is then lost. Transport becomes statistical and stochastic. From the resistance point of view, stochastic (not oriented) movement of electrons is useless even if they carry the same energy – they move thermally. If the electrons undergo inelastic interactions too, they lose energy and the result is a second mechanism of resistance. Electrons which undergo inelastic interaction are then similar to non-monochromatic light.
For correct usage of this analogy consideration of several facts is needed:
photons are bosons and electrons are fermions;
there is coulombic repulsion between electrons, so this analogy is good only for single-electron conduction, because electron processes are strongly nonlinear and dependent on other electrons;
it is more likely that an electron would lose more energy than a photon would, because of the electron's non-zero rest mass;
electron interactions with the environment, each other, and other particles are generally stronger than interactions with and between photons.
Examples
As mentioned, nanostructures such as carbon nanotubes or graphene nanoribbons are often considered ballistic, but these devices only approximate ballistic conduction; their ballisticity is nearly 0.9 at room temperature.
Carbon nanotubes and graphene nanoribbon
The dominant scattering mechanism at room temperature is that of electrons emitting optical phonons. If electrons don't scatter with enough phonons (for example if the scattering rate is low), the mean free path tends to be very long (on the order of a micrometer). So a nanotube or graphene nanoribbon could be a good ballistic conductor if the electrons in transit don't scatter with too many phonons and if the device is about 100 nm long. Such a transport regime has been found to depend on the nanoribbon edge structure and the electron energy.
Isotopically enriched diamond
Isotopically pure diamond can have a significantly higher thermal conductivity. See List of thermal conductivities.
See also
References
Further reading
Nanoelectronics
Charge carriers
Mesoscopic physics | Ballistic conduction | [
"Physics",
"Materials_science"
] | 2,105 | [
"Physical phenomena",
"Charge carriers",
"Quantum mechanics",
"Electrical phenomena",
"Condensed matter physics",
"Nanoelectronics",
"Nanotechnology",
"Mesoscopic physics"
] |
3,345,298 | https://en.wikipedia.org/wiki/Configuration%20state%20function | In quantum chemistry, a configuration state function (CSF), is a symmetry-adapted linear combination of Slater determinants. A CSF must not be confused with a configuration. In general, one configuration gives rise to several CSFs; all have the same total quantum numbers for spin and spatial parts but differ in their intermediate couplings.
Definition
A configuration state function (CSF) is a symmetry-adapted linear combination of Slater determinants. It is constructed to have the same quantum numbers as the wavefunction, Ψ, of the system being studied. In the method of configuration interaction, the wavefunction can be expressed as a linear combination of CSFs, that is, in the form

Ψ = Σk ck ψk,

where {ψk} denotes the set of CSFs. The coefficients, ck, are found by using the expansion of Ψ to compute a Hamiltonian matrix. When this is diagonalized, the eigenvectors are chosen as the expansion coefficients. CSFs, rather than just Slater determinants, can also be used as a basis in multi-configurational self-consistent field computations.
In atomic structure, a CSF is an eigenstate of
the square of the angular momentum operator, L̂²,
the z-projection of angular momentum, L̂z,
the square of the spin operator, Ŝ², and
the z-projection of the spin operator, Ŝz.
In linear molecules, L̂² does not commute with the Hamiltonian for the system and therefore CSFs are not eigenstates of L̂². However, the z-projection of angular momentum is still a good quantum number and CSFs are constructed to be eigenstates of L̂z, Ŝ² and Ŝz. In non-linear (which implies polyatomic) molecules, neither L̂² nor L̂z commutes with the Hamiltonian. The CSFs are constructed to have the spatial transformation properties of one of the irreducible representations of the point group to which the nuclear framework belongs. This is because the Hamiltonian operator transforms in the same way. Ŝ² and Ŝz are still valid quantum numbers and CSFs are built to be eigenfunctions of these operators.
From configurations to configuration state functions
CSFs are derived from configurations. A configuration is just an assignment of electrons to orbitals. For example, 1s² 2s² (from atomic structure) and 1σg² 1σu² (from molecular structure) are examples of configurations.
From any given configuration we can, in general, create several CSFs. CSFs are therefore sometimes also called N-particle symmetry-adapted basis functions. For a configuration the number of electrons is fixed; let's call this N. When we are creating CSFs from a configuration we have to work with the spin-orbitals associated with the configuration.
For example, given an s orbital in an atom, there are two spin-orbitals associated with it,

s·α and s·β,

where α and β are the one-electron spin-eigenfunctions for spin-up and spin-down respectively. Similarly, for a π orbital in a linear molecule (C∞v point group) there are four spin orbitals:

π₊₁·α, π₊₁·β, π₋₁·α and π₋₁·β.

This is because the π designation corresponds to a z-projection of angular momentum of both +1 and −1.
We can think of the set of spin orbitals as a set of boxes, each of size one; let's say there are M boxes. We distribute the N electrons among the M boxes in all possible ways. Each assignment corresponds to one Slater determinant, D. There can be a great number of these, particularly when N ≪ M. Another way to look at this is to say we have M entities and we wish to select N of them, known as a combination. We need to find all possible combinations. Order of the selection is not significant because we are working with determinants and can interchange rows as required.
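The number of such assignments is the binomial coefficient C(M, N); a one-line sketch:

```python
from math import comb

M = 10  # number of spin orbitals (boxes), e.g. five spatial orbitals x two spins
N = 5   # number of electrons
print(comb(M, N))  # 252 Slater determinants, before any symmetry selection
```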
If we then specify the overall coupling that we wish to achieve for the configuration, we can now select only those Slater determinants that have the required quantum numbers. In order to achieve the required total spin angular momentum (and in the case of atoms the total orbital angular momentum as well), each Slater determinant has to be premultiplied by a coupling coefficient ci, derived ultimately from Clebsch–Gordan coefficients. Thus the CSF is a linear combination

ψ_CSF = Σi ci Di,

where the Di are the selected Slater determinants.
The Löwdin projection operator formalism may be used to find the coefficients. For any given set of determinants it may be possible to find several different sets of coefficients. Each set corresponds to one CSF. In fact, this simply reflects the different internal couplings of total spin and spatial angular momentum.
A genealogical algorithm for CSF construction
At the most fundamental level, a configuration state function can be constructed from a set of orbitals and a number of electrons using the following genealogical algorithm:
distribute the electrons over the set of orbitals giving a configuration
for each orbital the possible quantum number couplings (and therefore wavefunctions for the individual orbitals) are known from basic quantum mechanics; for each orbital choose one of the permitted couplings but leave the z-component of the total spin undefined.
check that the spatial coupling of all orbitals matches that required for the system wavefunction. For a linear molecule this is achieved by a simple linear summation of the coupled λ value for each orbital; for molecules whose nuclear framework transforms according to a point group such as D2h, or one of its sub-groups, the group product table has to be used to find the product of the irreducible representations of all orbitals.
couple the total spins of the orbitals from left to right; this means we have to choose a fixed intermediate total spin after adding each orbital.
test the final total spin and its z-projection against the values required for the system wavefunction
The above steps will need to be repeated many times to elucidate the total set of CSFs that can be derived from the N electrons and the set of orbitals.
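The spin part of this bookkeeping can be sketched on its own. Ignoring the spatial coupling and the Pauli restrictions on equivalent electrons (so this applies to singly-occupied orbitals), counting the genealogical coupling paths for n spin-1/2 electrons gives the number of spin couplings for each total spin S:

```python
from collections import Counter

def spin_couplings(n_electrons):
    """Genealogically couple n spin-1/2 electrons from left to right.

    Returns a Counter mapping total spin S to the number of distinct
    coupling paths (each path is one spin coupling scheme).
    """
    paths = Counter({0.0: 1})
    for _ in range(n_electrons):
        new_paths = Counter()
        for s, count in paths.items():
            # adding one spin-1/2 couples S to S + 1/2, or S - 1/2 if S > 0
            new_paths[s + 0.5] += count
            if s > 0:
                new_paths[s - 0.5] += count
        paths = new_paths
    return paths

print(spin_couplings(4))  # Counter({1.0: 3, 0.0: 2, 2.0: 1})
```

For four singly-occupied orbitals this gives two singlet, three triplet and one quintet coupling, the familiar branching-diagram result.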
Single orbital configurations and wavefunctions
Basic quantum mechanics defines the possible single orbital wavefunctions. In a software implementation, these can be provided either as a table or through a set of logic statements. Alternatively group theory may be used to compute them.
Electrons in a single orbital are called equivalent electrons. They obey the same coupling rules as other electrons but the Pauli exclusion principle makes certain couplings impossible. The Pauli exclusion principle requires that no two electrons in a system can have all their quantum numbers equal. For equivalent electrons, by definition the principal quantum number is identical. In atoms the angular momentum is also identical. So, for equivalent electrons the z components of spin and spatial parts, taken together, must differ.
The following table shows the possible couplings for a σ orbital with one or two electrons.
The situation for orbitals in Abelian point groups mirrors the above table. The next table shows the fifteen possible couplings for a π orbital.
The δ and φ orbitals also each generate fifteen possible couplings, all of which can be easily inferred from this table.
Similar tables can be constructed for atomic systems, which transform according to the point group of the sphere, that is for s, p, d and f orbitals. The number of term symbols, and therefore possible couplings, is significantly larger in the atomic case.
Computer software for CSF generation
Computer programs are readily available to generate CSFs for atoms, for molecules, and for electron and positron scattering by molecules. A popular computational method for CSF construction is the graphical unitary group approach (GUGA).
References
Quantum chemistry | Configuration state function | [
"Physics",
"Chemistry"
] | 1,449 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
15,272,567 | https://en.wikipedia.org/wiki/Yangian | In representation theory, a Yangian is an infinite-dimensional Hopf algebra, a type of a quantum group. Yangians first appeared in physics in the work of Ludvig Faddeev and his school in the late 1970s and early 1980s concerning the quantum inverse scattering method. The name Yangian was introduced by Vladimir Drinfeld in 1985 in honor of C.N. Yang.
Initially, they were considered a convenient tool to generate the solutions of the quantum Yang–Baxter equation.
The center of the Yangian can be described by the quantum determinant.
The Yangian is a degeneration of the quantum loop algebra (i.e. the quantum affine algebra at vanishing central charge).
Description
For any finite-dimensional semisimple Lie algebra a, Drinfeld defined an infinite-dimensional Hopf algebra Y(a), called the Yangian of a. This Hopf algebra is a deformation of the universal enveloping algebra U(a[z]) of the Lie algebra of polynomial loops of a given by explicit generators and relations. The relations can be encoded by identities involving a rational R-matrix. Replacing it with a trigonometric R-matrix, one arrives at affine quantum groups, defined in the same paper of Drinfeld.
In the case of the general linear Lie algebra glN, the Yangian admits a simpler description in terms of a single ternary (or RTT) relation on the matrix generators due to Faddeev and coauthors.
The Yangian Y(glN) is defined to be the algebra generated by elements t_ij^(p) with 1 ≤ i, j ≤ N and p ≥ 0, subject to the relations

[t_ij^(p+1), t_kl^(q)] − [t_ij^(p), t_kl^(q+1)] = t_kj^(p) t_il^(q) − t_kj^(q) t_il^(p),

with the convention t_ij^(−1) = δij. Defining t_ij(z) = δij + Σ(p≥0) t_ij^(p) z^(−p−1), setting T(z) = (t_ij(z)), and introducing the R-matrix R(z) = I + z⁻¹P on C^N ⊗ C^N,
where P is the operator permuting the tensor factors, the above relations can be written more simply as the ternary relation:

R(z − w) T₁(z) T₂(w) = T₂(w) T₁(z) R(z − w).

The Yangian becomes a Hopf algebra with comultiplication Δ, counit ε and antipode s given by

Δ(t_ij(z)) = Σk t_ik(z) ⊗ t_kj(z),  ε(t_ij(z)) = δij,  s(T(z)) = T(z)⁻¹.

At special values of the spectral parameter, the R-matrix degenerates to a rank-one projection. This can be used to define the quantum determinant of T(z), which generates the center of the Yangian.
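As a concrete check on the rational R-matrix that encodes these relations, the following sketch verifies numerically that R(z) = I + z⁻¹P satisfies the quantum Yang–Baxter equation R12(u − v) R13(u) R23(v) = R23(v) R13(u) R12(u − v) on C^N ⊗ C^N ⊗ C^N (here N = 2, and the spectral-parameter values are arbitrary test points):

```python
import numpy as np

N = 2
I_N = np.eye(N)

# Permutation operator P on C^N (x) C^N: P(x (x) y) = y (x) x
P = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        P[i * N + j, j * N + i] = 1.0

def R(z):
    """Yang's rational R-matrix R(z) = I + P/z."""
    return np.eye(N * N) + P / z

# Embeddings into the three-fold tensor product
def R12(z):
    return np.kron(R(z), I_N)

def R23(z):
    return np.kron(I_N, R(z))

P23 = np.kron(I_N, P)  # swaps tensor factors 2 and 3

def R13(z):
    # conjugating R12 by the (2,3) swap moves it to factors 1 and 3
    return P23 @ R12(z) @ P23

u, v = 1.7, 0.4  # arbitrary spectral parameters
lhs = R12(u - v) @ R13(u) @ R23(v)
rhs = R23(v) @ R13(u) @ R12(u - v)
print(np.allclose(lhs, rhs))  # True: the Yang-Baxter equation holds
```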
The twisted Yangian Y⁻(gl2N), introduced by G. I. Olshansky, is the co-ideal generated by the coefficients of

S(z) = T(z) σ(T(−z)),

where σ is an involution of gl2N.
Applications
Classical representation theory
G. I. Olshansky and I. Cherednik discovered that the Yangian of glN is closely related with the branching properties of irreducible finite-dimensional representations of general linear algebras. In particular, the classical Gelfand–Tsetlin construction of a basis in the space of such a representation has a natural interpretation in the language of Yangians, studied by M. Nazarov and V. Tarasov. Olshansky, Nazarov and Molev later discovered a generalization of this theory to other classical Lie algebras, based on the twisted Yangian.
Physics
The Yangian appears as a symmetry group in different models in physics.
Yangian appears as a symmetry group of one-dimensional exactly solvable models such as spin chains, Hubbard model and in models of one-dimensional relativistic quantum field theory.
The most famous occurrence is in planar N = 4 supersymmetric Yang–Mills theory in four dimensions, where Yangian structures appear on the level of symmetries of operators and scattering amplitudes, as was discovered by Drummond, Henn and Plefka.
Representation theory
Irreducible finite-dimensional representations of Yangians were parametrized by Drinfeld in a way similar to the highest weight theory in the representation theory of semisimple Lie algebras. The role of the highest weight is played by a finite set of Drinfeld polynomials. Drinfeld also discovered a generalization of the classical Schur–Weyl duality between representations of general linear and symmetric groups that involves the Yangian of slN and the degenerate affine Hecke algebra (graded Hecke algebra of type A, in George Lusztig's terminology).
Representations of Yangians have been extensively studied, but the theory is still under active development.
See also
Quantum affine algebra
Notes
References
Yang Chen-Ning
Representation theory
Quantum groups
Exactly solvable models | Yangian | [
"Mathematics"
] | 907 | [
"Representation theory",
"Fields of abstract algebra"
] |
712,450 | https://en.wikipedia.org/wiki/Quantum%20statistical%20mechanics | Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics.
Expectation
From classical probability theory, we know that the expectation of a random variable X is defined by its distribution DX by

E[X] = ∫R λ dDX(λ),

assuming, of course, that the random variable is integrable or that the random variable is non-negative. Similarly, let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A, defined by

A = ∫R λ dEA(λ),

uniquely determines A and conversely, is uniquely determined by A. EA is a Boolean homomorphism from the Borel subsets of R into the lattice Q of self-adjoint projections of H. In analogy with probability theory, given a state S, we introduce the distribution of A under S, which is the probability measure defined on the Borel subsets of R by

DA(U) = Tr(EA(U) S).

Similarly, the expected value of A is defined in terms of the probability distribution DA by

E[A] = ∫R λ dDA(λ).
Note that this expectation is relative to the mixed state S which is used in the definition of DA.
Remark. For technical reasons, one needs to consider separately the positive and negative parts of A defined by the Borel functional calculus for unbounded operators.
One can easily show that

E[A] = Tr(A S) = Tr(S A).

The trace of an operator A is written as follows:

Tr(A) = Σm ⟨m| A |m⟩,

the sum running over any orthonormal basis {|m⟩}. Note that if S is a pure state corresponding to the vector ψ, then

E[A] = ⟨ψ| A |ψ⟩.
Von Neumann entropy
Of particular significance for describing randomness of a state is the von Neumann entropy of S, formally defined by

H(S) = −Tr(S log₂ S).

Actually, the operator S log₂ S is not necessarily trace-class. However, if S is a non-negative self-adjoint operator not of trace class we define Tr(S) = +∞. Also note that any density operator S can be diagonalized, that is, it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form

diag(λ₁, λ₂, ...),

and we define

H(S) = −Σi λi log₂ λi.

The convention is that 0·log₂ 0 = 0, since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is, in [0, ∞]) and this is clearly a unitary invariant of S.
Remark. It is indeed possible that H(S) = +∞ for some density operator S. For instance, let T be a diagonal matrix whose n-th diagonal entry (n ≥ 2) is proportional to 1/(n (log₂ n)²), normalized to unit trace. Then T is non-negative trace-class, and one can show T log₂ T is not trace-class.
Theorem. Entropy is a unitary invariant.
In analogy with classical entropy (notice the similarity in the definitions), H(S) measures the amount of randomness in the state S. The more dispersed the eigenvalues are, the larger the system entropy. For a system in which the space H is finite-dimensional, say of dimension n, entropy is maximized for the state S which in diagonal form has the representation

diag(1/n, 1/n, ..., 1/n).

For such an S, H(S) = log₂ n. The state S is called the maximally mixed state.
Recall that a pure state is one of the form

S = |ψ⟩⟨ψ|,

for ψ a vector of norm 1.
Theorem. H(S) = 0 if and only if S is a pure state.
For S is a pure state if and only if its diagonal form has exactly one non-zero entry which is a 1.
Entropy can be used as a measure of quantum entanglement.
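A small numerical sketch of these statements, computing H(S) from the eigenvalues of a density matrix for a pure state, the maximally mixed state, and an intermediate mixture:

```python
import numpy as np

def von_neumann_entropy(S):
    """H(S) = -Tr(S log2 S), computed from the eigenvalues of S."""
    eigvals = np.linalg.eigvalsh(S)
    eigvals = eigvals[eigvals > 1e-12]  # convention: 0 log 0 = 0
    return float(-np.sum(eigvals * np.log2(eigvals)))

n = 4
psi = np.zeros(n)
psi[0] = 1.0
pure = np.outer(psi, psi.conj())   # S = |psi><psi|
maximally_mixed = np.eye(n) / n    # S = diag(1/n, ..., 1/n)
mixture = 0.5 * pure + 0.5 * maximally_mixed

print(von_neumann_entropy(pure))             # 0.0 (pure state)
print(von_neumann_entropy(maximally_mixed))  # 2.0 = log2(4)
print(von_neumann_entropy(mixture))          # strictly between 0 and 2
```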
Gibbs canonical ensemble
Consider an ensemble of systems described by a Hamiltonian H with average energy E. If H has pure-point spectrum and the eigenvalues of H go to +∞ sufficiently fast, e^(−rH) will be a non-negative trace-class operator for every positive r.
The Gibbs canonical ensemble is described by the state

S = e^(−βH) / Tr(e^(−βH)),

where β is such that the ensemble average of energy satisfies

Tr(S H) = E,

and

Z(β) = Tr(e^(−βH)).

This is called the partition function; it is the quantum mechanical version of the canonical partition function of classical statistical mechanics. The probability that a system chosen at random from the ensemble will be in a state corresponding to energy eigenvalue Em is

pm = e^(−βEm) / Z(β).
Under certain conditions, the Gibbs canonical ensemble maximizes the von Neumann entropy of the state subject to the energy conservation requirement.
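A minimal sketch constructing the Gibbs state for a small Hermitian Hamiltonian via its eigendecomposition (the three-level Hamiltonian and the value of β are arbitrary illustrative choices):

```python
import numpy as np

# Illustrative 3-level Hamiltonian (diagonal for simplicity); energies are
# in units where beta = 1.
H = np.diag([0.0, 1.0, 2.0])
beta = 1.0

# Gibbs state S = exp(-beta H) / Tr(exp(-beta H)), built from eigenpairs.
energies, U = np.linalg.eigh(H)
weights = np.exp(-beta * energies)
Z = weights.sum()                      # partition function Z(beta)
S = (U * (weights / Z)) @ U.conj().T   # U diag(w/Z) U^dagger

print(np.trace(S))      # 1.0: S is a valid density operator
print(np.trace(S @ H))  # the ensemble average energy E
print(weights / Z)      # occupation probabilities p_m = exp(-beta E_m)/Z
```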
Grand canonical ensemble
For open systems where the energy and numbers of particles may fluctuate, the system is described by the grand canonical ensemble, described by the density matrix

ρ = e^(β(μ₁N₁ + μ₂N₂ + ··· − H)) / Tr(e^(β(μ₁N₁ + μ₂N₂ + ··· − H))),

where the N₁, N₂, ... are the particle number operators for the different species of particles that are exchanged with the reservoir, and the μ₁, μ₂, ... are the corresponding chemical potentials. Note that this is a density matrix including many more states (of varying N) compared to the canonical ensemble.

The grand partition function is

𝒵(β, μ₁, μ₂, ...) = Tr(e^(β(μ₁N₁ + μ₂N₂ + ··· − H))).
See also
Quantum thermodynamics
Thermal quantum field theory
Stochastic thermodynamics
Abstract Wiener space
References
J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955.
F. Reif, Statistical and Thermal Physics, McGraw-Hill, 1965.
Quantum mechanical entropy | Quantum statistical mechanics | [
"Physics"
] | 1,020 | [
"Quantum mechanical entropy",
"Entropy",
"Physical quantities"
] |
714,053 | https://en.wikipedia.org/wiki/Transduction%20%28genetics%29 | Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector. An example is the viral transfer of DNA from one bacterium to another and hence an example of horizontal gene transfer. Transduction does not require physical contact between the cell donating the DNA and the cell receiving the DNA (which occurs in conjugation), and it is DNase resistant (transformation is susceptible to DNase). Transduction is a common tool used by molecular biologists to stably introduce a foreign gene into a host cell's genome (both bacterial and mammalian cells).
Discovery (bacterial transduction)
Transduction was discovered in Salmonella by Norton Zinder and Joshua Lederberg at the University of Wisconsin–Madison in 1952.
In the lytic and lysogenic cycles
Transduction happens through either the lytic cycle or the lysogenic cycle.
When bacteriophages (viruses that infect bacteria) that are lytic infect bacterial cells, they harness the replicational, transcriptional, and translation machinery of the host bacterial cell to make new viral particles (virions). The new phage particles are then released by lysis of the host. In the lysogenic cycle, the phage chromosome is integrated as a prophage into the bacterial chromosome, where it can stay dormant for extended periods of time. If the prophage is induced (by UV light for example), the phage genome is excised from the bacterial chromosome and initiates the lytic cycle, which culminates in lysis of the cell and the release of phage particles. Generalized transduction (see below) occurs in both cycles during the lytic stage, while specialized transduction (see below) occurs when a prophage is excised in the lysogenic cycle.
As a method for transferring genetic material
Transduction by bacteriophages
The packaging of bacteriophage DNA into phage capsids has low fidelity. Small pieces of bacterial DNA may be packaged into the bacteriophage particles. There are two ways that this can lead to transduction.
Generalized transduction
Generalized transduction occurs when random pieces of bacterial DNA are packaged into a phage. It happens when a phage is in the lytic stage, at the moment that the viral DNA is packaged into phage heads. If the virus replicates using 'headful packaging', it attempts to fill the head with genetic material. If the viral genome results in spare capacity, viral packaging mechanisms may incorporate bacterial genetic material into the new virion. Alternatively, generalized transduction may occur via recombination. Generalized transduction is a rare event and occurs on the order of 1 phage in 11,000.
The new virus capsule, which contains partly bacterial DNA, then infects another bacterial cell. When the bacterial DNA packaged into the virus is inserted into the recipient cell, three things can happen to it:
The DNA is recycled for spare parts.
If the DNA was originally a plasmid, it will re-circularize inside the new cell and become a plasmid again.
If the new DNA matches with a homologous region of the recipient cell's chromosome, it will exchange DNA material similar to the actions in bacterial recombination.
Specialized transduction
Specialized transduction is the process by which a restricted set of bacterial genes is transferred to another bacterium. Those genes that are located adjacent to the prophage are transferred due to improper excision. Specialized transduction occurs when a prophage excises imprecisely from the chromosome so that bacterial genes lying adjacent to it are included in the excised DNA. The excised DNA, along with the viral DNA, is then packaged into a new virus particle, which is then delivered to a new bacterium when the phage attacks it. Here, the donor genes can be inserted into the recipient chromosome or remain in the cytoplasm, depending on the nature of the bacteriophage.
An example of specialized transduction is λ phage in Escherichia coli.
Lateral transduction
Lateral transduction is the process by which very long fragments of bacterial DNA are transferred to another bacterium. So far, this form of transduction has been described only in Staphylococcus aureus, but it can transfer more genes and at higher frequencies than generalized and specialized transduction. In lateral transduction, the prophage starts its replication in situ before excision, in a process that leads to replication of the adjacent bacterial DNA. After this, packaging of the replicated phage from its pac site (located around the middle of the phage genome) and of adjacent bacterial genes occurs in situ, up to 105% of a phage genome size. Successive packaging events after initiation from the original pac site lead to several kilobases of bacterial genes being packaged into new viral particles that are transferred to new bacterial strains. If the transferred genetic material in these transducing particles provides sufficient DNA for homologous recombination, the genetic material will be inserted into the recipient chromosome.
Because multiple copies of the phage genome are produced during in situ replication, some of these replicated prophages excise normally (instead of being packaged in situ), producing normal infectious phages.
Mammalian cell transduction with viral vectors
Transduction with viral vectors can be used to insert or modify genes in mammalian cells. It is often used as a tool in basic research and is actively researched as a potential means for gene therapy.
Process
In these cases, a plasmid is constructed in which the genes to be transferred are flanked by viral sequences that are used by viral proteins to recognize and package the viral genome into viral particles. This plasmid is inserted (usually by transfection) into a producer cell together with other plasmids (DNA constructs) that carry the viral genes required for the formation of infectious virions. In these producer cells, the viral proteins expressed by these packaging constructs bind the sequences on the DNA/RNA (depending on the type of viral vector) to be transferred and insert it into viral particles. For safety, none of the plasmids used contains all the sequences required for virus formation, so that simultaneous transfection of multiple plasmids is required to get infectious virions. Moreover, only the plasmid carrying the sequences to be transferred contains signals that allow the genetic materials to be packaged in virions so that none of the genes encoding viral proteins are packaged. Viruses collected from these cells are then applied to the cells to be altered. The initial stages of these infections mimic infection with natural viruses and lead to expression of the genes transferred and (in the case of lentivirus/retrovirus vectors) insertion of the DNA to be transferred into the cellular genome. However, since the transferred genetic material does not encode any of the viral genes, these infections do not generate new viruses (the viruses are "replication-deficient").
Some enhancers, such as polybrene, protamine sulfate, RetroNectin, and DEAE-dextran, have been used to improve transduction efficiency.
Medical applications
Gene therapy: Correcting genetic diseases by direct modification of genetic error.
See also
Electroporation – use of an electrical field to increase cell membrane permeability.
Phage therapy – therapeutic use of bacteriophages.
Transfection – means of inserting DNA into a cell.
Transformation (genetics) – means of inserting DNA into a cell.
Viral vector – commonly used tool to deliver genetic material into cells.
References
External links
Overview at ncbi.nlm.nih.gov
http://www.med.umich.edu/vcore/protocols/RetroviralCellScreenInfection13FEB2006.pdf (transduction protocol)
Generalized and Specialized transduction at sdsu.edu
Bacteriology
Bacteriophages
Modification of genetic information
Molecular biology
Virology | Transduction (genetics) | [
"Chemistry",
"Biology"
] | 1,637 | [
"Modification of genetic information",
"Molecular genetics",
"Biochemistry",
"Molecular biology"
] |
714,163 | https://en.wikipedia.org/wiki/Cross-correlation | In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
In probability and statistics, the term cross-correlations refers to the correlations between the entries of two random vectors X and Y, while the correlations of a random vector X are the correlations between the entries of X itself, those forming the correlation matrix of X. If each of X and Y is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of X are known as autocorrelations of X, and the cross-correlations of X with Y across time are temporal cross-correlations. In probability and statistics, the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1.
If X and Y are two independent random variables with probability density functions f and g, respectively, then the probability density of the difference Y − X is formally given by the cross-correlation (in the signal-processing sense) f ⋆ g; however, this terminology is not used in probability and statistics. In contrast, the convolution f ∗ g (equivalent to the cross-correlation of f(t) and g(−t)) gives the probability density function of the sum X + Y.
Cross-correlation of deterministic signals
For continuous functions f and g, the cross-correlation is defined as:

(f ⋆ g)(τ) = ∫ f*(t) g(t + τ) dt,

which is equivalent to

(f ⋆ g)(τ) = ∫ f*(t − τ) g(t) dt,

with both integrals taken over the whole real line, where f*(t) denotes the complex conjugate of f(t), and τ is called displacement or lag.
For highly-correlated f and g which have a maximum cross-correlation at a particular τ_delay, a feature in f at t also occurs later in g at t + τ_delay, hence g could be described to lag f by τ_delay.
If f and g are both continuous periodic functions of period T, the integration from −∞ to ∞ is replaced by integration over any interval [t₀, t₀ + T] of length T:

(f ⋆ g)(τ) = ∫ over [t₀, t₀ + T] of f*(t) g(t + τ) dt,

which is equivalent to

(f ⋆ g)(τ) = ∫ over [t₀, t₀ + T] of f*(t − τ) g(t) dt.

Similarly, for discrete functions, the cross-correlation is defined as:

(f ⋆ g)[n] = Σm f*[m] g[m + n],

which is equivalent to:

(f ⋆ g)[n] = Σm f*[m − n] g[m],

with the sums running over all integers m. For finite discrete functions f, g ∈ C^N, the (circular) cross-correlation is defined as:

(f ⋆ g)[n] = Σ (m = 0 to N−1) f*[m] g[(m + n) mod N],

which is equivalent to:

(f ⋆ g)[n] = Σ (m = 0 to N−1) f*[(m − n) mod N] g[m].

For finite discrete functions f ∈ C^N, g ∈ C^M, the kernel cross-correlation is defined as:

(f ⋆ g)[n] = κ(f, Tn(g)),

where κ is a vector of kernel functions and Tn(·) is an affine transform.
Specifically, can be circular translation transform, rotation transform, or scale transform, etc. The kernel cross-correlation extends cross-correlation from linear space to kernel space. Cross-correlation is equivariant to translation; kernel cross-correlation is equivariant to any affine transforms, including translation, rotation, and scale, etc.
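As a concrete reference for the discrete and circular definitions above, here is a minimal sketch (the sequences are illustrative; note that np.correlate conjugates its second argument, matching the convention used here):

```python
# Minimal sketch: discrete (linear) and circular cross-correlation in NumPy.
import numpy as np

def xcorr(f, g):
    """(f * g)[n] = sum_m conj(f[m]) g[m+n]; lags run -(len(f)-1)..len(g)-1."""
    return np.correlate(g, f, mode="full")  # np.correlate conjugates its 2nd arg

def circular_xcorr(f, g):
    """(f * g)[n] = sum_m conj(f[m]) g[(m+n) mod N], computed via the FFT."""
    return np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
print(xcorr(f, g))                  # [0.  3.  3.5 2.  0.5] for lags -2..2
print(circular_xcorr(f, g).real)    # [3.5 2.  3.5]
```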
Explanation
As an example, consider two real-valued functions f and g differing only by an unknown shift along the x-axis. One can use the cross-correlation to find how much g must be shifted along the x-axis to make it identical to f. The formula essentially slides the g function along the x-axis, calculating the integral of their product at each position. When the functions match, the value of (f ⋆ g) is maximized. This is because when peaks (positive areas) are aligned, they make a large contribution to the integral. Similarly, when troughs (negative areas) align, they also make a positive contribution to the integral because the product of two negative numbers is positive.
With complex-valued functions f and g, taking the conjugate of f ensures that aligned peaks (or aligned troughs) with imaginary components will contribute positively to the integral.
In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation.
Properties
Cross-correlation of random vectors
Definition
For random vectors X = (X1, …, Xm) and Y = (Y1, …, Yn), each containing random elements whose expected value and variance exist, the cross-correlation matrix of X and Y is defined by
$R_{XY} = \operatorname{E}[X Y^{\mathsf{T}}]$
and has dimensions m × n. Written component-wise:
$(R_{XY})_{ij} = \operatorname{E}[X_i Y_j]$
The random vectors X and Y need not have the same dimension, and either might be a scalar value.
Here E is the expected value operator.
Example
For example, if X = (X1, X2, X3) and Y = (Y1, Y2) are random vectors, then R_XY is a 3 × 2 matrix whose (i, j)-th entry is E[X_i Y_j].
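Such a matrix is straightforward to estimate empirically. The sketch below (synthetic, illustrative data) approximates R_XY = E[X Yᵀ] by averaging outer products over many samples:

```python
# Minimal sketch (synthetic data): estimating the cross-correlation matrix
# R_XY = E[X Y^T] of a 3-vector X and a correlated 2-vector Y from samples.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000
X = rng.normal(size=(n_samples, 3))                    # rows are realizations of X
Y = X[:, :2] + 0.1 * rng.normal(size=(n_samples, 2))   # Y correlated with X1, X2

R_XY = X.T @ Y / n_samples   # 3 x 2 matrix of sample means of X_i * Y_j
print(R_XY)                  # close to [[1, 0], [0, 1], [0, 0]]
```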
Definition for complex random vectors
If Z and W are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of Z and W is defined by
$R_{ZW} = \operatorname{E}[Z W^{\mathsf{H}}]$
where H denotes Hermitian transposition.
Cross-correlation of stochastic processes
In time series analysis and statistics, the cross-correlation of a pair of random processes is the correlation between values of the processes at different times, as a function of the two times. Let (X_t, Y_t) be a pair of random processes, and t be any point in time (t may be an integer for a discrete-time process or a real number for a continuous-time process). Then X_t is the value (or realization) produced by a given run of the process at time t.
Cross-correlation function
Suppose that the processes have means μ_X(t) and μ_Y(t) and variances σ_X²(t) and σ_Y²(t) at time t, for each t. Then the definition of the cross-correlation between times t1 and t2 is
$R_{XY}(t_1, t_2) = \operatorname{E}[X_{t_1} \overline{Y_{t_2}}]$
where E is the expected value operator. Note that this expression may be not defined.
Cross-covariance function
Subtracting the mean before multiplication yields the cross-covariance between times t1 and t2:
$K_{XY}(t_1, t_2) = \operatorname{E}\big[(X_{t_1} - \mu_X(t_1))\, \overline{(Y_{t_2} - \mu_Y(t_2))}\big]$
Note that this expression is not well-defined for all time series or processes, because the mean or variance may not exist.
Definition for wide-sense stationary stochastic process
Let (X_t, Y_t) represent a pair of stochastic processes that are jointly wide-sense stationary. Then the cross-covariance function and the cross-correlation function are given as follows.
Cross-correlation function
$R_{XY}(\tau) = \operatorname{E}[X_t \overline{Y_{t+\tau}}]$ or equivalently $R_{XY}(\tau) = \operatorname{E}[X_{t-\tau} \overline{Y_t}]$
Cross-covariance function
$K_{XY}(\tau) = \operatorname{E}\big[(X_t - \mu_X)\, \overline{(Y_{t+\tau} - \mu_Y)}\big]$ or equivalently $K_{XY}(\tau) = \operatorname{E}\big[(X_{t-\tau} - \mu_X)\, \overline{(Y_t - \mu_Y)}\big]$, where μ_X and σ_X are the mean and standard deviation of the process (X_t), which are constant over time due to stationarity; and similarly for (Y_t), respectively. E indicates the expected value. That the cross-covariance and cross-correlation are independent of t is precisely the additional information (beyond being individually wide-sense stationary) conveyed by the requirement that (X_t, Y_t) are jointly wide-sense stationary.
The cross-correlation of a pair of jointly wide sense stationary stochastic processes can be estimated by averaging the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true cross-correlation.
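The following minimal sketch (toy signals and parameters, assumed for illustration) estimates the cross-correlation of two jointly WSS processes by averaging lagged sample products, and uses the argmax of the estimate to recover a time delay, anticipating the "Time delay analysis" section below:

```python
# Minimal sketch: estimate R_xy(tau) by averaging products of lagged samples,
# then read off the delay between the two signals from the argmax.
import numpy as np

rng = np.random.default_rng(1)
n, true_delay = 4096, 25
x = rng.normal(size=n)
y = np.roll(x, true_delay) + 0.5 * rng.normal(size=n)  # delayed, noisy copy of x

max_lag = 100
lags = np.arange(-max_lag, max_lag + 1)
# biased estimator: R_xy(k) ~ mean over t of x[t] * y[t + k]
r = np.array([np.mean(x[:n - k] * y[k:]) if k >= 0
              else np.mean(x[-k:] * y[:n + k]) for k in lags])
print(lags[np.argmax(r)])  # ~ 25, the imposed delay
```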
Normalization
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the cross-correlation function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "cross-correlation" and "cross-covariance" are used interchangeably.
The definition of the normalized cross-correlation of a stochastic process is
$\rho_{XX}(t_1, t_2) = \frac{K_{XX}(t_1, t_2)}{\sigma_X(t_1)\, \sigma_X(t_2)}$
If the function ρ_XX is well-defined, its value must lie in the range [−1, 1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For jointly wide-sense stationary stochastic processes, the definition is
$\rho_{XY}(\tau) = \frac{K_{XY}(\tau)}{\sigma_X\, \sigma_Y}$
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
Properties
Symmetry property
For jointly wide-sense stationary stochastic processes, the cross-correlation function has the following symmetry property:
$R_{XY}(t_1, t_2) = \overline{R_{YX}(t_2, t_1)}$
Respectively for jointly WSS processes:
$R_{XY}(\tau) = \overline{R_{YX}(-\tau)}$
Time delay analysis
Cross-correlations are useful for determining the time delay between two signals, e.g., for determining time delays for the propagation of acoustic signals across a microphone array. After calculating the cross-correlation between the two signals, the maximum (or minimum if the signals are negatively correlated) of the cross-correlation function indicates the point in time where the signals are best aligned; i.e., the time delay between the two signals is determined by the argument of the maximum, or arg max of the cross-correlation, as in
$\tau_{\mathrm{delay}} = \operatorname{arg\,max}_{t \in \mathbb{R}} \big((f \star g)(t)\big)$
Terminology in image processing
Zero-normalized cross-correlation (ZNCC)
For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can be first normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. That is, the cross-correlation of a template t(x, y) with a subimage f(x, y) is
$\frac{1}{n} \sum_{x,y} \frac{1}{\sigma_f \sigma_t} \big(f(x,y) - \mu_f\big)\big(t(x,y) - \mu_t\big)$
where n is the number of pixels in t(x, y) and f(x, y), μ_f is the average of f and σ_f is the standard deviation of f.
In functional analysis terms, this can be thought of as the dot product of two normalized vectors. That is, if
$F(x,y) = f(x,y) - \mu_f$ and $T(x,y) = t(x,y) - \mu_t$
then the above sum is equal to
$\left\langle \frac{F}{\|F\|}, \frac{T}{\|T\|} \right\rangle$
where ⟨·,·⟩ is the inner product and ‖·‖ is the L² norm. Cauchy–Schwarz then implies that ZNCC has a range of [−1, 1].
Thus, if f and t are real matrices, their normalized cross-correlation equals the cosine of the angle between the unit vectors F and T, being thus 1 if and only if F equals T multiplied by a positive scalar.
Normalized correlation is one of the methods used for template matching, a process used for finding instances of a pattern or object within an image. It is also the 2-dimensional version of Pearson product-moment correlation coefficient.
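As a concrete sketch of template matching with ZNCC (all data and names below are illustrative; a brute-force loop is used for clarity rather than speed):

```python
# Minimal sketch: ZNCC template matching over every 'valid' subimage position.
import numpy as np

def zncc_map(image, template):
    """Zero-normalized cross-correlation of a template with each subimage."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()   # normalize template once
    n = template.size
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            out[i, j] = np.sum((patch - patch.mean()) / patch.std() * t) / n
    return out  # values in [-1, 1]; the argmax locates the best match

rng = np.random.default_rng(2)
img = rng.random((64, 64))
tmpl = img[20:28, 30:38].copy() * 1.7 + 0.3   # brightness/contrast change
score = zncc_map(img, tmpl)
print(np.unravel_index(score.argmax(), score.shape))  # (20, 30)
```

Because ZNCC subtracts the mean and divides by the standard deviation, the score at the true location stays 1 even after the affine brightness change applied to the template.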
Normalized cross-correlation (NCC)
NCC is similar to ZNCC with the only difference of not subtracting the local mean value of intensities:
$\frac{1}{n} \sum_{x,y} \frac{1}{\sigma_f \sigma_t}\, f(x,y)\, t(x,y)$
Nonlinear systems
Caution must be applied when using the cross-correlation function, which assumes Gaussian variance, for nonlinear systems. In certain circumstances, which depend on the properties of the input, cross-correlation between the input and output of a system with nonlinear dynamics can be completely blind to certain nonlinear effects. This problem arises because some quadratic moments can equal zero, and this can incorrectly suggest that there is little "correlation" (in the sense of statistical dependence) between two signals, when in fact the two signals are strongly related by nonlinear dynamics.
See also
Autocorrelation
Autocovariance
Coherence
Convolution
Correlation
Correlation function
Cross-correlation matrix
Cross-covariance
Cross-spectrum
Digital image correlation
Phase correlation
Scaled correlation
Spectral density
Wiener–Khinchin theorem
References
Further reading
External links
Cross Correlation from Mathworld
http://scribblethink.org/Work/nvisionInterface/nip.html
http://www.staff.ncl.ac.uk/oliver.hinton/eee305/Chapter6.pdf
Bilinear maps
Covariance and correlation
Signal processing
Time domain analysis | Cross-correlation | [
"Technology",
"Engineering"
] | 2,215 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
714,434 | https://en.wikipedia.org/wiki/Histopathology | Histopathology (compound of three Greek words: histos 'tissue', pathos 'suffering', and -logia 'study of') is the microscopic examination of tissue in order to study the manifestations of disease. Specifically, in clinical medicine, histopathology refers to the examination of a biopsy or surgical specimen by a pathologist, after the specimen has been processed and histological sections have been placed onto glass slides. In contrast, cytopathology examines free cells or tissue micro-fragments (as "cell blocks").
Collection of tissues
Histopathological examination of tissues starts with surgery, biopsy, or autopsy. The tissue is removed from the body or plant, and then, often following expert dissection in the fresh state, placed in a fixative which stabilizes the tissues to prevent decay. The most common fixative is 10% neutral buffered formalin (corresponding to 3.7% w/v formaldehyde in neutral buffered water, such as phosphate buffered saline).
Preparation for histology
The tissue is then prepared for viewing under a microscope using either chemical fixation or frozen section.
If a large sample is provided, e.g. from a surgical procedure, then a pathologist looks at the tissue sample and selects the part most likely to yield a useful and accurate diagnosis; this part is removed for examination in a process commonly known as grossing or cut up. Larger samples are cut to correctly situate their anatomical structures in the cassette. Certain specimens (especially biopsies) can undergo agar pre-embedding to assure correct tissue orientation in the cassette, then in the block, then on the diagnostic microscopy slide. The sample is then placed into a plastic cassette for most of the rest of the process.
Chemical fixation
In addition to formalin, other chemical fixatives have been used. But, with the advent of immunohistochemistry (IHC) staining and diagnostic molecular pathology testing on these specimen samples, formalin has become the standard chemical fixative in human diagnostic histopathology. Fixation times for very small specimens are shorter, and standards exist in human diagnostic histopathology.
Processing
Water is removed from the sample in successive stages by the use of increasing concentrations of alcohol. Xylene is used in the last dehydration phase instead of alcohol because the wax used in the next stage is soluble in xylene but not in alcohol, allowing wax to permeate (infiltrate) the specimen. This process is generally automated and done overnight. The wax-infiltrated specimen is then transferred to an individual specimen embedding (usually metal) container. Finally, molten wax is introduced around the specimen in the container and cooled to solidification so as to embed it in the wax block. This process is needed to provide a properly oriented sample sturdy enough for obtaining the thin microtome sections for the slide.
Once the wax embedded block is finished, sections will be cut from it and usually placed to float on a water bath surface which spreads the section out. This is usually done by hand and is a skilled job (histotechnologist) with the lab personnel making choices about which parts of the specimen microtome wax ribbon to place on slides. A number of slides will usually be prepared from different levels throughout the block. After this the thin section mounted slide is stained and a protective cover slip is mounted on it. For common stains, an automatic process is normally used; but rarely used stains are often done by hand.
Frozen section processing
An initial evaluation of a suspected lymphoma is to make a "touch prep" wherein a glass slide is lightly pressed against excised lymphoid tissue, and subsequently stained (usually H&E stain) for evaluation under light microscopy.
The second method of histology processing is called frozen section processing. This is a highly technical scientific method performed by a trained histoscientist. In this method, the tissue is frozen and sliced thinly using a microtome mounted in a below-freezing refrigeration device called the cryostat. The thin frozen sections are mounted on a glass slide, fixed immediately and briefly in liquid fixative, and stained using staining techniques similar to those for traditional wax-embedded sections. The advantages of this method are rapid processing time, lower equipment requirements, and less need for ventilation in the laboratory. The disadvantage is the poor quality of the final slide. It is used in intra-operative pathology for determinations that might help in choosing the next step in surgery during that surgical session (for example, to preliminarily determine clearness of the resection margin of a tumor during surgery).
Staining of processed histology slides
This can be done to slides processed by the chemical fixation or frozen section slides. To see the tissue under a microscope, the sections are stained with one or more pigments. The aim of staining is to reveal cellular components; counterstains are used to provide contrast.
The most commonly used stain in histology is a combination of hematoxylin and eosin (often abbreviated H&E). Hematoxylin is used to stain nuclei blue, while eosin stains the cytoplasm and the extracellular connective tissue matrix of most cells pink. Hundreds of other techniques have been used to selectively stain cells. Other compounds used to color tissue sections include safranin, Oil Red O, Congo red, silver salts and artificial dyes. Histochemistry refers to the science of using chemical reactions between laboratory chemicals and components within tissue. A commonly performed histochemical technique is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis.
Recently, antibodies have been used to stain particular proteins, lipids and carbohydrates. Called immunohistochemistry, this technique has greatly increased the ability to specifically identify categories of cells under a microscope. Other advanced techniques include in situ hybridization to identify specific DNA or RNA molecules. These antibody staining methods often require the use of frozen section histology. These procedures above are also carried out in the laboratory under scrutiny and precision by a trained specialist medical laboratory scientist (a histoscientist). Digital cameras are increasingly used to capture histopathological images.
Interpretation
The histological slides are examined under a microscope by a pathologist, a medically qualified specialist who has completed a recognised training program. This medical diagnosis is formulated as a pathology report describing the histological findings and the opinion of the pathologist. In the case of cancer, this represents the tissue diagnosis required for most treatment protocols. In the removal of cancer, the pathologist will indicate whether the surgical margin is cleared, or is involved (residual cancer is left behind). This is done using either the bread loafing or CCPDMA method of processing. Microscopic visual artifacts can potentially cause misdiagnosis of samples. Scanning of slides allows for various methods of digital pathology, including the application of artificial intelligence for interpretation.
Following are examples of general features of suspicious findings that can be appreciated from low to high magnification on histopathology:
Architectural patterns
Major histopathologic architectural patterns include:
Nuclear patterns
Major nuclear patterns include:
In myocardial infarction
After a myocardial infarction (heart attack), no histopathological change is seen in the first ~30 minutes. The only possible sign in the first 4 hours is waviness of fibres at the border. Later, however, a coagulation necrosis is initiated, with edema and hemorrhage. After 12 hours, karyopyknosis and hypereosinophilia of myocytes with contraction band necrosis in the margins can be seen, as well as the beginning of neutrophil infiltration. At 1–3 days there is continued coagulation necrosis with loss of nuclei and striations and an increased infiltration of neutrophils to the interstitium. Until the end of the first week after infarction there is beginning disintegration of dead muscle fibres, necrosis of neutrophils and beginning macrophage removal of dead cells at the border, which increases over the succeeding days. After a week there is also the beginning of granulation tissue formation at the margins, which matures during the following month, with increased collagen deposition and decreased cellularity, until the myocardial scarring is fully mature at approximately 2 months after infarction.
See also
Anatomical pathology
Molecular pathology
Frozen section procedure
Medical technologist
Laser capture microdissection
List of pathologists
References
External links
Virtual Histology Course - University of Zurich (German, English version in preparation)
Histopathology of the uterine cervix - digital atlas (IARC Screening Group)
Histopathology Virtual Slidebox - University of Iowa
Anatomical pathology
Pathology | Histopathology | [
"Chemistry",
"Biology"
] | 1,819 | [
"Pathology",
"Histopathology",
"Microscopy"
] |
714,543 | https://en.wikipedia.org/wiki/Br%C3%B8nsted%E2%80%93Lowry%20acid%E2%80%93base%20theory | The Brønsted–Lowry theory (also called proton theory of acids and bases) is an acid–base reaction theory which was first developed by Johannes Nicolaus Brønsted and Thomas Martin Lowry independently in 1923. The basic concept of this theory is that when an acid and a base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid by exchange of a proton (the hydrogen cation, or H+). This theory generalises the Arrhenius theory.
Definitions of acids and bases
In the Arrhenius theory, acids are defined as substances that dissociate in aqueous solutions to give H+ (hydrogen ions or protons), while bases are defined as substances that dissociate in aqueous solutions to give OH− (hydroxide ions).
In 1923, physical chemists Johannes Nicolaus Brønsted in Denmark and Thomas Martin Lowry in England both independently proposed the theory named after them. In the Brønsted–Lowry theory acids and bases are defined by the way they react with each other, generalising them. This is best illustrated by an equilibrium equation.
acid + base ⇌ conjugate base + conjugate acid.
With an acid, HA, the equation can be written symbolically as:
HA + B <=> A- + HB+
The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions (is reversible). The acid, HA, is a proton donor which can lose a proton to become its conjugate base, A−. The base, B, is a proton acceptor which can become its conjugate acid, HB+. Most acid–base reactions are fast, so the substances in the reaction are usually in dynamic equilibrium with each other.
Aqueous solutions
Consider the following acid–base reaction:
CH3COOH + H2O <=> CH3COO- + H3O+
Acetic acid, CH3COOH, is an acid because it donates a proton to water (H2O) and becomes its conjugate base, the acetate ion (CH3COO−). H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the hydronium ion (H3O+).
The reverse of an acid–base reaction is also an acid–base reaction, between the conjugate acid of the base in the first reaction and the conjugate base of the acid. In the above example, ethanoate is the base of the reverse reaction and hydronium ion is the acid.
H3O+ + CH3COO- <=> CH3COOH + H2O
One feature of the Brønsted–Lowry theory in contrast to Arrhenius theory is that it does not require an acid to dissociate.
Amphoteric substances
The essence of Brønsted–Lowry theory is that an acid is only such in relation to a base, and vice versa. Water is amphoteric as it can act as an acid or as a base. In the reaction between two water molecules, one molecule of H2O acts as a base and gains H+ to become H3O+, while the other acts as an acid and loses H+ to become OH−.
Another example is illustrated by substances like aluminium hydroxide, Al(OH)3.
Al(OH)3 (acting as an acid) + OH- <=> Al(OH)4-
3H+ + Al(OH)3 (acting as a base) <=> 3H2O + Al3+(aq)
Non-aqueous solutions
The hydrogen ion, or hydronium ion, is a Brønsted–Lowry acid when dissolved in H2O and the hydroxide ion is a base because of the autoionization of water reaction
H2O + H2O <=> H3O+ + OH-
An analogous reaction occurs in liquid ammonia
NH3 + NH3 <=> NH4+ + NH2-
Thus, the ammonium ion, NH4+, in liquid ammonia corresponds to the hydronium ion in water and the amide ion, NH2−, in ammonia, to the hydroxide ion in water. Ammonium salts behave as acids, and metal amides behave as bases.
Some non-aqueous solvents can behave as bases, i.e. accept protons, in relation to Brønsted–Lowry acids.
HA + S <=> A- + SH+
where S stands for a solvent molecule. The most important of such solvents are dimethylsulfoxide, DMSO, and acetonitrile, CH3CN, as these solvents have been widely used to measure the acid dissociation constants of carbon-containing molecules. Because DMSO accepts protons more strongly than H2O, the acid becomes stronger in this solvent than in water. Indeed, many molecules behave as acids in non-aqueous solutions but not in aqueous solutions. An extreme case occurs with carbon acids, where a proton is extracted from a C−H bond.
Some non-aqueous solvents can behave as acids. An acidic solvent will make dissolved substances more basic. For example, the compound CH3COOH is known as acetic acid since it behaves as an acid in water. However, it behaves as a base in liquid hydrogen fluoride, a much more acidic solvent.
CH3COOH + 2HF <=> CH3C(OH)2+ + HF2-
Comparison with Lewis acid–base theory
In the same year that Brønsted and Lowry published their theory, G. N. Lewis created an alternative theory of acid–base reactions. The Lewis theory is based on electronic structure. A Lewis base is a compound that can give an electron pair to a Lewis acid, a compound that can accept an electron pair. Lewis's proposal explains the Brønsted–Lowry classification using electronic structure.
HA + B <=> A- + BH+
In this representation both the base, B, and the conjugate base, A−, are shown carrying a lone pair of electrons and the proton, which is a Lewis acid, is transferred between them.
Lewis later wrote "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." In Lewis theory an acid, A, and a base, B, form an adduct, AB, where the electron pair forms a dative covalent bond between A and B. This is shown when the adduct H3N−BF3 forms from ammonia and boron trifluoride, a reaction that cannot occur in water because boron trifluoride hydrolyzes in water.
4BF3 + 3H2O -> B(OH)3 + 3HBF4
The reaction above illustrates that BF3 is an acid in both Lewis and Brønsted–Lowry classifications and shows that the theories agree with each other.
Boric acid is recognised as a Lewis acid because of the reaction
B(OH)3 + H2O <=> B(OH)4- + H+
In this case the acid does not split up but the base, H2O, does. A solution of B(OH)3 is acidic because hydrogen ions are given off in this reaction.
There is strong evidence that dilute aqueous solutions of ammonia contain minute amounts of the ammonium ion
H2O + NH3 <=> OH- + NH4+
and that, when dissolved in water, ammonia functions as a Lewis base.
Comparison with the Lux–Flood theory
The reactions between oxides in the solid or liquid states are excluded in the Brønsted–Lowry theory. For example, the reaction
2MgO + SiO2 -> Mg2SiO4
is not covered in the Brønsted–Lowry definition of acids and bases. On the other hand, magnesium oxide acts as a base when it reacts with an aqueous solution of an acid.
2H+ + MgO(s) -> Mg2+(aq) + H2O
Dissolved silicon dioxide, SiO2, has been predicted to be a weak acid in the Brønsted–Lowry sense.
SiO2(s) + 2H2O <=> Si(OH)4 (solution)
Si(OH)4 <=> Si(OH)3O- + H+
According to the Lux–Flood theory, oxides like MgO and SiO2 in the solid state may be called acids or bases. For example, the mineral olivine may be known as a compound of a basic oxide, MgO, and silicon dioxide, SiO2, as an acidic oxide. This is important in geochemistry.
References
Bibliography
Acid–base chemistry
Equilibrium chemistry | Brønsted–Lowry acid–base theory | [
"Chemistry"
] | 1,839 | [
"Equilibrium chemistry",
"Acid–base chemistry",
"nan"
] |
714,601 | https://en.wikipedia.org/wiki/Dalitz%20plot | The Dalitz plot is a two-dimensional plot often used in particle physics to represent the relative frequency of various (kinematically distinct) manners in which the products of certain (otherwise similar) three-body decays may move apart.
The phase-space of a decay of a pseudoscalar into three spin-0 particles can be completely described using two variables. In a traditional Dalitz plot, the axes of the plot are the squares of the invariant masses of two pairs of the decay products. (For example, if particle A decays to particles 1, 2, and 3, a Dalitz plot for this decay could plot m12² on the x-axis and m23² on the y-axis.) If there are no angular correlations between the decay products then the distribution of these variables is flat. However symmetries may impose certain restrictions on the distribution. Furthermore, three-body decays are often dominated by resonant processes, in which the particle decays into two decay products, with one of those decay products immediately decaying into two additional decay products. In this case, the Dalitz plot will show a non-uniform distribution, with a peak around the mass of the resonant decay. In this way, the Dalitz plot provides an excellent tool for studying the dynamics of three-body decays.
Dalitz plots play a central role in the discovery of new particles in current high-energy physics experiments, including Higgs boson research, and are tools in exploratory efforts that might open avenues beyond the Standard Model.
R.H. Dalitz introduced this technique in 1953 to study decays of K mesons (which at that time were still referred to as "tau-mesons"). It can be adapted to the analysis of four-body decays as well. A specific form of a four-particle Dalitz plot (for non-relativistic kinematics), which is based on a tetrahedral coordinate system, was first applied to study the few-body dynamics in atomic four-body fragmentation processes.
Square Dalitz plot
Modeling of the common representation of the Dalitz plot can be complicated due to its nontrivial shape. One can however introduce such kinematic variables that the Dalitz plot gets a rectangular (or squared) shape:
$m' = \frac{1}{\pi} \arccos\left(2\, \frac{m_{12} - m_{12}^{\min}}{m_{12}^{\max} - m_{12}^{\min}} - 1\right)$ ;
$\theta' = \frac{1}{\pi}\, \theta_{12}$ ;
where m12 is the invariant mass of particles 1 and 2 in a given decay event; m12max and m12min are its maximal and minimal kinematically allowed values, while θ12 is the angle between particles 1 and 3 in the rest frame of particles 1 and 2. This technique is commonly called "Square Dalitz plot" (SDP).
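A minimal sketch of this mapping follows; the parent and daughter masses are assumed purely for illustration, and theta12 is taken in radians:

```python
# Minimal sketch (assumed decay A -> 1 2 3 with illustrative masses in GeV):
# mapping a Dalitz-plot point (m12, theta12) to the square-Dalitz-plot
# variables m' and theta' defined above.
import numpy as np

M, m1, m2, m3 = 5.279, 0.140, 0.140, 0.494   # illustrative masses

def square_dalitz(m12, theta12):
    m12_min, m12_max = m1 + m2, M - m3        # kinematic limits on m12
    m_prime = np.arccos(2.0 * (m12 - m12_min) / (m12_max - m12_min) - 1.0) / np.pi
    theta_prime = theta12 / np.pi
    return m_prime, theta_prime               # both lie in [0, 1]

print(square_dalitz(1.0, 0.7))
```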
References
External links
Dalitz Plots: Past and Present (a presentation by Brian Lindquist at SLAC)
Plots (graphics)
Scattering
Experimental particle physics | Dalitz plot | [
"Physics",
"Chemistry",
"Materials_science"
] | 568 | [
"Scattering stubs",
"Scattering",
"Experimental particle physics",
"Experimental physics",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
715,297 | https://en.wikipedia.org/wiki/Solar%20System%20model | Solar System models, especially mechanical models, called orreries, that illustrate the relative positions and motions of the planets and moons in the Solar System have been built for centuries. While they often showed relative sizes, these models were usually not built to scale. The enormous ratio of interplanetary distances to planetary diameters makes constructing a scale model of the Solar System a challenging task. As one example of the difficulty, the distance between the Earth and the Sun is almost 12,000 times the diameter of the Earth.
If the smaller planets are to be easily visible to the naked eye, large outdoor spaces are generally necessary, as is some means for highlighting objects that might otherwise not be noticed from a distance. The Boston Museum of Science had placed bronze models of the planets in major public buildings, all on similar stands with interpretive labels. For example, the model of Jupiter was located in the cavernous South Station waiting area. The properly scaled, basketball-sized model is 1.3 miles (2.14 km) from the model Sun, which is located at the museum, graphically illustrating the immense empty space in the Solar System.
The objects in such large models do not move. Traditional orreries often did move, and some used clockworks to display the relative speeds of objects accurately. These can be thought of as being correctly scaled in time, instead of distance.
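The arithmetic behind such models is a single division by a common scale factor. A minimal sketch (the 30 cm model Sun and the printed figures are illustrative assumptions):

```python
# Minimal sketch: scaling real Solar System dimensions by one factor,
# as a true-scale model requires.
SUN_DIAMETER_KM = 1_391_400
EARTH_DIAMETER_KM = 12_742
EARTH_ORBIT_KM = 149_600_000  # 1 AU

scale_km_per_m = SUN_DIAMETER_KM / 0.30   # model Sun assumed 0.30 m across

print(f"Earth model diameter: {EARTH_DIAMETER_KM / scale_km_per_m * 1000:.1f} mm")
print(f"Earth model distance: {EARTH_ORBIT_KM / scale_km_per_m:.1f} m")
print(f"orbit / Earth diameter: {EARTH_ORBIT_KM / EARTH_DIAMETER_KM:,.0f}")  # ~12,000
```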
Permanent true scale models
Many towns and institutions have built outdoor scale models of the Solar System. Here is a table comparing these models with the actual system.
Other models of the Solar System: historic, temporary, virtual, or dual-scale
Several sets of geocaching caches have been laid out as Solar System models.
See also
Numerical model of the Solar System
Historical models of the Solar System
Infinite Corridor
References
External links
A list of websites related to Solar System models
The Otford Solar System
An accurate web-based scroll map of the Solar System scaled to the Moon being 1 pixel
An online scale model (does not work in some browsers)
An online 3D model
An article on the Solar System in Maine
An article about a temporary exhibit in Melbourne, Australia
A map with Solar System models in Germany
A tool to calculate the diameters and distances needed for an accurate scale model
To Scale: The Solar System - video of model built in desert with Earth as the size of a marble.
Model
Physics education
Scale modeling | Solar System model | [
"Physics",
"Astronomy"
] | 478 | [
"Scale modeling",
"Space art",
"Applied and interdisciplinary physics",
"Outer space",
"Physics education",
"Solar System models",
"Solar System"
] |
715,679 | https://en.wikipedia.org/wiki/Gaugino | In supersymmetry theories of particle physics, a gaugino is the hypothetical fermionic supersymmetric field quantum (superpartner) of a gauge field, as predicted by gauge theory combined with supersymmetry. All gauginos have a spin of 1/2, except for the gravitino, which has a spin of 3/2.
In the minimal supersymmetric extension of the standard model the following gauginos exist:
The gluino (symbol g̃) is the superpartner of the gluon, and hence carries color charge.
The gravitino (symbol G̃) is the supersymmetric partner of the graviton.
Three winos (symbols W̃± and W̃3) are the superpartners of the W bosons of the SU(2)L gauge fields.
The bino is the superpartner of the U(1) gauge field corresponding to weak hypercharge.
Sometimes the term "electroweakinos" is used to refer to winos and binos and on occasion also higgsinos. Note that in other SUSY models the zino (Z̃) is the superpartner of the Z boson.
Mixing
Gauginos mix with higgsinos, the superpartners of the Higgs field's degrees of freedom, to form mass eigenstates called neutralinos, which are electrically neutral, and charginos, which are electrically charged.
In many supersymmetric models, the lightest supersymmetric particle (LSP), often a neutralino such as the photino, is stable. In that case it is a weakly interacting massive particle (WIMP) and a candidate for dark matter.
See also
Gaugino condensation
References
Supersymmetric quantum field theory
Hypothetical elementary particles | Gaugino | [
"Physics"
] | 377 | [
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model",
"Particle physics stubs",
"Hypothetical elementary particles",
"Supersymmetry",
"Symmetry"
] |
715,688 | https://en.wikipedia.org/wiki/Chargino | In particle physics, the chargino is a hypothetical particle which refers to the mass eigenstates of a charged superpartner, i.e. any new electrically charged fermion (with spin 1/2) predicted by supersymmetry. They are linear combinations of the charged wino and charged higgsinos. There are two charginos that are fermions and are electrically charged, which are typically labeled (the lightest) and (the heaviest), although sometimes and are also used to refer to charginos, when is used to refer to neutralinos. The heavier chargino can decay through the neutral Z boson to the lighter chargino. Both can decay through a charged W boson to a neutralino:
χ̃2± → χ̃1± + Z0
χ̃2± → χ̃10 + W±
χ̃1± → χ̃10 + W±
See also
List of hypothetical particles
Weakly interacting slender particle
References
External links
http://lepsusy.web.cern.ch/lepsusy/www/inoslowdmsummer02/charginolowdm_pub.html
Supersymmetric quantum field theory
Hypothetical elementary particles | Chargino | [
"Physics"
] | 229 | [
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Particle physics",
"Particle physics stubs",
"Hypothetical elementary particles",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
715,691 | https://en.wikipedia.org/wiki/Higgsino | In particle physics, for models with N = 1 supersymmetry, a higgsino, symbol , is the superpartner of the Higgs field. A higgsino is a Dirac fermionic field with spin and it refers to a weak isodoublet with hypercharge half under the Standard Model gauge symmetries. After electroweak symmetry breaking higgsino fields linearly mix with U(1) and SU(2) gauginos leading to four neutralinos and two charginos that refer to physical particles. While the two charginos are charged Dirac fermions (plus and minus each), the neutralinos are electrically neutral Majorana fermions. In an R-parity-conserving version of the Minimal Supersymmetric Standard Model, the lightest neutralino typically becomes the lightest supersymmetric particle (LSP). The LSP is a particle physics candidate for the dark matter of the universe since it cannot decay to particles with lighter mass. A neutralino LSP, depending on its composition can be bino, wino or higgsino dominated in nature and can have different zones of mass values in order to satisfy the estimated dark matter relic density. Commonly, a higgsino dominated LSP is often referred as a higgsino, in spite of the fact that a higgsino is not a physical state in the true sense.
In natural scenarios of SUSY, top squarks, bottom squarks, gluinos, and higgsino-enriched neutralinos and charginos are expected to be relatively light, enhancing their production cross sections. Higgsino searches have been performed by both the ATLAS and CMS experiments at the Large Hadron Collider at CERN, where physicists have searched for the direct electroweak pair production of Higgsinos. As of 2017, no experimental evidence for Higgsinos has been reported.
Mass
If dark matter is composed only of Higgsinos, then the Higgsino mass is 1.1 TeV. On the other hand, if dark matter has multiple components, then the Higgsino mass depends on the relevant multiverse distribution functions, making the mass of the Higgsino lighter.
mh̃ ≈ 1.1 (Ωh̃ / ΩDM)^(1/2) TeV
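Taking the quoted relation at face value, a minimal sketch evaluating it for a few assumed higgsino fractions of the dark-matter relic density:

```python
# Minimal sketch: m_higgsino ~ 1.1 * (Omega_higgsino / Omega_DM)**0.5 TeV,
# evaluated for illustrative (assumed) higgsino fractions of the relic density.
for fraction in (1.0, 0.5, 0.1):
    print(f"fraction {fraction:>4}: {1.1 * fraction ** 0.5:.2f} TeV")
```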
Footnotes
Supersymmetric quantum field theory
Fermions
Hypothetical elementary particles | Higgsino | [
"Physics",
"Materials_science"
] | 503 | [
"Symmetry",
"Matter",
"Supersymmetric quantum field theory",
"Fermions",
"Unsolved problems in physics",
"Subatomic particles",
"Condensed matter physics",
"Hypothetical elementary particles",
"Supersymmetry",
"Physics beyond the Standard Model"
] |
715,946 | https://en.wikipedia.org/wiki/Green%E2%80%93Schwarz%20mechanism | The Green–Schwarz mechanism (sometimes called the Green–Schwarz anomaly cancellation mechanism) is the main discovery that started the first superstring revolution in superstring theory.
Discovery
In 1984, Michael Green and John H. Schwarz realized that the anomaly in type I string theory with the gauge group SO(32) cancels because of an extra "classical" contribution from a 2-form field. They realized that one of the necessary conditions for a superstring theory to make sense is that the dimension of the gauge group of type I string theory must be 496 and then demonstrated this to be so.
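The dimension counting here is elementary to verify, since dim SO(n) = n(n−1)/2 and dim E8 = 248. A minimal sketch:

```python
# Minimal sketch: both anomaly-free gauge groups have dimension 496.
def dim_so(n):
    """Dimension of SO(n): n(n-1)/2."""
    return n * (n - 1) // 2

print(dim_so(32))    # 496, for type I with gauge group SO(32)
print(248 + 248)     # 496, for E8 x E8
```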
In the original calculation, gauge anomalies, mixed anomalies, and gravitational anomalies were expected to arise from a hexagon Feynman diagram. For the special choice of the gauge group SO(32) or E8 x E8, however, the anomaly factorizes and may be cancelled by a tree diagram. In string theory, this indeed occurs. The tree diagram describes the exchange of a virtual quantum of the B-field. It is somewhat counterintuitive to see that a tree diagram cancels a one-loop diagram, but in reality, both of these diagrams arise as one-loop diagrams in superstring theory in which the anomaly cancellation is more transparent.
As recounted in The Elegant Universe's TV version, in the second episode, "The String's the Thing", section "Wrestling with String Theory", Green describes finding 496 on each side of the equals sign during a stormy night filled with lightning, and fondly recalls joking that "the gods are trying to prevent us from completing this calculation". Green soon entitled some of his subsequent lectures "The Theory of Everything".
Details
Anomalies in quantum theory arise from one-loop diagrams, with a chiral fermion in the loop and gauge fields, Ricci tensors, or global symmetry currents as the external legs. These diagrams have the form of a triangle in 4 spacetime dimensions, which generalizes to a hexagon in D = 10, thus involving 6 external lines. The interesting anomaly in SUSY D = 10 gauge theory is the hexagon which has a particular linear combination of the two-form gauge field strength F and the Ricci tensor R for the external lines.
Green and Schwarz realized that one can add a so-called Chern–Simons term to the classical action, having the form
$S_{GS} = \int B \wedge X_8$
where the integral is over the 10 dimensions, B is the rank-two Kalb–Ramond field, and X_8 is a gauge-invariant combination of F and R (with space-time indices not contracted), which is precisely one of the factors appearing in the hexagon anomaly.
If the variation of B under the transformations of the gauge field and under general coordinate transformations is appropriately specified, then the Green–Schwarz term, when combined with a trilinear vertex through exchange of a gauge boson, has precisely the right variation to cancel the hexagon anomaly.
References
Anomalies (physics)
Quantum gravity
String theory | Green–Schwarz mechanism | [
"Physics",
"Astronomy"
] | 619 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Quantum gravity",
"String theory",
"Physics beyond the Standard Model"
] |
716,401 | https://en.wikipedia.org/wiki/Cross-polytope | In geometry, a cross-polytope, hyperoctahedron, orthoplex, staurotope, or cocube is a regular, convex polytope that exists in n-dimensional Euclidean space. A 2-dimensional cross-polytope is a square, a 3-dimensional cross-polytope is a regular octahedron, and a 4-dimensional cross-polytope is a 16-cell. Its facets are simplexes of the previous dimension, while the cross-polytope's vertex figure is another cross-polytope from the previous dimension.
The vertices of a cross-polytope can be chosen as the unit vectors pointing along each co-ordinate axis – i.e. all the permutations of (±1, 0, 0, …, 0). The cross-polytope is the convex hull of its vertices.
The n-dimensional cross-polytope can also be defined as the closed unit ball (or, according to some authors, its boundary) in the ℓ1-norm on Rn:
$\{ x \in \mathbb{R}^n : \|x\|_1 \le 1 \}$
In 1 dimension the cross-polytope is simply the line segment [−1, +1], in 2 dimensions it is a square (or diamond) with vertices {(±1, 0), (0, ±1)}. In 3 dimensions it is an octahedron—one of the five convex regular polyhedra known as the Platonic solids. This can be generalised to higher dimensions with an n-orthoplex being constructed as a bipyramid with an (n−1)-orthoplex base.
The cross-polytope is the dual polytope of the hypercube. The 1-skeleton of an n-dimensional cross-polytope is the Turán graph T(2n, n) (also known as a cocktail party graph).
4 dimensions
The 4-dimensional cross-polytope also goes by the name hexadecachoron or 16-cell. It is one of the six convex regular 4-polytopes. These 4-polytopes were first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century.
Higher dimensions
The cross-polytope family is one of three regular polytope families, labeled by Coxeter as βn, the other two being the hypercube family, labeled as γn, and the simplex family, labeled as αn. A fourth family, the infinite tessellations of hypercubes, he labeled as δn.
The n-dimensional cross-polytope has 2n vertices, and 2^n facets ((n − 1)-dimensional components), all of which are (n − 1)-simplices. The vertex figures are all (n − 1)-cross-polytopes. The Schläfli symbol of the cross-polytope is {3,3,...,3,4}.
The dihedral angle of the n-dimensional cross-polytope is $\delta_n = \arccos\left(\frac{2-n}{n}\right)$. This gives: δ2 = arccos(0/2) = 90°, δ3 = arccos(−1/3) = 109.47°, δ4 = arccos(−2/4) = 120°, δ5 = arccos(−3/5) = 126.87°, ... δ∞ = arccos(−1) = 180°.
The hypervolume of the n-dimensional cross-polytope is $\frac{2^n}{n!}$.
For each pair of non-opposite vertices, there is an edge joining them. More generally, each set of k + 1 orthogonal vertices corresponds to a distinct k-dimensional component which contains them. The number of k-dimensional components (vertices, edges, faces, ..., facets) in an n-dimensional cross-polytope is thus given by (see binomial coefficient):
$2^{k+1} \binom{n}{k+1}$
The extended f-vector for an n-orthoplex can be computed by (1,2)^n, like the coefficients of polynomial products. For example a 16-cell is (1,2)^4 = (1,4,4)^2 = (1,8,24,32,16).
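A minimal sketch of this coefficient computation (the function name is illustrative), using repeated polynomial multiplication to expand (1 + 2x)^n:

```python
# Minimal sketch: extended f-vector of the n-orthoplex from the coefficients
# of (1 + 2x)^n, built up by repeated polynomial (convolution) products.
import numpy as np

def orthoplex_f_vector(n):
    coeffs = np.array([1])
    for _ in range(n):
        coeffs = np.convolve(coeffs, [1, 2])
    return coeffs  # (1, vertices, edges, ..., facets)

print(orthoplex_f_vector(4))  # [ 1  8 24 32 16] -- the 16-cell
```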
There are many possible orthographic projections that can show the cross-polytopes as 2-dimensional graphs. Petrie polygon projections map the points into a regular 2n-gon or lower order regular polygons. A second projection takes the 2(n−1)-gon petrie polygon of the lower dimension, seen as a bipyramid, projected down the axis, with 2 vertices mapped into the center.
The vertices of an axis-aligned cross polytope are all at equal distance from each other in the Manhattan distance (L1 norm). Kusner's conjecture states that this set of 2d points is the largest possible equidistant set for this distance.
Generalized orthoplex
Regular complex polytopes can be defined in complex Hilbert space called generalized orthoplexes (or cross polytopes), β^p_n = 2{3}2{3}...2{4}p. Real solutions exist with p = 2, i.e. β^2_n = βn = 2{3}2{3}...2{4}2 = {3,3,..,4}. For p > 2, they exist in C^n. A p-generalized n-orthoplex has pn vertices. Generalized orthoplexes have regular simplexes (real) as facets. Generalized orthoplexes make complete multipartite graphs: β^p_2 makes Kp,p, the complete bipartite graph; β^p_3 makes Kp,p,p, the complete tripartite graph; and β^p_n creates the complete n-partite graph Kp,p,...,p. An orthogonal projection can be defined that maps all the vertices equally-spaced on a circle, with all pairs of vertices connected, except multiples of n. The regular polygon perimeter in these orthogonal projections is called a petrie polygon.
Related polytope families
Cross-polytopes can be combined with their dual cubes to form compound polytopes:
In two dimensions, we obtain the octagrammic star figure {8/2},
In three dimensions we obtain the compound of cube and octahedron,
In four dimensions we obtain the compound of tesseract and 16-cell.
See also
List of regular polytopes
Hyperoctahedral group, the symmetry group of the cross-polytope
Citations
References
pp. 121-122, §7.21. see illustration Fig 7.2B
p. 296, Table I (iii): Regular Polytopes, three regular polytopes in n-dimensions (n≥5)
External links
Regular polytopes
Multi-dimensional geometry | Cross-polytope | [
"Physics"
] | 1,406 | [
"Uniform polytopes",
"Symmetry",
"Regular polytopes"
] |
16,977,033 | https://en.wikipedia.org/wiki/Kugel%E2%80%93Khomskii%20coupling | Kugel–Khomskii coupling describes a coupling between the spin and orbital degrees of freedom in a solid; it is named after the Russian physicists Kliment I. Kugel (Климент Ильич Кугель) and Daniel I. Khomskii (Daniil I. Khomskii, Даниил Ильич Хомский). The Hamiltonian used is:
References
Condensed matter physics | Kugel–Khomskii coupling | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 105 | [
"Materials science stubs",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
16,977,319 | https://en.wikipedia.org/wiki/Absolute%20molar%20mass | Absolute molar mass is a process used to determine the characteristics of molecules.
History
The first absolute measurements of molecular weights (i.e. made without reference to standards) were based on fundamental physical characteristics and their relation to the molar mass. The most useful of these were membrane osmometry and sedimentation.
Another absolute instrumental approach was also possible with the development of light scattering theory by Albert Einstein, Chandrasekhara Venkata Raman, Peter Debye, Bruno H. Zimm, and others. The problem with measurements made using membrane osmometry and sedimentation was that they only characterized the bulk properties of the polymer sample. Moreover, the measurements were excessively time consuming and prone to operator error. In order to gain information about a polydisperse mixture of molar masses, a method for separating the different sizes was developed. This was achieved by the advent of size exclusion chromatography (SEC). SEC is based on the fact that the pores in the packing material of chromatography columns could be made small enough for molecules to become temporarily lodged in their interstitial spaces. As the sample makes its way through a column the smaller molecules spend more time traveling in these void spaces than the larger ones, which have fewer places to "wander". The result is that a sample is separated according to its hydrodynamic volume (Vh). As a consequence, the big molecules come out first, and then the small ones follow in the eluent. By choosing a suitable column packing material it is possible to define the resolution of the system. Columns can also be combined in series to increase resolution or the range of sizes studied.
The next step is to convert the time at which the samples eluted into a measurement of molar mass. This is possible because if the molar mass of a standard were known, the time at which this standard eluted should be equal to a specific molar mass. Using multiple standards, a calibration curve of time versus molar mass can be developed. This is significant for polymer analysis because a single polymer could be shown to have many different components, and the complexity and distribution of which would also affect the physical properties. However this technique has shortcomings. For example, unknown samples are always measured in relation to known standards, and these standards may or may not have similarities to the sample of interest. The measurements made by SEC are then mathematically converted into data similar to that found by the existing techniques.
The problem was that the system was calibrated according to the Vh characteristics of polymer standards that are not directly related to the molar mass. If the relationship between the molar mass and Vh of the standard is not the same as that of the unknown sample, then the calibration is invalid. Thus, to be accurate, the calibration must use the same polymer, of the same conformation, in the same eluent and have the same interaction with the solvent as the hydration layer changes Vh.
Benoit et al. showed that taking into account the hydrodynamic volume would solve the problem. In his publication, Benoit showed that all synthetic polymers elute on the same curve when the log of the intrinsic viscosity multiplied by the molar mass was plotted against the elution volume. This is the basis of universal calibration, which requires a viscometer to measure the intrinsic viscosity of the polymers. Universal calibration was shown to work for branched polymers, copolymers as well as starburst polymers.
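The sketch below illustrates universal calibration under assumed, purely illustrative data: log([η]·M) of the standards is fit linearly against elution volume, and an unknown's molar mass then follows from its measured elution volume and intrinsic viscosity.

```python
# Minimal sketch (illustrative standards, not real calibration data):
# universal SEC calibration via log([eta] * M) vs elution volume.
import numpy as np

ve = np.array([12.0, 13.5, 15.0, 16.5])      # elution volume (mL)
eta = np.array([2.10, 1.05, 0.52, 0.26])     # intrinsic viscosity (dL/g)
M = np.array([1.0e6, 3.2e5, 1.0e5, 3.2e4])   # molar mass (g/mol)

slope, intercept = np.polyfit(ve, np.log10(eta * M), 1)

def molar_mass(ve_unknown, eta_unknown):
    hydrodynamic = 10 ** (slope * ve_unknown + intercept)  # [eta] * M
    return hydrodynamic / eta_unknown

print(f"{molar_mass(14.0, 0.80):.3g} g/mol")
```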
For good chromatography, there must be no interaction with the column other than that produced by size. As the demands on polymer properties increased, the necessity of getting absolute information on the molar mass and size also increased. This was especially important in pharmaceutical applications where slight changes in molar mass (e.g. aggregation) or shape may result in different biological activity. These changes can actually have a harmful effect instead of a beneficial one.
To obtain molar mass, light scattering instruments need to measure the intensity of light scattered at zero angle. This is impractical as the laser source would outshine the light scattering intensity at zero angle. The two alternatives are to measure very close to zero angle or to measure at many angles and extrapolate, using a model (Rayleigh, Rayleigh–Gans–Debye, Berry, Mie, etc.), to zero-degree angle.
Traditional light scattering instruments worked by taking readings from multiple angles, each being measured in series. A low angle light scattering system was developed in the early 1970s that allowed a single measurement to be used to calculate the molar mass. Although measurements at low angles are better for fundamental physical reasons (molecules tend to scatter more light in lower angle directions than in higher angles), low angle scattering events caused by dust and contamination of the mobile phase easily overwhelm the scattering from the molecules of interest. When the low-angle laser light scattering (LALLS) became popular in the 1970s and mid-1980s, good quality disposable filters were not readily available and hence multi-angle measurements gained favour.
Multi-angle light scattering was invented in the mid-1980s and instruments like that were able to make measurements at the different angles simultaneously but it was not until the later 1980s that the connection of multi-angle laser light scattering (MALS) detectors to SEC systems was a practical proposition enabling both molar mass and size to be determined from each slice of the polymer fraction.
Applications
Light scattering measurements can be applied to synthetic polymers, proteins, pharmaceuticals and particles such as liposomes, micelles, and encapsulated proteins. Measurements can be made in one of two modes which are un-fractionated (batch mode) or in continuous flow mode (with SEC, HPLC or any other flow fractionation method). Batch mode experiments can be performed either by injecting a sample into a flow cell with a syringe or with the use of discrete vials. These measurements are most often used to measure timed events like antibody-antigen reactions or protein assembly. Batch mode measurements can also be used to determine the second virial coefficient (A2), a value that gives a measure of the likelihood of crystallization or aggregation in a given solvent. Continuous flow experiments can be used to study material eluting from virtually any source. More conventionally, the detectors are coupled to a variety of different chromatographic separation systems. The ability to determine the mass and size of the materials eluting then combines the advantage of the separation system with an absolute measurement of the mass and size of the species eluting.
The addition of an SLS detector coupled downstream to a chromatographic system allows the utility of SEC or similar separation combined with the advantage of an absolute detection method. The light scattering data is purely dependent on the light scattering signal times the concentration; the elution time is irrelevant and the separation can be changed for different samples without recalibration. In addition, a non-size separation method such as HPLC or IC can also be used.
As the light scattering detector is mass dependent, it becomes more sensitive as the molar mass increases. Thus it is an excellent tool for detecting aggregation. The higher the aggregation number, the more sensitive the detector becomes.
Low-angle (laser)-light scattering (LALS) method
LALS measures at a very low angle, where the scattering vector is almost zero. LALS does not need any model to fit the angular dependence and hence gives more reliable molecular weight measurements for large molecules. LALS alone does not give any indication of the root mean square radius.
Multi-angle (laser)-light scattering (MALS) method
MALS measurements work by calculating the amount of light scattered at each angle detected. The calculation is based on the intensity of light measured and the quantum efficiency of each detector. Then a model is used to approximate the intensity of light scattered at zero angle. The zero angle light scattered is then related to the molar mass.
As previously noted, the MALS detector can also provide information about the size of the molecule. This information is the root mean square radius of the molecule (RMS or Rg). This is different from the Rh mentioned above, which takes the hydration layer into account. The purely mathematical root mean square radius is defined from the radii making up the molecule, each weighted by the mass at that radius.
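A minimal sketch of this mass-weighted definition, Rg = sqrt(Σ mᵢrᵢ² / Σ mᵢ), for an assumed set of point masses (all values illustrative):

```python
# Minimal sketch: mass-weighted root-mean-square radius of a molecule modeled
# as point masses at distances r_i from the centre of mass.
import numpy as np

m = np.array([1.0, 2.0, 1.5, 0.5])   # masses (arbitrary units)
r = np.array([1.0, 2.5, 3.0, 4.0])   # distances from centre of mass (nm)

rg = np.sqrt(np.sum(m * r**2) / np.sum(m))
print(f"Rg = {rg:.2f} nm")
```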
Bibliography
A. Einstein, Ann. Phys. 33 (1910), 1275
C.V. Raman, Indian J. Phys. 2 (1927), 1
P.Debye, J. Appl. Phys. 15 (1944), 338
B.H. Zimm, J. Chem. Phys. 13 (1945), 141
B.H. Zimm, J. Chem. Phys. 16 (1948), 1093
B.H. Zimm, R.S. Stein and P. Dotty, Pol. Bull. 1,(1945), 90
M. Fixman, J. Chem. Phys. 23 (1955), 2074
A.C. Ouano and W. Kaye J. Poly. Sci. A1(12) (1974), 1151
Z. Grubisic, P. Rempp, and H. Benoit, J. Polym. Sci., 5 (1967), 753
Flow Through MALS detector, DLS 800, Science Spectrum Inc.
P.J. Wyatt, C. Jackson and G.K. Wyatt Am. Lab 20(6) (1988), 86
P.J. Wyatt, D. L. Hicks, C. Jackson and G.K. Wyatt Am. Lab. 20(6) (1988), 106
C. Jackson, L.M. Nilsson and P.J. Wyatt J. Appl. Poly. Sci. 43 (1989), 99
Chemical properties
Mass | Absolute molar mass | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,034 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"nan",
"Wikipedia categories named after physical quantities",
"Matter"
] |
18,189,758 | https://en.wikipedia.org/wiki/Slush%20hydrogen | Slush hydrogen is a combination of liquid hydrogen and solid hydrogen at the triple point with a lower temperature and a higher density than liquid hydrogen. It is commonly formed by repeating a freeze-thaw process. This is most easily done by bringing liquid hydrogen near its boiling point and then reducing pressure using a vacuum pump. The decrease in pressure causes the liquid hydrogen to vaporize/boil - which removes latent heat, and ultimately decreases the temperature of the liquid hydrogen. Solid hydrogen is formed on the surface of the boiling liquid (between the gas/liquid interface) as the liquid is cooled and reaches its triple point. The vacuum pump is stopped, causing an increase of pressure, the solid hydrogen formed on the surface partially melts and begins to sink. The solid hydrogen is agitated in the liquid and the process is repeated. The resulting hydrogen slush has an increased density of 16–20% when compared to liquid hydrogen. It is proposed as a rocket fuel in place of liquid hydrogen in order to use smaller fuel tanks and thus reduce the dry weight of the vehicle.
Production
The continuous freeze technique used for slush hydrogen involves pulling a continuous vacuum over triple point liquid and using a solid hydrogen mechanical ice-breaker to disrupt the surface of the freezing hydrogen.
Fuel density: 0.085 g/cm3
Melting point: −259 °C
Boiling point: −253 °C
See also
Compressed hydrogen
Hydrogen safety
Metallic hydrogen
Timeline of hydrogen technologies
Liquefaction of gases
References
Hydrogen physics
Hydrogen technologies
Hydrogen storage
Liquid fuels
Rocket fuels
Coolants
Cryogenics | Slush hydrogen | [
"Physics"
] | 311 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
4,535,333 | https://en.wikipedia.org/wiki/NIH%20shift | An NIH shift is a chemical rearrangement where a hydrogen atom on an aromatic ring undergoes an intramolecular migration primarily during a hydroxylation reaction. This process is also known as a 1,2-hydride shift. These shifts are often studied and observed by isotopic labeling. An example of an NIH shift is shown below:
In this example, a hydrogen atom has been isotopically labeled using deuterium (shown in red). As the hydroxylase adds a hydroxyl (the −OH group), the labeled site shifts one position around the aromatic ring relative to the stationary methyl group (−CH3).
Several hydroxylase enzymes are believed to incorporate an NIH shift in their mechanism, including 4-hydroxyphenylpyruvate dioxygenase and the tetrahydrobiopterin-dependent hydroxylases. The name NIH shift comes from the US National Institutes of Health, where the transformation was first observed and reported.
References
Enzymes
Post-translational modification
Reaction mechanisms | NIH shift | [
"Chemistry"
] | 217 | [
"Reaction mechanisms",
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Physical organic chemistry",
"Chemical kinetics"
] |
4,535,852 | https://en.wikipedia.org/wiki/Carbon%20sequestration | Carbon sequestration is the process of storing carbon in a carbon pool. It plays a crucial role in limiting climate change by reducing the amount of carbon dioxide in the atmosphere. There are two main types of carbon sequestration: biologic (also called biosequestration) and geologic.
Biologic carbon sequestration is a naturally occurring process as part of the carbon cycle. Humans can enhance it through deliberate actions and use of technology. Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These processes can be accelerated, for example, through changes in land use and agricultural practices, called carbon farming. Artificial processes have also been devised to produce similar effects. This approach is called carbon capture and storage. It involves using technology to capture and sequester (store) CO2 produced from human activities underground or under the seabed.
Plants, such as forests and kelp beds, absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores may be temporary carbon sinks, as long-term sequestration cannot be guaranteed. Wildfires, disease, economic pressures, and changing political priorities may release the sequestered carbon back into the atmosphere.
Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it underground, or in the form of insoluble carbonate salts. The latter process is called mineral sequestration. These methods are considered non-volatile because they not only remove carbon dioxide from the atmosphere but also sequester it indefinitely. This means the carbon is "locked away" for thousands to millions of years.
To enhance carbon sequestration processes in oceans the following chemical or physical technologies have been proposed: ocean fertilization, artificial upwelling, basalt storage, mineralization and deep-sea sediments, and adding bases to neutralize acids. However, none have achieved large scale application so far. Large-scale seaweed farming on the other hand is a biological process and could sequester significant amounts of carbon. The potential growth of seaweed for carbon farming would see the harvested seaweed transported to the deep ocean for long-term burial. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" on seaweed farming as a mitigation tactic.
Terminology
The term carbon sequestration is used in different ways in the literature and media. The IPCC Sixth Assessment Report defines it as "The process of storing carbon in a carbon pool". Subsequently, a pool is defined as "a reservoir in the Earth system where elements, such as carbon and nitrogen, reside in various chemical forms for a period of time".
The United States Geological Survey (USGS) defines carbon sequestration as follows: "Carbon sequestration is the process of capturing and storing atmospheric carbon dioxide." Therefore, the difference between carbon sequestration and carbon capture and storage (CCS) is sometimes blurred in the media. The IPCC, however, defines CCS as "a process in which a relatively pure stream of carbon dioxide (CO2) from industrial sources is separated, treated and transported to a long-term storage location".
Roles
In nature
Carbon sequestration is part of the natural carbon cycle by which carbon is exchanged among the biosphere, pedosphere (soil), geosphere, hydrosphere, and atmosphere of Earth. Carbon dioxide is naturally captured from the atmosphere through biological, chemical, or physical processes, and stored in long-term reservoirs.
Plants, such as forests and kelp beds, absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as long-term sequestration cannot be guaranteed. Events such as wildfires or disease, economic pressures, and changing political priorities can result in the sequestered carbon being released back into the atmosphere.
In climate change mitigation and policies
Carbon sequestration - when acting as a carbon sink - helps to mitigate climate change and thus reduce its harmful effects. It helps to slow the atmospheric and marine accumulation of greenhouse gases, mainly carbon dioxide released by burning fossil fuels.
Carbon sequestration, when applied for climate change mitigation, can either build on enhancing naturally occurring carbon sequestration or use technology for carbon sequestration processes.
Within the carbon capture and storage approaches, carbon sequestration refers to the storage component. Artificial carbon storage technologies can be applied, such as gaseous storage in deep geological formations (including saline formations and exhausted gas fields), and solid storage by reaction of CO2 with metal oxides to produce stable carbonates.
For carbon to be sequestered artificially (i.e. not using the natural processes of the carbon cycle) it must first be captured, or it must be significantly delayed or prevented from being re-released into the atmosphere (by combustion, decay, etc.) from an existing carbon-rich material, by being incorporated into an enduring usage (such as in construction). Thereafter it can be passively stored or remain productively utilized over time in a variety of ways. For instance, upon harvesting, wood (as a carbon-rich material) can be incorporated into construction or a range of other durable products, thus sequestering its carbon over years or even centuries. In industrial production, engineers typically capture carbon dioxide from emissions from power plants or factories.
For example, in the United States, the Executive Order 13990 (officially titled "Protecting Public Health and the Environment and Restoring Science to Tackle the Climate Crisis") from 2021, includes several mentions of carbon sequestration via conservation and restoration of carbon sink ecosystems, such as wetlands and forests. The document emphasizes the importance of farmers, landowners, and coastal communities in carbon sequestration. It directs the Treasury Department to promote conservation of carbon sinks through market based mechanisms.
Biological carbon sequestration on land
Biological carbon sequestration (also called biosequestration) is the capture and storage of the atmospheric greenhouse gas carbon dioxide by continual or enhanced biological processes. This form of carbon sequestration occurs through increased rates of photosynthesis via land-use practices such as reforestation and sustainable forest management. Land-use changes that enhance natural carbon capture have the potential to capture and store large amounts of carbon dioxide each year. These include the conservation, management, and restoration of ecosystems such as forests, peatlands, wetlands, and grasslands, in addition to carbon sequestration methods in agriculture. Methods and practices exist to enhance soil carbon sequestration in both agriculture and forestry.
Forestry
Forests are an important part of the global carbon cycle because trees and plants absorb carbon dioxide through photosynthesis. Therefore, they play an important role in climate change mitigation. By removing the greenhouse gas carbon dioxide from the air, forests function as terrestrial carbon sinks, meaning they store large amounts of carbon in the form of biomass, encompassing roots, stems, branches, and leaves. Throughout their lifespan, trees continue to sequester carbon, storing atmospheric CO2 long-term. Sustainable forest management, afforestation, and reforestation are therefore important contributions to climate change mitigation.
An important consideration in such efforts is that forests can turn from sinks to carbon sources. In 2019 forests took up a third less carbon than they did in the 1990s, due to higher temperatures, droughts and deforestation. The typical tropical forest may become a carbon source by the 2060s.
Researchers have found that, in terms of environmental services, it is better to avoid deforestation than to allow deforestation and subsequently reforest, as the latter leads to irreversible effects in terms of biodiversity loss and soil degradation. Furthermore, the probability that legacy carbon will be released from soil is higher in younger boreal forest. Global greenhouse gas emissions caused by damage to tropical rainforests may have been substantially underestimated until around 2019. Additionally, the climate benefits of afforestation and reforestation lie farther in the future than those of keeping existing forests intact: it takes much longer − several decades − for newly planted trees to deliver the same carbon sequestration benefits as mature trees in tropical forests, and hence as limiting deforestation. Therefore, scientists consider "the protection and recovery of carbon-rich and long-lived ecosystems, especially natural forests" to be "the major climate solution".
The planting of trees on marginal crop and pasture lands helps to incorporate carbon from atmospheric into biomass. For this carbon sequestration process to succeed the carbon must not return to the atmosphere from biomass burning or rotting when the trees die. To this end, land allotted to the trees must not be converted to other uses. Alternatively, the wood from them must itself be sequestered, e.g., via biochar, bioenergy with carbon capture and storage, landfill or stored by use in construction.
Earth offers enough room to plant an additional 0.9 billion ha of tree canopy cover, although this estimate has been criticized, and the true area with a net cooling effect on the climate, when accounting for biophysical feedbacks like albedo, is 20-80% lower. Planting and protecting these trees would sequester 205 billion tons of carbon if the trees survive future climate stress to reach maturity. To put this number into perspective, it is about 20 years of current global carbon emissions (as of 2019). This level of sequestration would represent about 25% of the atmosphere's carbon pool in 2019.
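A back-of-the-envelope check of these figures; the emission rate and atmospheric pool below are rough round numbers assumed for illustration, not values from the cited study:

```python
# Sanity-check the tree-restoration figures quoted above.
SEQUESTERED_C = 205.0      # GtC, potential storage from the study
ANNUAL_EMISSIONS_C = 10.0  # GtC/yr, rough global emissions circa 2019 (assumed)
ATMOSPHERIC_POOL_C = 870.0 # GtC, approximate atmospheric carbon pool (assumed)

years_of_emissions = SEQUESTERED_C / ANNUAL_EMISSIONS_C
share_of_pool = SEQUESTERED_C / ATMOSPHERIC_POOL_C
print(f"~{years_of_emissions:.0f} years of emissions")  # ~20 years
print(f"~{share_of_pool:.0%} of the atmospheric pool")  # ~24%
```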
Life expectancy of forests varies throughout the world, influenced by tree species, site conditions, and natural disturbance patterns. In some forests, carbon may be stored for centuries, while in other forests, carbon is released with frequent stand replacing fires. Forests that are harvested prior to stand replacing events allow for the retention of carbon in manufactured forest products such as lumber. However, only a portion of the carbon removed from logged forests ends up as durable goods and buildings. The remainder ends up as sawmill by-products such as pulp, paper, and pallets. If all new construction globally utilized 90% wood products, largely via adoption of mass timber in low rise construction, this could sequester 700 million net tons of carbon per year. This is in addition to the elimination of carbon emissions from the displaced construction material such as steel or concrete, which are carbon-intense to produce.
A meta-analysis found that mixed species plantations would increase carbon storage alongside other benefits of diversifying planted forests.
Although a bamboo forest stores less total carbon than a mature forest of trees, a bamboo plantation sequesters carbon at a much faster rate than a mature forest or a tree plantation. Therefore, the farming of bamboo timber may have significant carbon sequestration potential.
The Food and Agriculture Organization (FAO) reported that: "The total carbon stock in forests decreased from 668 gigatonnes in 1990 to 662 gigatonnes in 2020". In Canada's boreal forests as much as 80% of the total carbon is stored in the soils as dead organic matter.
The IPCC Sixth Assessment Report says: "Secondary forest regrowth and restoration of degraded forests and non-forest ecosystems can play a large role in carbon sequestration (high confidence) with high resilience to disturbances and additional benefits such as enhanced biodiversity."
Impacts on temperature are affected by the location of the forest. For example, reforestation in boreal or subarctic regions has less impact on climate. This is because it substitutes a high-albedo, snow-dominated region with a lower-albedo forest canopy. By contrast, tropical reforestation projects lead to a positive change such as the formation of clouds. These clouds then reflect the sunlight, lowering temperatures.
Planting trees in tropical climates with wet seasons has another advantage. In such a setting, trees grow more quickly (fixing more carbon) because they can grow year-round. Trees in tropical climates have, on average, larger, brighter, and more abundant leaves than non-tropical climates. A study of the girth of 70,000 trees across Africa has shown that tropical forests fix more carbon dioxide pollution than previously realized. The research suggested almost one-fifth of fossil fuel emissions are absorbed by forests across Africa, Amazonia and Asia. Simon Lewis stated, "Tropical forest trees are absorbing about 18% of the carbon dioxide added to the atmosphere each year from burning fossil fuels, substantially buffering the rate of change."
Wetlands
Wetland restoration involves restoring a wetland's natural biological, geological, and chemical functions through re-establishment or rehabilitation. It is a good way to reduce climate change. Wetland soil, particularly in coastal wetlands such as mangroves, sea grasses, and salt marshes, is an important carbon reservoir; 20–30% of the world's soil carbon is found in wetlands, while only 5–8% of the world's land is composed of wetlands. Studies have shown that restored wetlands can become productive sinks and many are being restored. Aside from climate benefits, wetland restoration and conservation can help preserve biodiversity, improve water quality, and aid with flood control.
The plants that make up wetlands absorb carbon dioxide (CO2) from the atmosphere and convert it into organic matter. The waterlogged nature of the soil slows down the decomposition of organic material, leading to the accumulation of carbon-rich sediments, acting as a long-term carbon sink. Also, anaerobic conditions in waterlogged soils hinder the complete breakdown of organic matter, promoting the conversion of carbon into more stable forms.
As with forests, for the sequestration process to succeed, the wetland must remain undisturbed. If it is disturbed, the carbon stored in the plants and sediments will be released back into the atmosphere, and the ecosystem will no longer function as a carbon sink. Additionally, some wetlands can release non-CO2 greenhouse gases, such as methane and nitrous oxide, which could offset potential climate benefits. The amounts of carbon sequestered via blue carbon by wetlands can also be difficult to measure.
Wetland soil is an important carbon sink; 14.5% of the world's soil carbon is found in wetlands, while only 5.5% of the world's land is composed of wetlands. Not only are wetlands a great carbon sink, they have many other benefits like collecting floodwater, filtering out air and water pollutants, and creating a home for numerous birds, fish, insects, and plants.
Climate change could alter wetland soil carbon storage, changing it from a sink to a source. With rising temperatures comes an increase in greenhouse gases from wetlands, especially in locations with permafrost. When this permafrost melts, it increases the available oxygen and water in the soil. Because of this, bacteria in the soil would create large amounts of carbon dioxide and methane to be released into the atmosphere.
The link between climate change and wetlands is still not fully known. It is also not clear how restored wetlands manage carbon while still being a contributing source of methane. However, preserving these areas would help prevent further release of carbon into the atmosphere.
Peatlands, mires and peat bogs
Despite occupying only 3% of the global land area, peatlands hold approximately 30% of the carbon in our ecosystem - twice the amount stored in the world's forests. Most peatlands are situated in high latitude areas of the northern hemisphere, with most of their growth occurring since the last ice age, but they are also found in tropical regions, such as the Amazon and Congo Basin.
Peatlands grow steadily over thousands of years, accumulating dead plant material – and the carbon contained within it – due to waterlogged conditions which greatly slow rates of decay. If peatlands are drained, for farmland or development, the plant material stored within them decomposes rapidly, releasing stored carbon. These degraded peatlands account for 5-10% of global carbon emissions from human activities. The loss of one peatland could potentially produce more carbon than 175–500 years of methane emissions.
Peatland protection and restoration are therefore important measures to mitigate carbon emissions, and also provides benefits for biodiversity, freshwater provision, and flood risk reduction.
Agriculture
Compared to natural vegetation, cropland soils are depleted in soil organic carbon (SOC). When soil is converted from natural or semi-natural land, such as forests, woodlands, grasslands, steppes, and savannas, the SOC content in the soil reduces by about 30–40%. This loss is due to the removal of harvested plant material, which contains carbon. When land use changes, the carbon in the soil will either increase or decrease, and this change will continue until the soil reaches a new equilibrium. Deviations from this equilibrium can also be affected by climate variability.
The decreasing of SOC content can be counteracted by increasing the carbon input. This can be done with several strategies, e.g. leave harvest residues on the field, use manure as fertilizer, or include perennial crops in the rotation. Perennial crops have a larger below-ground biomass fraction, which increases the SOC content.
Perennial crops reduce the need for tillage and thus help mitigate soil erosion, and may help increase soil organic matter. Globally, soils are estimated to contain >8,580 gigatons of organic carbon, about ten times the amount in the atmosphere and much more than in vegetation.
Researchers have found that rising temperatures can lead to population booms in soil microbes, converting stored carbon into carbon dioxide. In laboratory experiments heating soil, fungi-rich soils released less carbon dioxide than other soils.
Following carbon dioxide (CO2) absorption from the atmosphere, plants deposit organic matter into the soil. This organic matter, derived from decaying plant material and root systems, is rich in carbon compounds. Microorganisms in the soil break down this organic matter, and in the process, some of the carbon becomes further stabilized in the soil as humus - a process known as humification.
On a global basis, it is estimated that soil contains about 2,500 gigatons of carbon. This is more than three times the carbon found in the atmosphere and four times that found in living plants and animals. About 70% of the global soil organic carbon in non-permafrost areas is found in the deeper soil within the upper metre and is stabilized by mineral-organic associations.
Carbon farming
Prairies
Prairie restoration is a conservation effort to restore prairie lands that were destroyed due to industrial, agricultural, commercial, or residential development. The primary aim is to return areas and ecosystems to their previous state before their depletion. The mass of SOC able to be stored in these restored plots is typically greater than the previous crop, acting as a more effective carbon sink.
Biochar
Biochar is charcoal created by pyrolysis of biomass waste. The resulting material is added to a landfill or used as a soil improver to create terra preta. Adding biochar may increase the soil-C stock for the long term and so mitigate global warming by offsetting the atmospheric C (up to 9.5 Gigatons C annually). In the soil, the biochar carbon is unavailable for oxidation to CO2 and consequent atmospheric release. However, concerns have been raised about biochar potentially accelerating release of the carbon already present in the soil.
Terra preta, an anthropogenic, high-carbon soil, is also being investigated as a sequestration mechanism. By pyrolysing biomass, about half of its carbon can be reduced to charcoal, which can persist in the soil for centuries, and makes a useful soil amendment, especially in tropical soils (biochar or agrichar).
Burial of biomass
Burying biomass (such as trees) directly mimics the natural processes that created fossil fuels. The global potential for carbon sequestration using wood burial is estimated to be 10 ± 5 GtC/yr, with the largest rates in tropical forests (4.2 GtC/yr), followed by temperate (3.7 GtC/yr) and boreal forests (2.1 GtC/yr). In 2008, Ning Zeng of the University of Maryland estimated that 65 GtC lies on the floor of the world's forests as coarse woody material that could be buried, and that costs for wood burial carbon sequestration run at US$50/tC, much lower than carbon capture from, e.g., power plant emissions. CO2 fixation into woody biomass is a natural process carried out through photosynthesis. This is a nature-based solution, and methods being trialled include the use of "wood vaults" to store the wood-containing carbon under oxygen-free conditions.
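For scale, combining the quoted stock and unit cost gives a rough price tag; this is simple multiplication with no discounting or logistics, so treat it only as an order-of-magnitude sketch:

```python
# Rough cost scaling for wood burial, using the figures quoted above.
COARSE_WOOD_STOCK_GTC = 65.0  # GtC lying on forest floors (Zeng, 2008)
UNIT_COST_USD_PER_TC = 50.0   # US$/tC quoted for wood burial

total_cost = COARSE_WOOD_STOCK_GTC * 1e9 * UNIT_COST_USD_PER_TC  # Gt -> t
print(f"Burying the full stock: ~US${total_cost / 1e12:.1f} trillion")  # ~3.3
```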
In 2022, a certification organization published methodologies for biomass burial. Other biomass storage proposals have included the burial of biomass deep underwater, including at the bottom of the Black Sea.
Geological carbon sequestration
Underground storage in suitable geologic formations
Geological sequestration refers to the storage of CO2 underground in depleted oil and gas reservoirs, saline formations, or deep coal beds unsuitable for mining.
Once CO2 is captured from a point source, such as a cement factory, it can be compressed to ≈100 bar into a supercritical fluid. In this form, the CO2 could be transported via pipeline to the place of storage. The CO2 could then be injected deep underground, where it would be stable for hundreds to millions of years. Under these storage conditions, the density of supercritical CO2 is 600 to 800 kg/m3.
The important parameters in determining a good site for carbon storage are: rock porosity, rock permeability, absence of faults, and geometry of rock layers. The medium in which the CO2 is to be stored ideally has a high porosity and permeability, such as sandstone or limestone. Sandstone can have a permeability ranging from 1 to 10−5 Darcy, with a porosity as high as ≈30%. The porous rock must be capped by a layer of low permeability which acts as a seal, or caprock, for the CO2. Shale is an example of a very good caprock, with a permeability of 10−5 to 10−9 Darcy. Once injected, the CO2 plume will rise via buoyant forces, since it is less dense than its surroundings. Once it encounters a caprock, it will spread laterally until it encounters a gap. If there are fault planes near the injection zone, there is a possibility the CO2 could migrate along the fault to the surface, leaking into the atmosphere, which would be potentially dangerous to life in the surrounding area. Another risk related to carbon sequestration is induced seismicity. If the injection of CO2 creates pressures underground that are too high, the formation will fracture, potentially causing an earthquake.
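To connect these numbers, here is a simple volumetric estimate of storage capacity. The storage-efficiency factor is an assumed illustrative value (real assessments derive it from reservoir engineering); the density and porosity come from the text above:

```python
# Illustrative CO2 storage capacity of 1 km^3 of saline sandstone.
RESERVOIR_VOLUME = 1e9  # m^3 (1 km^3)
POROSITY = 0.30         # upper-end sandstone porosity from the text
EFFICIENCY = 0.05       # fraction of pore space actually filled (assumed)
CO2_DENSITY = 700.0     # kg/m^3, mid-range supercritical density from the text

stored_mass = RESERVOIR_VOLUME * POROSITY * EFFICIENCY * CO2_DENSITY
print(f"~{stored_mass / 1e9:.1f} Mt CO2 per km^3")  # ~10.5 Mt
```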
Structural trapping is considered the principal storage mechanism, impermeable or low permeability rocks such as mudstone, anhydrite, halite, or tight carbonates act as a barrier to the upward buoyant migration of CO2, resulting in the retention of CO2 within a storage formation. While trapped in a rock formation, CO2 can be in the supercritical fluid phase or dissolve in groundwater/brine. It can also react with minerals in the geologic formation to become carbonates.
Mineral sequestration
Mineral sequestration aims to trap carbon in the form of solid carbonate salts. This process occurs slowly in nature and is responsible for the deposition and accumulation of limestone over geologic time. Carbonic acid in groundwater slowly reacts with complex silicates to dissolve calcium, magnesium, alkalis and silica and leave a residue of clay minerals. The dissolved calcium and magnesium react with bicarbonate to precipitate calcium and magnesium carbonates, a process that organisms use to make shells. When the organisms die, their shells are deposited as sediment and eventually turn into limestone. Limestones have accumulated over billions of years of geologic time and contain much of Earth's carbon. Ongoing research aims to speed up similar reactions involving alkali carbonates.
Zeolitic imidazolate frameworks (ZIFs) are metal–organic frameworks similar to zeolites. Because of their porosity, chemical stability and thermal resistance, ZIFs are being examined for their capacity to capture carbon dioxide.
Mineral carbonation
CO2 exothermically reacts with metal oxides, producing stable carbonates (e.g. calcite, magnesite). This process (CO2-to-stone) occurs naturally over periods of years and is responsible for much surface limestone. Olivine is one such reactive mineral. Rocks rich in metal oxides that react with CO2, such as the MgO and CaO contained in basalts, have been proven a viable means to achieve carbon-dioxide mineral storage. The reaction rate can in principle be accelerated with a catalyst, by increasing pressures, or by mineral pre-treatment, although these methods can require additional energy.
Ultramafic mine tailings are a readily available source of fine-grained metal oxides that could serve this purpose. Accelerating passive CO2 sequestration via mineral carbonation may be achieved through microbial processes that enhance mineral dissolution and carbonate precipitation.
Carbon, in the form of CO2, can be removed from the atmosphere by chemical processes and stored in stable carbonate mineral forms. This process (CO2-to-stone) is known as "carbon sequestration by mineral carbonation" or mineral sequestration. The process involves reacting carbon dioxide with abundantly available metal oxides – either magnesium oxide (MgO) or calcium oxide (CaO) – to form stable carbonates. These reactions are exothermic and occur naturally (e.g., the weathering of rock over geologic time periods).
CaO + CO2 → CaCO3
MgO + CO2 → MgCO3
Calcium and magnesium are found in nature typically as calcium and magnesium silicates (such as forsterite and serpentinite) and not as binary oxides. For forsterite and serpentine the reactions are:
Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2
Mg3Si2O5(OH)4 + 3 CO2 → 3 MgCO3 + 2 SiO2 + 2 H2O
These reactions are slightly more favorable at low temperatures. This process occurs naturally over geologic time frames and is responsible for much of the Earth's surface limestone. The reaction rate can be made faster, however, by reacting at higher temperatures and/or pressures, although this method requires some additional energy. Alternatively, the mineral could be milled to increase its surface area and exposed to water and constant abrasion to remove the inert silica, as could be achieved naturally by dumping olivine in the high-energy surf of beaches.
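Using the forsterite reaction above, simple stoichiometry gives the mass of rock needed per tonne of CO2; molar masses are standard values, and the estimate ignores impurities and incomplete reaction:

```python
# Stoichiometry of forsterite carbonation: Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
M_FORSTERITE = 140.69  # g/mol, Mg2SiO4
M_CO2 = 44.01          # g/mol
CO2_PER_MOL = 2        # mol CO2 bound per mol of forsterite

co2_per_tonne_rock = CO2_PER_MOL * M_CO2 / M_FORSTERITE
print(f"1 t forsterite binds ~{co2_per_tonne_rock:.2f} t CO2")  # ~0.63 t
print(f"~{1 / co2_per_tonne_rock:.1f} t forsterite per t CO2")  # ~1.6 t
```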
When CO2 is dissolved in water and injected into hot basaltic rocks underground, it has been shown that the CO2 reacts with the basalt to form solid carbonate minerals. A test plant in Iceland started up in October 2017, extracting up to 50 tons of CO2 a year from the atmosphere and storing it underground in basaltic rock.
Sequestration in oceans
Several start-ups are trying to sequester carbon in the oceans at scale.
Marine carbon pumps
The ocean naturally sequesters carbon through different processes. The solubility pump moves carbon dioxide from the atmosphere into the surface ocean where it reacts with water molecules to form carbonic acid. The solubility of carbon dioxide increases with decreasing water temperatures. Thermohaline circulation moves dissolved carbon dioxide to cooler waters where it is more soluble, increasing carbon concentrations in the ocean interior. The biological pump moves dissolved carbon dioxide from the surface ocean to the ocean's interior through the conversion of inorganic carbon to organic carbon by photosynthesis. Organic matter that survives respiration and remineralization can be transported through sinking particles and organism migration to the deep ocean.
The low temperatures, high pressure, and reduced oxygen levels in the deep sea slow down decomposition processes, preventing the rapid release of carbon back into the atmosphere and acting as a long-term storage reservoir.
Vegetated coastal ecosystems
Seaweed farming and algae
Seaweed grows in shallow and coastal areas, and captures significant amounts of carbon that can be transported to the deep ocean by oceanic mechanisms; seaweed reaching the deep ocean sequesters carbon and prevents it from exchanging with the atmosphere over millennia. Growing seaweed offshore with the purpose of sinking it in the depths of the sea to sequester carbon has been suggested. In addition, seaweed grows very fast and can theoretically be harvested and processed to generate biomethane via anaerobic digestion, to generate electricity via cogeneration/CHP, or as a replacement for natural gas. One study suggested that if seaweed farms covered 9% of the ocean, they could produce enough biomethane to supply Earth's equivalent demand for fossil fuel energy, remove 53 gigatonnes of CO2 per year from the atmosphere, and sustainably produce 200 kg per year of fish, per person, for 10 billion people. Ideal species for such farming and conversion include Laminaria digitata, Fucus serratus and Saccharina latissima.
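To put the 9%-of-ocean figure in context, the implied farm area and areal removal rate can be computed directly; the ocean surface area is a standard approximate value assumed here:

```python
# Implied areal CO2 removal rate for the seaweed-farming scenario above.
OCEAN_AREA_KM2 = 361e6  # total ocean surface area, approximate (assumed)
FARM_FRACTION = 0.09    # 9% of the ocean, per the study
CO2_REMOVED_GT = 53.0   # Gt CO2 per year, per the study

farm_area = OCEAN_AREA_KM2 * FARM_FRACTION
rate = CO2_REMOVED_GT * 1e9 / farm_area  # t CO2 per km^2 per year
print(f"Farm area: ~{farm_area / 1e6:.0f} million km^2")  # ~32 million km^2
print(f"Removal rate: ~{rate:.0f} t CO2/km^2/yr")         # ~1600
```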
Both macroalgae and microalgae are being investigated as possible means of carbon sequestration. Marine phytoplankton perform half of the global photosynthetic CO2 fixation (net global primary production of ~50 Pg C per year) and half of the oxygen production despite amounting to only ~1% of global plant biomass.
Because algae lack the complex lignin associated with terrestrial plants, the carbon in algae is released into the atmosphere more rapidly than carbon captured on land. Algae have been proposed as a short-term storage pool of carbon that can be used as a feedstock for the production of various biogenic fuels.
Large-scale seaweed farming could sequester significant amounts of carbon. Wild seaweed sequesters large amounts of carbon through dissolved particles of organic matter being transported to deep ocean seafloors, where they become buried and remain for long periods of time. With respect to carbon farming, the potential growth of seaweed for this purpose would see the harvested seaweed transported to the deep ocean for long-term burial. Seaweed farming occurs mostly in the Asian Pacific coastal areas, where it has been a rapidly increasing market. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" on seaweed farming as a mitigation tactic.
Ocean fertilization
Artificial upwelling
Artificial upwelling or downwelling is an approach that would change the mixing layers of the ocean. Encouraging various ocean layers to mix can move nutrients and dissolved gases around. Mixing may be achieved by placing large vertical pipes in the oceans to pump nutrient-rich water to the surface, triggering blooms of algae, which store carbon when they grow and export carbon when they die. This produces results somewhat similar to iron fertilization. One side-effect is a short-term rise in surface CO2, which limits its attractiveness.
Mixing layers involve transporting the denser and colder deep ocean water to the surface mixed layer. As the ocean temperature decreases with depth, more carbon dioxide and other compounds are able to dissolve in the deeper layers. This can be induced by reversing the oceanic carbon cycle through the use of large vertical pipes serving as ocean pumps, or a mixer array. When the nutrient rich deep ocean water is moved to the surface, algae bloom occurs, resulting in a decrease in carbon dioxide due to carbon intake from phytoplankton and other photosynthetic eukaryotic organisms. The transfer of heat between the layers will also cause seawater from the mixed layer to sink and absorb more carbon dioxide. This method has not gained much traction as algae bloom harms marine ecosystems by blocking sunlight and releasing harmful toxins into the ocean. The sudden increase in carbon dioxide on the surface level will also temporarily decrease the pH of the seawater, impairing the growth of coral reefs. The production of carbonic acid through the dissolution of carbon dioxide in seawater hinders marine biogenic calcification and causes major disruptions to the oceanic food chain.
Basalt storage
Carbon dioxide sequestration in basalt involves injecting CO2 into deep-sea basalt formations. The CO2 first mixes with seawater and then reacts with the basalt, which is rich in alkaline elements. This reaction results in the release of calcium and magnesium ions, which form stable carbonate minerals.
Underwater basalt offers a good alternative to other forms of oceanic carbon storage because it has a number of trapping measures to ensure added protection against leakage. These measures include "geochemical, sediment, gravitational and hydrate formation." Because CO2 hydrate is denser than seawater, the risk of leakage is minimal. Injecting the CO2 at sufficient depth ensures that the CO2 has a greater density than seawater, causing it to sink.
One possible injection site is the Juan de Fuca Plate. Researchers at the Lamont–Doherty Earth Observatory found that this plate off the western coast of the United States has a possible storage capacity of 208 gigatons. This could cover the entire current U.S. carbon emissions for over 100 years (as of 2009).
This process is undergoing tests as part of the CarbFix project, with 95% of the injected 250 tonnes of CO2 solidifying into calcite within two years, using 25 tonnes of water per tonne of CO2.
Mineralization and deep sea sediments
Similar to the mineralization processes that take place within rocks, mineralization can also occur under the sea. The rate of dissolution of carbon dioxide from the atmosphere to oceanic regions is determined by the circulation period of the ocean and the buffering ability of subducting surface water. Researchers have demonstrated that marine carbon dioxide storage at depths of several kilometres could be viable for up to 500 years, but this is dependent on injection site and conditions. Several studies have shown that although carbon dioxide may be fixed effectively, it may be released back to the atmosphere over time. However, this is unlikely for at least a few more centuries. The neutralization of CaCO3, or balancing the concentration of CaCO3 on the seafloor, on land, and in the ocean, can be measured on a timescale of thousands of years. More specifically, the predicted time is 1,700 years for the ocean and approximately 5,000 to 6,000 years for land. Further, the dissolution time for CaCO3 can be improved by injecting near or downstream of the storage site.
In addition to carbon mineralization, another proposal is deep-sea sediment injection, which injects liquid carbon dioxide deep below the surface directly into ocean sediments to generate carbon dioxide hydrate. Two regions are defined for exploration: 1) the negative buoyancy zone (NBZ), the region between the depth at which liquid carbon dioxide is denser than the surrounding water and the depth at which it is neutrally buoyant, and 2) the hydrate formation zone (HFZ), which typically has low temperatures and high pressures. Several research models have shown that the optimal depth of injection requires consideration of intrinsic permeability and any changes in liquid carbon dioxide permeability for optimal storage. The formation of hydrates decreases liquid carbon dioxide permeability, and injection below the HFZ is more energetically favored than injection within the HFZ. If the NBZ is a greater column of water than the HFZ, the injection should happen below the HFZ and directly into the NBZ. In this case, liquid carbon dioxide will sink to the NBZ and be stored below the buoyancy and hydrate cap. Carbon dioxide leakage can occur if there is dissolution into pore fluid or via molecular diffusion. However, this occurs over thousands of years.
Adding bases to neutralize acids
Carbon dioxide forms carbonic acid when dissolved in water, so ocean acidification is a significant consequence of elevated carbon dioxide levels, and limits the rate at which it can be absorbed into the ocean (the solubility pump). A variety of different bases have been suggested that could neutralize the acid and thus increase absorption. For example, adding crushed limestone to oceans enhances the absorption of carbon dioxide. Another approach is to add sodium hydroxide, produced by electrolysis of salt water or brine, to the oceans, while eliminating the waste hydrochloric acid by reaction with a volcanic silicate rock such as enstatite, effectively increasing the rate of natural weathering of these rocks to restore ocean pH.
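The underlying chemistry is the textbook carbonate system (a general summary, not specific to any one proposal):

```latex
\mathrm{CO_2(aq) + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
\qquad
\mathrm{HCO_3^- \rightleftharpoons H^+ + CO_3^{2-}}
```

Adding a base consumes H+, pulling these equilibria to the right, which both counteracts acidification and lets the surface ocean take up more CO2.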
Single-step carbon sequestration and storage
Single-step carbon sequestration and storage is a saline water-based mineralization technology extracting carbon dioxide from seawater and storing it in the form of solid minerals.
Abandoned ideas
Direct deep-sea carbon dioxide injection
It was once suggested that CO2 could be stored in the oceans by direct injection into the deep ocean and storing it there for some centuries. At the time, this proposal was called "ocean storage" but more precisely it was known as "direct deep-sea carbon dioxide injection". However, the interest in this avenue of carbon storage has much reduced since about 2001 because of concerns about the unknown impacts on marine life, high costs and concerns about its stability or permanence. The "IPCC Special Report on Carbon Dioxide Capture and Storage" in 2005 did include this technology as an option. However, the IPCC Fifth Assessment Report in 2014 no longer mentioned the term "ocean storage" in its report on climate change mitigation methods. The most recent IPCC Sixth Assessment Report in 2022 also no longer includes any mention of "ocean storage" in its "Carbon Dioxide Removal taxonomy".
Costs
Cost of carbon sequestration (not including capture and transport) varies but is below US$10 per tonne in some cases where onshore storage is available. For example, CarbFix's cost is around US$25 per tonne of CO2. A 2020 report estimated sequestration in forests (so including capture) at US$35 per tonne for small quantities to US$280 per tonne for 10% of the total required to keep to 1.5 °C warming. But there is a risk of forest fires releasing the carbon.
See also
Carbon budget
Mycorrhizal fungi and soil carbon storage
References
Carbon dioxide removal
Emissions reduction
Forestry and the environment
Photosynthesis
Reforestation
Sustainable food system | Carbon sequestration | [
"Chemistry",
"Engineering",
"Biology"
] | 7,761 | [
"Biochemistry",
"Emissions reduction",
"Geoengineering",
"Photosynthesis",
"Greenhouse gases",
"Carbon capture and storage"
] |
4,538,124 | https://en.wikipedia.org/wiki/Integrated%20gasification%20combined%20cycle | An integrated gasification combined cycle (IGCC) is a technology using a high pressure gasifier to turn coal and other carbon based fuels into pressurized gas—synthesis gas (syngas). It can then remove impurities from the syngas prior to the electricity generation cycle. Some of these pollutants, such as sulfur, can be turned into re-usable byproducts through the Claus process. This results in lower emissions of sulfur dioxide, particulates, mercury, and in some cases carbon dioxide. With additional process equipment, a water-gas shift reaction can increase gasification efficiency and reduce carbon monoxide emissions by converting it to carbon dioxide. The resulting carbon dioxide from the shift reaction can be separated, compressed, and stored through sequestration. Excess heat from the primary combustion and syngas fired generation is then passed to a steam cycle, similar to a combined cycle gas turbine. This process results in improved thermodynamic efficiency, compared to conventional pulverized coal combustion.
Significance
Coal can be found in abundance in the USA and many other countries, and its price has remained relatively constant in recent years. Of the traditional hydrocarbon fuels - oil, coal, and natural gas - coal is used as a feedstock for 40% of global electricity generation. Fossil fuel consumption and its contribution to large-scale emissions are becoming a pressing issue because of the adverse effects of climate change. In particular, coal contains more CO2 per BTU than oil or natural gas and is responsible for 43% of CO2 emissions from fuel combustion. Thus, the lower emissions that IGCC technology allows through gasification and pre-combustion carbon capture are discussed as a way of addressing the aforementioned concerns.
Operations
Below is a schematic flow diagram of an IGCC plant:
The gasification process can produce syngas from a wide variety of carbon-containing feedstocks, such as high-sulfur coal, heavy petroleum residues, and biomass.
The plant is called integrated because (1) the syngas produced in the gasification section is used as fuel for the gas turbine in the combined cycle and (2) the steam produced by the syngas coolers in the gasification section is used by the steam turbine in the combined cycle.
In this example the syngas produced is used as fuel in a gas turbine which produces electrical power. In a normal combined cycle, so-called "waste heat" from the gas turbine exhaust is used in a Heat Recovery Steam Generator (HRSG) to make steam for the steam turbine cycle. An IGCC plant improves the overall process efficiency by adding the higher-temperature steam produced by the gasification process to the steam turbine cycle. This steam is then used in steam turbines to produce additional electrical power.
IGCC plants are advantageous in comparison to conventional coal power plants due to their high thermal efficiency, low non-carbon greenhouse gas emissions, and capability to process low-grade coal. The disadvantages include higher capital and maintenance costs, and the amount of CO2 released without pre-combustion capture.
Process overview
The solid coal is gasified to produce syngas, or synthetic gas. Syngas is synthesized by gasifying coal in a closed pressurized reactor with a shortage of oxygen. The shortage of oxygen ensures that the coal is broken down by the heat and pressure as opposed to burning completely. The partial oxidation of the coal produces a mixture of carbon monoxide and hydrogen, or syngas:

CxHy + (x/2) O2 → x CO + (y/2) H2
The heat from the production of syngas is used to produce steam from cooling water which is then used for steam turbine electricity production.
The syngas must go through a pre-combustion separation process to remove CO2 and other impurities to produce a more purified fuel. Three steps are necessary for the separation of impurities:
Water-gas-shift reaction. The reaction that occurs in a water-gas-shift reactor is CO + H2O ⇌ CO2 + H2. This produces a syngas with a higher proportion of hydrogen fuel, which is more efficient to burn later in combustion (a stoichiometric sketch follows this list).
Physical separation process. This can be done through various mechanisms such as absorption, adsorption or membrane separation.
Drying, compression and storage/shipping.
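A minimal stoichiometric sketch of steps 1–2, idealized by assuming a complete shift and complete CO2 removal, which real plants only approach; the CH0.8 coal composition is an assumption for illustration:

```python
# Idealized shift-and-capture balance for 1 mol of carbon in coal,
# modeled as CH0.8 gasified to CO + 0.4 H2 (assumed composition).
co, h2 = 1.0, 0.4  # mol, raw syngas from gasification
# Water-gas shift: CO + H2O -> CO2 + H2 (assumed to run to completion)
co2 = co
h2 += co
co = 0.0
# Pre-combustion capture: remove the CO2 (assumed 100% for illustration)
captured = co2
print(f"H2 to turbine: {h2:.1f} mol, CO2 captured: {captured:.1f} mol")
```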
The resulting syngas fuels a combustion turbine that produces electricity. At this stage the syngas is fairly pure H2.
Benefits and drawbacks
A major drawback of using coal as a fuel source is the emission of carbon dioxide and pollutants, including sulfur dioxide, nitrogen oxide, mercury, and particulates. Almost all coal-fired power plants use pulverized coal combustion, which grinds the coal to increase the surface area, burns it to make steam, and runs the steam through a turbine to generate electricity. Pulverized coal plants can only capture carbon dioxide after combustion when it is diluted and harder to separate. In comparison, gasification in IGCC allows for separation and capture of the concentrated and pressurized carbon dioxide before combustion. Syngas cleanup includes filters to remove bulk particulates, scrubbing to remove fine particulates, and solid adsorbents for mercury removal. Additionally, hydrogen gas is used as fuel, which produces no pollutants under combustion.
IGCC also consumes less water than traditional pulverized coal plants. In a pulverized coal plant, coal is burned to produce steam, which is then used to create electricity using a steam turbine. The steam exhaust must then be condensed with cooling water, and water is lost by evaporation. In IGCC, water consumption is reduced by combustion in a gas turbine, which uses the generated heat to expand air and drive the turbine. Steam is only used to capture the heat from the combustion turbine exhaust for use in a secondary steam turbine. Currently, the major drawback is the high capital cost compared to other forms of power production.
Installations
The DOE Clean Coal Demonstration Project helped construct 3 IGCC plants: Edwardsport Power Station in Edwardsport, Indiana, Polk Power Station in Tampa, Florida (online 1996), and Pinon Pine in Reno, Nevada. In the Reno demonstration project, researchers found that then-current IGCC technology would not work more than 300 feet (91 m) above sea level. The DOE report in reference 3, however, makes no mention of any altitude effect, and most of the problems were associated with the solid waste extraction system. The Wabash River and Polk Power stations are currently operating, following resolution of demonstration start-up problems, but the Piñon Pine project encountered significant problems and was abandoned.
The US DOE's Clean Coal Power Initiative (CCPI Phase 2) selected the Kemper Project as one of two projects to demonstrate the feasibility of low emission coal-fired power plants. Mississippi Power began construction on the Kemper Project in Kemper County, Mississippi, in 2010 and is poised to begin operation in 2016, though there have been many delays. In March, the projected date was further pushed back from early 2016 to August 31, 2016, adding $110 million to the total and putting the project 3 years behind schedule. The electrical plant is a flagship Carbon Capture and Storage (CCS) project that burns lignite coal and utilizes pre-combustion IGCC technology with a projected 65% emission capture rate.
The first generation of IGCC plants polluted less than contemporary coal-based technology, but also polluted water; for example, the Wabash River Plant was out of compliance with its water permit during 1998–2001 because it emitted arsenic, selenium and cyanide. The Wabash River Generating Station is now wholly owned and operated by the Wabash River Power Association.
IGCC is now touted as capture ready and could potentially be used to capture and store carbon dioxide (see FutureGen). Poland's Kędzierzyn was to host a Zero-Emission Power & Chemical Plant combining coal gasification technology with Carbon Capture & Storage (CCS); this installation had been planned, but there has been no information about it since 2009. Other operating IGCC plants in existence around the world are the Alexander (formerly Buggenum) in the Netherlands, Puertollano in Spain, and JGC in Japan.
The Texas Clean Energy project planned to build a 400 MW IGCC facility that would incorporate carbon capture, utilization and storage (CCUS) technology. The project would have been the first coal power plant in the United States to combine IGCC and 90% carbon capture and storage. The sponsor Summit Power filed for bankruptcy in 2017.
There are several advantages and disadvantages of IGCC pre-combustion capture when compared to conventional post-combustion carbon capture and its variations.
Cost and reliability
A key issue in implementing IGCC is its high capital cost, which prevents it from competing with other power plant technologies. Currently, ordinary pulverized coal plants are the lowest-cost power plant option. The advantage of IGCC comes from the ease of retrofitting existing power plants, which could offset the high capital cost. In a 2007 model comparing plants with carbon capture, IGCC with CCS is the lowest-cost system in all cases. This model compared estimates of the levelized cost of electricity, showing IGCC with CCS to cost 71.9 $US2005/MWh, pulverized coal with CCS to cost 88 $US2005/MWh, and natural gas combined cycle with CCS to cost 80.6 $US2005/MWh. The levelized cost of electricity was noticeably sensitive to the price of natural gas and the inclusion of carbon storage and transport costs.
The potential benefit of retrofitting has so far not offset the cost of IGCC with carbon capture technology. A 2013 report by the U.S. Energy Information Administration demonstrates that the overnight cost of IGCC with CCS has increased 19% since 2010. Amongst the three power plant types, pulverized coal with CCS has an overnight capital cost of $5,227 (2012 dollars)/kW, IGCC with CCS has an overnight capital cost of $6,599 (2012 dollars)/kW, and natural gas combined cycle with CCS has an overnight capital cost of $2,095 (2012 dollars)/kW. Pulverized coal and NGCC costs did not change significantly since 2010. The report further relates that the 19% increase in IGCC cost is due to recent information from IGCC projects that have gone over budget and cost more than expected.
Recent testimony in regulatory proceedings shows the cost of IGCC to be twice that predicted by Goodell, at $96 to $104/MWh. That is before the addition of carbon capture and sequestration (sequestration has been a mature technology at commercial scale for the past ten years, at both Weyburn in Canada, for enhanced oil recovery, and Sleipner in the North Sea); capture at a 90% rate is expected to add a cost of $30/MWh.
Wabash River was down repeatedly for long stretches due to gasifier problems. The gasifier problems have not been remedied—subsequent projects, such as Excelsior's Mesaba Project, have a third gasifier and train built in. However, the past year has seen Wabash River running reliably, with availability comparable to or better than other technologies.
The Polk County IGCC has design problems. First, the project was initially shut down because of corrosion in the slurry pipeline that fed slurried coal from the rail cars into the gasifier. A new coating for the pipe was developed. Second, the thermocouple was replaced in less than two years, an indication that the gasifier had problems with a variety of feedstocks, from bituminous to sub-bituminous coal. The gasifier was designed to also handle lower-rank lignites. Third, there was unplanned downtime on the gasifier because of refractory liner problems, and those problems were expensive to repair. The gasifier was originally designed in Italy to be half the size of what was built at Polk. Newer ceramic materials may assist in improving gasifier performance and longevity. Understanding the operating problems of the current IGCC plant is necessary to improve the design for the IGCC plant of the future. (Polk IGCC Power Plant, https://web.archive.org/web/20151228085513/http://www.clean-energy.us/projects/polk_florida.html; Keim, K., 2009, IGCC A Project on Sustainability Management Systems for Plant Re-Design and Re-Image, unpublished paper, Harvard University.)
General Electric is currently designing an IGCC model plant that should introduce greater reliability. GE's model features advanced turbines optimized for the coal syngas. Eastman's industrial gasification plant in Kingsport, TN uses a GE Energy solid-fed gasifier. Eastman, a Fortune 500 company, built the facility in 1983 without any state or federal subsidies and turns a profit.
There are several refinery-based IGCC plants in Europe that have demonstrated good availability (90-95%) after initial shakedown periods. Several factors help this performance:
None of these facilities use advanced technology (F type) gas turbines.
All refinery-based plants use refinery residues, rather than coal, as the feedstock. This eliminates coal handling and coal preparation equipment and its problems. Also, there is a much lower level of ash produced in the gasifier, which reduces cleanup and downtime in its gas cooling and cleaning stages.
These non-utility plants have recognized the need to treat the gasification system as an up-front chemical processing plant, and have reorganized their operating staff accordingly.
Another IGCC success story was the 250 MW Buggenum plant in the Netherlands, which was commissioned in 1994, closed in 2013, and had good availability. This coal-based IGCC plant was originally designed to use up to 30% biomass as a supplemental feedstock. The owner, NUON, was paid an incentive fee by the government to use the biomass. NUON constructed a 1,311 MW IGCC plant in the Netherlands, comprising three 437 MW CCGT units. The Nuon Magnum IGCC power plant was commissioned in 2011 and officially opened in June 2013. Mitsubishi Heavy Industries was awarded the contract to construct the power plant. Following a deal with environmental organizations, NUON has been prohibited from using the Magnum plant to burn coal and biomass until 2020. Because of high gas prices in the Netherlands, two of the three units are currently offline, whilst the third unit sees only low usage levels. The relatively low 59% efficiency of the Magnum plant means that more efficient CCGT plants (such as the Hemweg 9 plant) are preferred to provide (backup) power.
A new generation of IGCC-based coal-fired power plants has been proposed, although none is yet under construction. Projects are being developed by AEP, Duke Energy, and Southern Company in the US, and in Europe by ZAK/PKE, Centrica (UK), E.ON and RWE (both Germany) and NUON (Netherlands). In Minnesota, the state's Dept. of Commerce analysis found IGCC to have the highest cost, with an emissions profile not significantly better than pulverized coal. In Delaware, the Delmarva and state consultant analysis had essentially the same results.
The high cost of IGCC is the biggest obstacle to its integration in the power market; however, most energy executives recognize that carbon regulation is coming soon. Bills requiring carbon reduction are being proposed again in both the House and the Senate, and with the Democratic majority it seems likely that with the next President there will be a greater push for carbon regulation. The Supreme Court decision requiring the EPA to regulate carbon (Commonwealth of Massachusetts et al. v. Environmental Protection Agency et al.)[20] also speaks to the likelihood of future carbon regulations coming sooner, rather than later. With carbon capture, the cost of electricity from an IGCC plant would increase approximately 33%. For a natural gas CC, the increase is approximately 46%. For a pulverized coal plant, the increase is approximately 57%. This potential for less expensive carbon capture makes IGCC an attractive choice for keeping low-cost coal an available fuel source in a carbon-constrained world. However, the industry needs a lot more experience to reduce the risk premium. IGCC with CCS requires some sort of mandate, higher carbon market price, or regulatory framework to properly incentivize the industry.
In Japan, electric power companies, in conjunction with Mitsubishi Heavy Industries, have been operating a 200 t/d IGCC pilot plant since the early '90s. In September 2007, they started up a 250 MW demonstration plant in Nakoso. It runs on air-blown (not oxygen-blown) dry-feed coal only. It burns PRB coal with an unburned carbon content ratio of <0.1% and no detected leaching of trace elements. It employs not only F-type turbines but G-type as well. (see gasification.org link below)
Next generation IGCC plants with CO2 capture technology will be expected to have higher thermal efficiency and to hold the cost down because of simplified systems compared to conventional IGCC. The main feature is that instead of using oxygen and nitrogen to gasify coal, they use oxygen and CO2. The main advantage is that it is possible to improve the performance of cold gas efficiency and to reduce the unburned carbon (char).
As a reference for power plant efficiency:
With a Frame E gas turbine, a 30 bar quench gasifier, low-temperature gas cleaning, and a two-pressure-level HRSG, around 38% energy efficiency is achievable.
With a Frame F gas turbine, a 60 bar quench gasifier, low-temperature gas cleaning, and a three-pressure-level HRSG with reheat, around 45% energy efficiency is achievable.
The latest developments (Frame G gas turbines, ASU air-side integration, and high-temperature desulfurization) may shift performance up even further.
The CO2 extracted from gas turbine exhaust gas is utilized in this system. Using a closed gas turbine cycle capable of capturing the CO2 by direct compression and liquefaction obviates the need for a separate separation and capture system.
CO2 capture in IGCC
Pre-combustion CO2 removal is much easier than CO2 removal from flue gas in post-combustion capture, due to the high concentration of CO2 after the water-gas shift reaction and the high pressure of the syngas. During pre-combustion in IGCC, the partial pressure of CO2 is orders of magnitude higher than in post-combustion flue gas. Because of this high pre-combustion CO2 concentration, physical solvents, such as Selexol and Rectisol, are preferred over chemical solvents for the removal of CO2. Physical solvents work by absorbing the acid gases without the need for a chemical reaction, as in traditional amine-based solvents. The solvent can then be regenerated, and the CO2 desorbed, by reducing the pressure. The biggest obstacle with physical solvents is the need for the syngas to be cooled before separation and reheated afterwards for combustion, which requires energy and decreases overall plant efficiency.
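A minimal numerical sketch of why physical solvents benefit from pre-combustion conditions: by Henry's law, the CO2 loading of a physical solvent scales roughly linearly with CO2 partial pressure. The pressures, mole fractions, and Henry coefficient below are illustrative assumptions, not data for Selexol or Rectisol.
```python
# Hedged illustration: Henry's-law loading of a physical solvent,
# q = p_CO2 / H, compared at assumed pre- and post-combustion conditions.
H = 30.0  # bar per (mol CO2 / mol solvent); hypothetical Henry coefficient

conditions = {
    # name: (total pressure in bar, CO2 mole fraction), assumed values
    "pre-combustion syngas (after shift)": (60.0, 0.40),
    "post-combustion flue gas": (1.0, 0.13),
}

for name, (p_total, y_co2) in conditions.items():
    p_co2 = p_total * y_co2   # CO2 partial pressure, bar
    loading = p_co2 / H       # mol CO2 absorbed per mol solvent
    print(f"{name}: p_CO2 = {p_co2:.2f} bar, loading = {loading:.3f}")
```
With these example numbers the pre-combustion partial pressure, and hence the solvent loading per pass, is nearly 200 times higher, which is why simple pressure-swing regeneration becomes economical.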
Testing
National and international test codes are used to standardize the procedures and definitions used to test IGCC power plants. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the plant and associated systems. In the United States, the American Society of Mechanical Engineers published the Performance Test Code for IGCC Power Generation Plants (PTC 47) in 2006, which provides procedures for determining the quantity and quality of fuel gas by its flow rate, temperature, pressure, composition, heating value, and content of contaminants.
IGCC emission controversy
In 2007, the New York State Attorney General's office demanded full disclosure of "financial risks from greenhouse gases" to the shareholders of electric power companies proposing the development of IGCC coal-fired power plants: "Any one of the several new or likely regulatory initiatives for CO2 emissions from power plants - including state carbon controls, EPA's regulations under the Clean Air Act, or the enactment of federal global warming legislation - would add a significant cost to carbon-intensive coal generation." U.S. Senator Hillary Clinton of New York has proposed that this full risk disclosure be required of all publicly traded power companies nationwide. Such disclosure has begun to reduce investor interest in all types of existing-technology coal-fired power plant development, including IGCC.
Senator Harry Reid (Majority Leader of the 2007/2008 U.S. Senate) told the 2007 Clean Energy Summit that he would do everything he could to stop construction of proposed new IGCC coal-fired electric power plants in Nevada. Reid wants Nevada utility companies to invest in solar energy, wind energy, and geothermal energy instead of coal technologies. Reid stated that global warming is a reality, that just one proposed coal-fired plant would contribute to it by burning seven million tons of coal a year, and claimed that the long-term healthcare costs would be far too high. "I'm going to do everything I can to stop these plants," he said. "There is no clean coal technology. There is cleaner coal technology, but there is no clean coal technology."
One of the most efficient ways to treat the H2S gas from an IGCC plant is by converting it into sulphuric acid in the wet gas sulphuric acid (WSA) process. However, the majority of H2S treating plants utilize the modified Claus process, as the sulphur market infrastructure and the transportation costs of sulphuric acid versus sulphur favour sulphur production.
See also
Relative cost of electricity generated by different sources
Environmental impact of the coal industry
Integrated Gasification Fuel Cell Cycle
References
External links
Huntstown: Ireland's most efficient power plant @ Siemens Power Generation website
Natural Gas Combined-cycle Gas Turbine Power Plants Northwest Power Planning Council, New Resource Characterization for the Fifth Power Plan, August 2002
Combined cycle solar power
Thermodynamic cycles
Chemical processes
Power station technology
Energy conversion | Integrated gasification combined cycle | [
"Chemistry"
] | 4,587 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
4,538,295 | https://en.wikipedia.org/wiki/Australian%20Synchrotron | The Australian Synchrotron is a 3 GeV national synchrotron radiation facility located in Clayton, in the south-eastern suburbs of Melbourne, Victoria. The facility opened in 2007, and is operated by the Australian Nuclear Science and Technology Organisation.
ANSTO's Australian Synchrotron is a light source facility (in contrast to a collider), which uses particle accelerators to produce a beam of high-energy electrons that are boosted to nearly the speed of light and directed into a storage ring where they circulate for many hours or even days at a time. As the path of these electrons is deflected in the storage ring by either bending magnets or insertion devices, they emit synchrotron light. The light is channelled to experimental endstations containing specialised equipment, enabling a range of research applications, including high-resolution imagery that is not possible under normal laboratory conditions.
ANSTO's Australian Synchrotron supports the research needs of Australia's major universities and research centres, and businesses ranging from small-to-medium enterprises to multinational companies. During 2014–15 the Australian Synchrotron supported more than 4,300 researcher visits and close to 1,000 experiments in areas such as medicine, agriculture, environment, defence, transport, advanced manufacturing and mining.
In 2015, the Australian Government announced a ten-year investment in operations through ANSTO, Australia's Nuclear Science and Technology Organisation. A 1.5 MW solar power system on the roof is expected to save $2 million in electricity costs over 5 years.
In 2020, it was used to help map the molecular structure of the COVID-19 virus, during the COVID-19 pandemic.
Accelerator systems
Electron gun
The electrons used to provide the synchrotron light are first produced at the electron gun, by thermionic emission from a heated metal cathode. The emitted electrons are then accelerated to an energy of 90 keV (kilo-electron volts) by a 90 kilovolt potential applied across the gun and make their way into the linear accelerator.
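As a quick worked example of the energies involved, a 90 keV electron is already mildly relativistic. The sketch below is illustrative arithmetic based on the stated gun energy, not facility documentation.
```python
import math

# Speed of an electron accelerated through the 90 kV gun potential.
KE_keV = 90.0          # kinetic energy from the text, keV
m_e_c2_keV = 511.0     # electron rest energy, keV (approximate)

gamma = 1.0 + KE_keV / m_e_c2_keV       # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c
print(f"gamma = {gamma:.3f}, v = {beta:.3f} c")  # roughly 0.53 c
```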
Linear accelerator
The linear accelerator (or linac) uses a series of RF cavities, operating at a frequency of 3 GHz, to accelerate the electron beam to an energy of 100 MeV over a distance of around 15 metres. Due to the nature of this acceleration, the beam must be separated into discrete packets, or 'bunches'. The bunching process is done at the start of the linac, using several 'bunching' cavities. The linac can accelerate a beam once every second. Further along the linac, quadrupole magnets are used to help focus the electron beam.
Booster synchrotron
The booster is an electron synchrotron which takes the 100 MeV beam from the linac and increases its energy to 3 GeV. The booster ring is 130 metres in circumference and contains a single 5-cell RF cavity (operating at 500 MHz) which provides energy to the electron beam. Acceleration of the beam is achieved by a simultaneous ramping up of the magnet strength and cavity fields. Each ramping cycle takes approximately 1 second (for a complete ramp up and down).
Storage ring
The storage ring is the final destination for the accelerated electrons. It is 216 metres in circumference and consists of 14 nearly identical sectors. Each sector consists of a straight section and an arc, with the arcs containing two dipole 'bending' magnets each. Each dipole magnet is a potential source of synchrotron light and most straight sections can also host an insertion device, giving the possibility of 30+ beamlines at the Australian Synchrotron. Two of the straight sections are used to host the storage ring 500 MHz RF cavities, which are essential for replacing the energy that the beam loses through synchrotron radiation. The storage ring also contains a large number of quadrupole and sextupole magnets used for beam focusing and chromaticity corrections. The ring is designed to hold 200 mA of stored current with a beam lifetime of over 20 hours.
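The stored current and circumference quoted above fix the number of circulating electrons. The following is simple illustrative arithmetic based on those figures, treating the electrons as moving at essentially the speed of light.
```python
e = 1.602176634e-19   # elementary charge, C
c = 2.99792458e8      # speed of light, m/s

I = 0.200             # stored beam current, A (200 mA from the text)
C = 216.0             # storage ring circumference, m

T_rev = C / c                 # revolution period, s (about 0.72 microseconds)
N_electrons = I * T_rev / e   # circulating charge divided by electron charge
print(f"revolution frequency = {1 / T_rev / 1e6:.2f} MHz")
print(f"stored electrons     = {N_electrons:.2e}")  # about 9e11
```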
Vacuum systems
The electron beam is kept within a very high vacuum at all times during the acceleration process and within the storage ring. This vacuum is necessary as any beam collisions with gas molecules will quickly degrade the beam quality and reduce the lifetime of the beam. The vacuum is achieved by enclosing the beam in a stainless steel pipe system, with numerous vacuum pump systems continually working to keep the vacuum quality high. Pressure within the storage ring is typically around 10⁻¹³ bar (10 nPa).
Control system
Each digital and analogue I/O channel is associated with a database entry in a customised distributed open source database system called EPICS (Experimental Physics and Industrial Control System).
The condition of the system is monitored and controlled by connecting specialised GUIs to the specified database entries.
There are about 171,000 database entries (also known as process variables), many of which relate to physical I/O. About 105,000 of these are permanently archived, at intervals ranging from tenths of a second to minutes.
Some high level control of the physics-related parameters of the beam is provided through MATLAB which also provides data analysis tools and an interface with a computerised model of the accelerator.
Personnel and equipment protection is achieved through the use of PLC-based systems, which also transfer data to EPICS.
The Beamlines also use EPICS as the basis for their control.
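As a hedged illustration of how an EPICS-based control system is typically accessed from client code, the sketch below uses the pyepics Channel Access bindings. The process variable name is a made-up example, not an actual Australian Synchrotron PV.
```python
# Illustrative EPICS Channel Access client using pyepics; the PV name
# below is hypothetical, not a real Australian Synchrotron record.
from epics import caget, camonitor

PV_NAME = "SR:BEAM:CURRENT"  # hypothetical storage-ring current PV

current = caget(PV_NAME)     # one-off read of the process variable
print(f"stored current: {current} mA")

def on_change(pvname=None, value=None, **kwargs):
    # Invoked by the client library whenever the PV updates.
    print(f"{pvname} changed to {value}")

camonitor(PV_NAME, callback=on_change)  # subscribe to value updates
```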
Australian Synchrotron beamlines
Imaging and Medical Beamline (IMBL)
X-ray Fluorescence Microscopy (XFM) beamline
Macromolecular and Micro crystallography (MX1 and MX2) beamlines (Protein crystallography)
Infrared microscopy (IRM) beamline
Far Infrared, THz Spectroscopy (THz) beamline
Soft X-ray Spectroscopy (SXR) beamline
Small and Wide Angle X-ray Scattering (SAXS/WAXS) beamline
X-ray Absorption Spectroscopy (XAS) beamline
Powder diffraction (PD) beamline
Micro Computed Tomography (MCT)
Medium Energy X-ray Absorption Spectroscopy (MEX1 and MEX2)
Beamlines under construction (as of 2023)
Biological Small Angle Scattering (BioSAXS)
Advanced Diffraction and Scattering (ADS1 and ADS2)
X-ray Fluorescence NanoProbe (Nano)
High Performance Macromolecular Crystallography (MX3)
See also
List of synchrotron radiation facilities
References
External links
Australian Synchrotron website
Facility Status software – updated every minute
ANSTO, Australia's Nuclear Science and Technology Organisation website
Lightsources – website about the world's synchrotrons
'The Australian Synchrotron is great... but what does it do?' at The Conversation, March 2012.
Deconstruction of Australian Synchrotron in symmetry magazine (Fermilab/SLAC), May 2006
Research institutes in Australia
Synchrotron radiation facilities
Monash University
2007 establishments in Australia
Buildings and structures in the City of Monash
Research institutes established in 2007 | Australian Synchrotron | [
"Materials_science"
] | 1,453 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
4,538,599 | https://en.wikipedia.org/wiki/Radial%20distribution%20function | In statistical mechanics, the radial distribution function, $g(r)$ (or pair correlation function), in a system of particles (atoms, molecules, colloids, etc.), describes how density varies as a function of distance from a reference particle.
If a given particle is taken to be at the origin O, and if $\rho = N/V$ is the average number density of particles, then the local time-averaged density at a distance $r$ from O is $\rho g(r)$. This simplified definition holds for a homogeneous and isotropic system. A more general case will be considered below.
In simplest terms it is a measure of the probability of finding a particle at a distance of $r$ away from a given reference particle, relative to that for an ideal gas. The general algorithm involves determining how many particles are within a distance of $r$ and $r + \mathrm{d}r$ away from a particle. This general theme is depicted to the right, where the red particle is our reference particle, and the blue particles are those whose centers are within the circular shell, dotted in orange.
The radial distribution function is usually determined by calculating the distance between all particle pairs and binning them into a histogram. The histogram is then normalized with respect to an ideal gas, where particle histograms are completely uncorrelated. For three dimensions, this normalization is the number density of the system multiplied by the volume of the spherical shell, which symbolically can be expressed as $\rho \, 4\pi r^2 \, \mathrm{d}r$.
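The histogram procedure just described is straightforward to implement. The sketch below is a minimal, illustrative implementation for particles in a cubic periodic box (uniformly random coordinates stand in for real simulation data); it is not taken from any particular simulation package.
```python
import numpy as np

def radial_distribution(positions, box, dr, r_max):
    """Histogram estimate of g(r) for particles in a cubic periodic box."""
    n = len(positions)
    rho = n / box**3                          # average number density
    edges = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(edges) - 1)

    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]  # vectors to all later particles
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]

    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal_pairs = 0.5 * n * rho * shells      # expected pair counts, ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal_pairs

# Ideal-gas-like test data: uniformly random points give g(r) close to 1.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
r, g = radial_distribution(pos, box=10.0, dr=0.2, r_max=5.0)
```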
Given a potential energy function, the radial distribution function can be computed either via computer simulation methods like the Monte Carlo method, or via the Ornstein–Zernike equation, using approximative closure relations like the Percus–Yevick approximation or the hypernetted-chain theory. It can also be determined experimentally, by radiation scattering techniques or by direct visualization for large enough (micrometer-sized) particles via traditional or confocal microscopy.
The radial distribution function is of fundamental importance since it can be used, via the Kirkwood–Buff solution theory, to link microscopic details to macroscopic properties. Moreover, by the reversion of the Kirkwood–Buff theory, it is possible to obtain the microscopic details of the radial distribution function from the macroscopic properties. The radial distribution function may also be inverted to predict the potential energy function using the Ornstein–Zernike equation or structure-optimized potential refinement.
Definition
Consider a system of $N$ particles in a volume $V$ (for an average number density $\rho = N/V$) and at a temperature $T$ (let us also define $\beta = \frac{1}{k_\mathrm{B}T}$, with $k_\mathrm{B}$ the Boltzmann constant). The particle coordinates are $\mathbf{r}_i$, with $i = 1, \ldots, N$. The potential energy due to the interaction between particles is $U_N(\mathbf{r}_1, \ldots, \mathbf{r}_N)$ and we do not consider the case of an externally applied field.
The appropriate averages are taken in the canonical ensemble $(N, V, T)$, with $Z_N = \int \cdots \int \mathrm{e}^{-\beta U_N} \, \mathrm{d}\mathbf{r}_1 \cdots \mathrm{d}\mathbf{r}_N$ the configurational integral, taken over all possible combinations of particle positions. The probability of an elementary configuration, namely finding particle 1 in $\mathrm{d}\mathbf{r}_1$, particle 2 in $\mathrm{d}\mathbf{r}_2$, etc. is given by
$$P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) \, \mathrm{d}\mathbf{r}_1 \cdots \mathrm{d}\mathbf{r}_N = \frac{\mathrm{e}^{-\beta U_N}}{Z_N} \, \mathrm{d}\mathbf{r}_1 \cdots \mathrm{d}\mathbf{r}_N. \qquad (1)$$
The total number of particles is huge, so that $P^{(N)}$ in itself is not very useful. However, one can also obtain the probability of a reduced configuration, where the positions of only $n < N$ particles are fixed, in $\mathbf{r}_1, \ldots, \mathbf{r}_n$, with no constraints on the remaining $N - n$ particles. To this end, one has to integrate (1) over the remaining coordinates $\mathbf{r}_{n+1}, \ldots, \mathbf{r}_N$:
$$P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{1}{Z_N} \int \cdots \int \mathrm{e}^{-\beta U_N} \, \mathrm{d}\mathbf{r}_{n+1} \cdots \mathrm{d}\mathbf{r}_N.$$
If the particles are non-interacting, in the sense that the potential energy of each particle does not depend on any of the other particles, $U_N(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i=1}^{N} u(\mathbf{r}_i)$, then the partition function factorizes, and the probability of an elementary configuration decomposes with independent arguments to a product of single-particle probabilities,
$$P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \prod_{i=1}^{N} P^{(1)}(\mathbf{r}_i).$$
Note how for non-interacting particles the probability is symmetric in its arguments. This is not true in general, and the order in which the positions occupy the argument slots of $P^{(n)}$ matters. Given a set of positions, the number of ways the $N$ particles can occupy those $n$ positions is $\frac{N!}{(N-n)!}$. The probability that those positions are occupied is found by summing over all configurations in which a particle is at each of those locations, which can be done by taking every permutation $\pi$ in the symmetric group on $N$ objects, $S_N$. For indistinguishable particles, one may permute all the particle positions without changing the probability of an elementary configuration, so that the $n$-particle density function reduces to
$$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{N!}{(N-n)!} \, P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n).$$
Integrating the $n$-particle density over its arguments gives the permutation factor $\frac{N!}{(N-n)!}$, counting the number of ways one can sequentially pick $n$ particles to place at the positions out of the total $N$ particles. Now let us turn to how we interpret this function for different values of $n$.
For $n = 1$, we have the one-particle density. For a crystal it is a periodic function with sharp maxima at the lattice sites. For a non-interacting gas, it is independent of the position and equal to the overall number density, $\rho$, of the system. To see this, first note that $U_N = 0$ in the volume occupied by the gas, and $\infty$ everywhere else. The partition function in this case is
$$Z_N = \int \cdots \int \mathrm{d}\mathbf{r}_1 \cdots \mathrm{d}\mathbf{r}_N = V^N,$$
from which the definition gives the desired result
$$\rho^{(1)}(\mathbf{r}) = \frac{N!}{(N-1)!} \, \frac{V^{N-1}}{V^N} = \frac{N}{V} = \rho.$$
In fact, for this special case every $n$-particle density is independent of coordinates, and can be computed explicitly: $\rho^{(n)} = \frac{N!}{(N-n)!} \, \frac{1}{V^n}$. For $N \gg n$, the non-interacting $n$-particle density is approximately $\rho^{(n)} \approx \rho^n$. With this in hand, the $n$-point correlation function $g^{(n)}$ is defined by factoring out the non-interacting contribution,
$$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \rho^n \, g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n).$$
Explicitly, this definition reads
$$g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{V^n \, N!}{N^n \, (N-n)!} \, \frac{1}{Z_N} \int \cdots \int \mathrm{e}^{-\beta U_N} \, \mathrm{d}\mathbf{r}_{n+1} \cdots \mathrm{d}\mathbf{r}_N,$$
where it is clear that the $n$-point correlation function is dimensionless.
Relations involving g(r)
Structure factor
The second-order correlation function is of special importance, as it is directly related (via a Fourier transform) to the structure factor of the system and can thus be determined experimentally using X-ray diffraction or neutron diffraction.
If the system consists of spherically symmetric particles, $g^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ depends only on the relative distance between them, $\mathbf{r}_{12} = \mathbf{r}_2 - \mathbf{r}_1$. We will drop the sub- and superscript: $g(\mathbf{r}) \equiv g^{(2)}(\mathbf{r}_{12})$. Taking particle 0 as fixed at the origin of the coordinates, $\rho g(\mathbf{r}) \, \mathrm{d}\mathbf{r}$ is the average number of particles (among the remaining $N - 1$) to be found in the volume $\mathrm{d}\mathbf{r}$ around the position $\mathbf{r}$.
We can formally count these particles and take the average via the expression $\rho g(\mathbf{r}) = \left\langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle$, with $\langle \cdot \rangle$ the ensemble average, yielding:
$$g(\mathbf{r}) = \frac{1}{\rho} \left\langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle = V \, \frac{N-1}{N} \left\langle \delta(\mathbf{r} - \mathbf{r}_1) \right\rangle,$$
where the second equality requires the equivalence of particles $1, \ldots, N-1$. The formula above is useful for relating $g(\mathbf{r})$ to the static structure factor $S(\mathbf{q})$, defined by $S(\mathbf{q}) = \frac{1}{N} \left\langle \sum_{i,j} \mathrm{e}^{-i\mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \right\rangle$, since we have:
$$S(\mathbf{q}) = 1 + \frac{1}{N} \left\langle \sum_{i \neq j} \mathrm{e}^{-i\mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \right\rangle,$$
and thus:
$$S(\mathbf{q}) = 1 + \rho \int_V \mathrm{e}^{-i\mathbf{q} \cdot \mathbf{r}} \, g(\mathbf{r}) \, \mathrm{d}\mathbf{r},$$
proving the Fourier relation alluded to above.
This equation is only valid in the sense of distributions, since $g(\mathbf{r})$ is not normalized: $\lim_{r \to \infty} g(\mathbf{r}) = 1$, so that $\int_V g(\mathbf{r}) \, \mathrm{d}\mathbf{r}$ diverges as the volume $V$, leading to a Dirac peak at the origin for the structure factor. Since this contribution is inaccessible experimentally we can subtract it from the equation above and redefine the structure factor as a regular function:
$$S'(\mathbf{q}) = S(\mathbf{q}) - \rho \, \delta(\mathbf{q}) = 1 + \rho \int_V \left[ g(\mathbf{r}) - 1 \right] \mathrm{e}^{-i\mathbf{q} \cdot \mathbf{r}} \, \mathrm{d}\mathbf{r}.$$
Finally, we rename $S(\mathbf{q}) \equiv S'(\mathbf{q})$ and, if the system is a liquid, we can invoke its isotropy:
$$S(q) = 1 + \rho \int_V \left[ g(r) - 1 \right] \mathrm{e}^{-i\mathbf{q} \cdot \mathbf{r}} \, \mathrm{d}\mathbf{r} = 1 + 4\pi\rho \int_0^\infty \left[ g(r) - 1 \right] \frac{\sin(qr)}{qr} \, r^2 \, \mathrm{d}r.$$
Compressibility equation
Evaluating the structure factor relation at $\mathbf{q} = 0$ and using the relation between the isothermal compressibility $\chi_T$ and the structure factor at the origin yields the compressibility equation:
$$\rho \, k_\mathrm{B} T \, \chi_T = k_\mathrm{B} T \left( \frac{\partial \rho}{\partial p} \right)_T = 1 + \rho \int_V \left[ g(r) - 1 \right] \mathrm{d}\mathbf{r}.$$
Potential of mean force
It can be shown that the radial distribution function is related to the two-particle potential of mean force $w^{(2)}(r)$ by:
$$g(r) = \mathrm{e}^{-\beta w^{(2)}(r)}.$$
In the dilute limit, the potential of mean force is the exact pair potential under which the equilibrium point configuration has a given $g(r)$.
Energy equation
If the particles interact via identical pairwise potentials, $U_N = \sum_{i > j} u\!\left( \left| \mathbf{r}_i - \mathbf{r}_j \right| \right)$, the average internal energy per particle is:
$$\frac{\langle E \rangle}{N} = \frac{3}{2} k_\mathrm{B} T + \frac{\rho}{2} \int_0^\infty u(r) \, g(r, \rho, T) \, 4\pi r^2 \, \mathrm{d}r.$$
Pressure equation of state
Developing the virial equation yields the pressure equation of state:
$$p = \rho k_\mathrm{B} T - \frac{\rho^2}{6} \int_0^\infty r \, \frac{\mathrm{d}u(r)}{\mathrm{d}r} \, g(r, \rho, T) \, 4\pi r^2 \, \mathrm{d}r.$$
Thermodynamic properties in 3D
The radial distribution function is an important measure because several key thermodynamic properties, such as potential energy and pressure can be calculated from it.
For a 3-D system where particles interact via pairwise potentials, the potential energy of the system can be calculated as follows:
$$\langle U \rangle = \frac{N \rho}{2} \int_0^\infty u(r) \, g(r) \, 4\pi r^2 \, \mathrm{d}r,$$
where $N$ is the number of particles in the system, $\rho$ is the number density, and $u(r)$ is the pair potential.
The pressure of the system can also be calculated by relating the second virial coefficient to $g(r)$. The pressure can be calculated as follows:
$$p = \rho k_\mathrm{B} T - \frac{2\pi}{3} \rho^2 \int_0^\infty \frac{\mathrm{d}u(r)}{\mathrm{d}r} \, g(r) \, r^3 \, \mathrm{d}r.$$
Note that the results of potential energy and pressure will not be as accurate as directly calculating these properties, because of the averaging involved with the calculation of $g(r)$.
Approximations
For dilute systems (e.g. gases), the correlations in the positions of the particles that $g(r)$ accounts for are only due to the potential $u(r)$ engendered by the reference particle, neglecting indirect effects. In the first approximation, it is thus simply given by the Boltzmann distribution law:
$$g(r) = \mathrm{e}^{-\beta u(r)}.$$
If $u(r)$ were zero for all $r$ (i.e., if the particles did not exert any influence on each other), then $g(r) = 1$ for all $r$, and the mean local density would be equal to the mean density $\rho$: the presence of a particle at O would not influence the particle distribution around it and the gas would be ideal. For distances $r$ such that $u(r)$ is significant, the mean local density will differ from the mean density $\rho$, depending on the sign of $u(r)$ (higher for negative interaction energy and lower for positive $u(r)$).
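As an illustration of this low-density limit, the sketch below evaluates $g(r) = \mathrm{e}^{-\beta u(r)}$ for a Lennard-Jones pair potential in reduced units; the temperature chosen is an arbitrary example value.
```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

kT = 1.5                        # reduced temperature, arbitrary example
r = np.linspace(0.8, 3.0, 12)
g_dilute = np.exp(-lj(r) / kT)  # low-density approximation g(r) = exp(-u/kT)

for ri, gi in zip(r, g_dilute):
    print(f"r = {ri:4.2f}   g(r) = {gi:5.3f}")
```
The repulsive core (u large and positive) drives g toward zero at small r, while the attractive well produces a peak with g greater than 1 near the potential minimum.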
As the density of the gas increases, the low-density limit becomes less and less accurate, since a particle situated in $\mathbf{r}$ experiences not only the interaction with the particle at O but also with the other neighbours, themselves influenced by the reference particle. This mediated interaction increases with the density, since there are more neighbours to interact with: it makes physical sense to write a density expansion of $g(r)$, which resembles the virial equation:
$$g(r) = \mathrm{e}^{-\beta u(r)} \, y(r, \rho, T), \qquad y(r, \rho, T) = 1 + \sum_{n \geq 1} \rho^n \, y_n(r, T).$$
This similarity is not accidental; indeed, substituting this expansion in the relations above for the thermodynamic parameters (the compressibility, energy, and pressure equations) yields the corresponding virial expansions. The auxiliary function $y(r) = \mathrm{e}^{\beta u(r)} \, g(r)$ is known as the cavity distribution function. It has been shown that for classical fluids at a fixed density and a fixed positive temperature, the effective pair potential that generates a given $g(r)$ under equilibrium is unique up to an additive constant, if it exists.
In recent years, some attention has been given to develop pair correlation functions for spatially-discrete data such as lattices or networks.
Experimental
One can determine $g(r)$ indirectly (via its relation with the structure factor $S(q)$) using neutron scattering or X-ray scattering data. The technique can be used at very short length scales (down to the atomic level) but involves significant space and time averaging (over the sample size and the acquisition time, respectively). In this way, the radial distribution function has been determined for a wide variety of systems, ranging from liquid metals to charged colloids. Going from the experimental $S(q)$ to $g(r)$ is not straightforward and the analysis can be quite involved.
It is also possible to calculate $g(r)$ directly by extracting particle positions from traditional or confocal microscopy. This technique is limited to particles large enough for optical detection (in the micrometer range), but it has the advantage of being time-resolved so that, aside from static information, it also gives access to dynamical parameters (e.g. diffusion constants) and is space-resolved (to the level of the individual particle), allowing it to reveal the morphology and dynamics of local structures in colloidal crystals, glasses, and gels, as well as hydrodynamic interactions.
Direct visualization of a full (distance-dependent and angle-dependent) pair correlation function was achieved by a scanning tunneling microscopy in the case of 2D molecular gases.
Higher-order correlation functions
It has been noted that radial distribution functions alone are insufficient to characterize structural information. Distinct point processes may possess identical or practically indistinguishable radial distribution functions, known as the degeneracy problem. In such cases, higher order correlation functions are needed to further describe the structure.
Higher-order distribution functions, with $n > 2$, have been less studied, since they are generally less important for the thermodynamics of the system; at the same time, they are not accessible by conventional scattering techniques. They can however be measured by coherent X-ray scattering and are interesting insofar as they can reveal local symmetries in disordered systems.
See also
Ornstein–Zernike equation
Structure Factor
References
Widom, B. (2002). Statistical Mechanics: A Concise Introduction for Chemists. Cambridge University Press.
McQuarrie, D. A. (1976). Statistical Mechanics. Harper Collins Publishers.
Statistical mechanics
Mechanics
Physical chemistry | Radial distribution function | [
"Physics",
"Chemistry",
"Engineering"
] | 2,457 | [
"Applied and interdisciplinary physics",
"Mechanics",
"nan",
"Mechanical engineering",
"Statistical mechanics",
"Physical chemistry"
] |
4,539,079 | https://en.wikipedia.org/wiki/Underactuation | Underactuation is a technical term used in robotics and control theory to describe mechanical systems that cannot be commanded to follow arbitrary trajectories in configuration space. This condition can occur for a number of reasons, the simplest of which is when the system has a lower number of actuators than degrees of freedom. In this case, the system is said to be trivially underactuated.
The class of underactuated mechanical systems is very rich and includes such diverse members as automobiles, airplanes, and even animals.
Definition
To understand the mathematical conditions which lead to underactuation, one must examine the dynamics that govern the systems in question. Newton's laws of motion dictate that the dynamics of mechanical systems are inherently second order. In general, these dynamics can be described by a second-order differential equation:
$$\ddot{q} = f(q, \dot{q}, u, t)$$
where:
$q$ is the position state vector, $u$ is the vector of control inputs, and $t$ is time.
Furthermore, in many cases the dynamics for these systems can be rewritten to be affine in the control inputs:
$$\ddot{q} = f_1(q, \dot{q}, t) + f_2(q, \dot{q}, t) \, u.$$
When expressed in this form, the system is said to be underactuated if:
$$\operatorname{rank}\!\left( f_2(q, \dot{q}, t) \right) < \dim\!\left( q \right).$$
When this condition is met, there are acceleration directions that cannot be produced no matter what the control vector $u$ is.
Note that $\operatorname{rank}(f_2)$ does not explicitly represent the number of actuators present in the system. Indeed, there may be more actuators than degrees of freedom and the system may still be underactuated. Also worth noting is the dependence of $f_2$ on the state $(q, \dot{q})$. That is, there may exist states in which an otherwise fully actuated system becomes underactuated.
Examples
The classic inverted pendulum is an example of a trivially underactuated system: it has two degrees of freedom (one for its support's motion in the horizontal plane, and one for the angular motion of the pendulum), but only one of them (the cart position) is actuated, and the other is only indirectly controlled. Although naturally extremely unstable, this underactuated system is still controllable.
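To make the cart-pole example concrete, here is a minimal numerical sketch (with arbitrary example parameters, not drawn from any cited source) showing that the control input enters through a rank-1 matrix while the system has two degrees of freedom:
```python
import numpy as np

# Cart-pole in manipulator form, M(q) qdd + C(q, qd) qd + G(q) = B u,
# with q = (cart position x, pendulum angle theta) and a single force u
# applied to the cart. Parameter values are arbitrary example numbers.
m_c, m_p, l = 1.0, 0.1, 0.5   # cart mass, pole mass, pole length
theta = 0.1                   # an example configuration, rad

M = np.array([[m_c + m_p,               m_p * l * np.cos(theta)],
              [m_p * l * np.cos(theta), m_p * l**2             ]])
B = np.array([[1.0],
              [0.0]])  # the force acts only on the cart coordinate

# Acceleration directions reachable through u: qdd = M^-1 B u (plus drift).
f2 = np.linalg.inv(M) @ B
print("rank(f2) =", np.linalg.matrix_rank(f2), "< dim(q) = 2")
```
Only a one-dimensional family of accelerations can be commanded at any instant; the pendulum angle is influenced solely through the dynamic coupling in M.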
A standard automobile is underactuated due to the nonholonomic constraints imposed by the wheels. That is, a car cannot accelerate in a direction perpendicular to the direction the wheels are facing. A similar argument can be made for boats, planes and most other vehicles.
See also
Passive dynamics
References
Further reading
M. Saliba, and C.W. de Silva, "An Innovative Robotic Gripper for Grasping and Handling Research," IEEE Journal of Robotics and Automation, pp. 975–979, 1991.
N. Dechev, W.L. Cleghorn, and S. Naumann, “Multiple Finger, Passive Adaptive Grasp Prosthetic Hand,” Journal of Mechanism and Machine Theory, Vol. 36, No. 10, pp. 1157–1173, 2001.
External links
Canudas-de-Wit, C. On the concept of virtual constraints as a tool for walking robot control and balancing Annual Reviews in Control, 28 (2004), pp. 157–166. (Elsevier)
Nonlinear Systems College of Mechanical and Nuclear Engineering, Kansas State University
Robot control
Control theory | Underactuation | [
"Mathematics",
"Engineering"
] | 627 | [
"Robotics engineering",
"Applied mathematics",
"Control theory",
"Robot control",
"Dynamical systems"
] |
4,539,992 | https://en.wikipedia.org/wiki/Pim%20weight | Pim weights were polished stones about 15 mm (5/8 inch) diameter, equal to about two-thirds of a Hebrew shekel. Many specimens have been found since their initial discovery early in the 20th century, and each one weighs about 7.6 grams, compared to 11.5 grams of a shekel. Its name comes from the inscription seen across the top of its dome shape: the Phoenician letters 𐤐𐤉𐤌 (Hebrew , transliterated pym).
Impact
Prior to the discovery of the weights by archaeologists, scholars did not know how to translate the word (pîm) in 1 Samuel 13:21. Robert Alexander Stewart Macalister's excavations at Gezer (1902-1905 and 1907-1909) were published in 1912 with an illustration showing one such weight, which Macalister compared to another published in 1907 by Charles Simon Clermont-Ganneau.
Here is the 1611 translation of the King James Version of the Bible:
Yet they had a file for the mattocks, and for the coulters, and for the forks, and for the axes, and to sharpen the goads.
The 1982 New King James Version rendered it:
And the charge for a sharpening was a pim for the plowshares, the mattocks, the forks, and the axes, and to set the points of the goads.
Photos
See also
Ancient Hebrew units of measurement
Biblical archaeology
List of artifacts significant to the Bible
References
Sources
1907 archaeological discoveries
Archaeological artefact types
Mass
Gezer
Archaeological discoveries in Israel
Shekel | Pim weight | [
"Physics",
"Mathematics"
] | 313 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Wikipedia categories named after physical quantities",
"Matter"
] |
1,157,333 | https://en.wikipedia.org/wiki/Evolutionary%20medicine | Evolutionary medicine or Darwinian medicine is the application of modern evolutionary theory to understanding health and disease. Modern biomedical research and practice have focused on the molecular and physiological mechanisms underlying health and disease, while evolutionary medicine focuses on the question of why evolution has shaped these mechanisms in ways that may leave us susceptible to disease. The evolutionary approach has driven important advances in the understanding of cancer, autoimmune disease, and anatomy. Medical schools have been slower to integrate evolutionary approaches because of limitations on what can be added to existing medical curricula. The International Society for Evolution, Medicine and Public Health coordinates efforts to develop the field. It owns the Oxford University Press journal Evolution, Medicine and Public Health and The Evolution and Medicine Review.
Core principles
Utilizing the Delphi method, 56 experts from a variety of disciplines, including anthropology, medicine, nursing, and biology agreed upon 14 core principles intrinsic to the education and practice of evolutionary medicine. These 14 principles can be further grouped into five general categories: question framing, evolution I and II (with II involving a higher level of complexity), evolutionary trade-offs, reasons for vulnerability, and culture. Additional information regarding these principles may be found in the table below.
Human adaptations
Adaptation works within constraints, makes compromises and trade-offs, and occurs in the context of different forms of competition.
Constraints
Adaptations can only occur if they are evolvable. Some adaptations which would prevent ill health are therefore not possible.
DNA cannot be totally prevented from undergoing somatic replication corruption; this has meant that cancer, which is caused by somatic mutations, has not (so far) been eliminated by natural selection.
Humans cannot biosynthesize vitamin C, and so risk scurvy, vitamin C deficiency disease, if dietary intake of the vitamin is insufficient.
Retinal neurons and their axon output have evolved to be inside the layer of retinal pigment cells. This creates a constraint on the evolution of the visual system such that the optic nerve is forced to exit the retina through a point called the optic disc. This, in turn, creates a blind spot. More importantly, it makes vision vulnerable to increased pressure within the eye (glaucoma) since this cups and damages the optic nerve at this point, resulting in impaired vision.
Other constraints occur as the byproduct of adaptive innovations.
Trade-offs and conflicts
One constraint upon selection is that different adaptations can conflict, which requires a compromise between them to ensure an optimal cost-benefit tradeoff.
Running efficiency in women, and birth canal size
Encephalization, and gut size
Skin pigmentation protection from UV, and the skin synthesis of vitamin D
Speech and its use of a descended larynx, and increased risk of choking
Competition effects
Different forms of competition exist and these can shape the processes of genetic change.
mate choice and disease susceptibility
genomic conflict between mother and fetus that results in pre-eclampsia
Lifestyle
Humans evolved to live as simple hunter-gatherers in small tribal bands, while contemporary humans have a more complex life. This change may make present-day humans susceptible to lifestyle diseases.
Diet
In contrast to the diet of early hunter-gatherers, the modern Western diet often contains high quantities of fat, salt, and simple carbohydrates, such as refined sugars and flours.
Trans fat health risks
Dental caries
High GI foods
Modern diet based on "common wisdom" regarding diets in the paleolithic era
Among different countries, the incidence of colon cancer varies widely, and the extent of exposure to a Western pattern diet may be a factor in cancer incidence.
Life expectancy
Examples of aging-associated diseases are atherosclerosis and cardiovascular disease, cancer, arthritis, cataracts, osteoporosis, type 2 diabetes, hypertension and Alzheimer's disease. The incidence of all of these diseases increases rapidly with aging (increases exponentially with age, in the case of cancer).
Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%.
Exercise
Many contemporary humans engage in little physical exercise compared to the physically active lifestyles of ancestral hunter-gatherers. Prolonged periods of inactivity may have only occurred in early humans following illness or injury, so a modern sedentary lifestyle may continuously cue the body to trigger life preserving metabolic and stress-related responses such as inflammation, and some theorize that this causes chronic diseases.
Cleanliness
Contemporary humans in developed countries are mostly free of parasites, particularly intestinal ones. This is largely due to frequent washing of clothing and the body, and improved sanitation. Although such hygiene can be very important when it comes to maintaining good health, it can be problematic for the proper development of the immune system. The hygiene hypothesis is that humans evolved to be dependent on certain microorganisms that help establish the immune system, and modern hygiene practices can prevent necessary exposure to these microorganisms. "Microorganisms and macroorganisms such as helminths from mud, animals, and feces play a critical role in driving immunoregulation" (Rook, 2012). Essential microorganisms play a crucial role in building and training immune functions that fight off and repel some diseases, and protect against excessive inflammation, which has been implicated in several diseases. For instance, recent studies have found evidence supporting inflammation as a contributing factor in Alzheimer's Disease.
Specific explanations
This is a partial list: all links here go to a section describing or debating its evolutionary origin.
Life stage related
Adipose tissue in human infants
Arthritis and other chronic inflammatory diseases
Ageing
Alzheimer disease
Childhood
Menarche
Menopause
Menstruation
Morning sickness
Other
Atherosclerosis
Arthritis and other chronic inflammatory diseases
Cough
Cystic fibrosis
Dental occlusion
Diabetes Type II
Diarrhea
Essential hypertension
Fever
Gestational hypertension
Gout
Iron deficiency (paradoxical benefits)
Obesity
Phenylketonuria
Placebos
Osteoporosis
Red blood cell polymorphism disorders
Sickle cell anemia
Sickness behavior
Women's reproductive cancers
Evolutionary psychology
As noted in the table below, adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies with evolutionary perspectives on medicine and physiological dysfunctions (see in particular, Randy Nesse and George C. Williams' book Why We Get Sick).
Evolutionary psychiatrists and psychologists suggest that some mental disorders likely have multiple causes.
See several topic areas, and the associated references, below.
Agoraphobia
Anxiety
Depression
Drug abuse
Schizophrenia
Unhappiness
History
Charles Darwin did not discuss the implications of his work for medicine, though biologists quickly appreciated the germ theory of disease and its implications for understanding the evolution of pathogens, as well as an organism's need to defend against them.
Medicine, in turn, ignored evolution, and instead focused (as done in the hard sciences) upon proximate mechanical causes.
George C. Williams was the first to apply evolutionary theory to health in the context of senescence. Also in the 1950s, John Bowlby approached the problem of disturbed child development from an evolutionary perspective upon attachment.
An important theoretical development was Nikolaas Tinbergen's distinction made originally in ethology between evolutionary and proximate mechanisms.
Randolph M. Nesse summarizes its relevance to medicine:
The paper of Paul Ewald in 1980, "Evolutionary Biology and the Treatment of Signs and Symptoms of Infectious Disease", and that of Williams and Nesse in 1991, "The Dawn of Darwinian Medicine", were key developments. The latter paper drew a favorable reception and led to a book, Why We Get Sick (published as Evolution and Healing in the UK). In 2008, an online journal started: Evolution and Medicine Review.
In 2000, Paul Sherman hypothesised that morning sickness could be an adaptation that protects the developing fetus from foodborne illnesses, some of which can cause miscarriage or birth defects, such as listeriosis and toxoplasmosis.
See also
Evolutionary therapy
Evolutionary psychiatry
Evolutionary physiology
Evolutionary psychology
Evolutionary developmental psychopathology
Evolutionary approaches to depression
Illness
Paleolithic lifestyle
Universal Darwinism
References
Further reading
Books
Online articles
External links
Evolution and Medicine Network
Special Issue of Evolutionary Applications on Evolutionary Medicine
Evolutionary biology
Clinical medicine | Evolutionary medicine | [
"Biology"
] | 1,706 | [
"Evolutionary biology"
] |
1,157,354 | https://en.wikipedia.org/wiki/Mixed-signal%20integrated%20circuit | A mixed-signal integrated circuit is any integrated circuit that has both analog circuits and digital circuits on a single semiconductor die. Their usage has grown dramatically with the increased use of cell phones, telecommunications, portable electronics, and automobiles with electronics and digital sensors.
Overview
Integrated circuits (ICs) are generally classified as digital (e.g. a microprocessor) or analog (e.g. an operational amplifier). Mixed-signal ICs contain both digital and analog circuitry on the same chip, and sometimes embedded software. Mixed-signal ICs process both analog and digital signals together. For example, an analog-to-digital converter (ADC) is a typical mixed-signal circuit.
Mixed-signal ICs are often used to convert analog signals to digital signals so that digital devices can process them. For example, mixed-signal ICs are essential components for FM tuners in digital products such as media players, which have digital amplifiers. Any analog signal can be digitized using a very basic ADC, and the smallest and most energy efficient of these are mixed-signal ICs.
Mixed-signal ICs are more difficult to design and manufacture than analog-only or digital-only integrated circuits. For example, an efficient mixed-signal IC may have its digital and analog components share a common power supply. However, analog and digital components have very different power needs and consumption characteristics, which makes this a non-trivial goal in chip design.
Mixed-signal functionality involves both traditional active elements (like transistors) and well-performing passive elements (like coils, capacitors, and resistors) on the same chip. This requires additional modelling understanding and options from manufacturing technologies. High voltage transistors might be needed in the power management functions on a chip with digital functionality, possibly with a low-power CMOS processor system. Some advanced mixed-signal technologies may enable combining analog sensor elements (like pressure sensors or imaging diodes) on the same chip with an ADC.
Typically, mixed-signal ICs do not necessarily need the fastest digital performance. Instead, they need more mature models of active and passive elements for more accurate simulations and verification, such as for testability planning and reliability estimations. Therefore, mixed-signal circuits are typically realized with larger line widths than the highest speed and densest digital logic, and the implementation technologies can be two to four generations behind the latest digital-only implementation technologies. Additionally, mixed signal processing may need passive elements like resistors, capacitors, and coils, which may require specialized metal, dielectric layers, or similar adaptations of standard fabrication processes. Because of these specific requirements, mixed-signal ICs and digital ICs can have different manufacturers (known as foundries).
Applications
There are numerous applications of mixed-signal integrated circuits, such as in mobile phones, modern radio and telecommunication systems, sensor systems with on-chip standardized digital interfaces (including I2C, UART, SPI, or CAN), voice-related signal processing, aerospace and space electronics, the Internet of things (IoT), unmanned aerial vehicles (UAVs), and automotive and other electrical vehicles. Mixed-signal circuits or systems are typically cost-effective solutions, such as for building modern consumer electronics and in industrial, medical, measurement, and space applications.
Examples of mixed-signal integrated circuits include data converters using delta-sigma modulation, analog-to-digital converters and digital-to-analog converters using error detection and correction, and digital radio chips. Digitally controlled sound chips are also mixed-signal circuits. With the advent of cellular and network technology, this category now includes cellular telephone, software radio, and LAN and WAN router integrated circuits.
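As an illustrative sketch of the delta-sigma principle mentioned above, the following models a first-order delta-sigma modulator in discrete time: the 1-bit output toggles so that its running average tracks the analog input. This is a generic textbook model, not any particular chip's implementation.
```python
import numpy as np

def delta_sigma(x):
    """First-order delta-sigma modulator (inputs in [-1, 1], 1-bit output)."""
    v, y = 0.0, 0.0             # integrator state and previous output bit
    bits = np.empty(len(x))
    for i, sample in enumerate(x):
        v += sample - y                # integrate input-minus-feedback error
        y = 1.0 if v >= 0 else -1.0    # 1-bit quantizer
        bits[i] = y
    return bits

# Oversampled sine input: the average of the bitstream follows the input.
n = np.arange(4096)
x = 0.5 * np.sin(2 * np.pi * n / 512)
bits = delta_sigma(x)
# A moving average (a crude decimation filter) recovers the waveform.
recovered = np.convolve(bits, np.ones(32) / 32, mode="same")
```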
Design and development
Typically, mixed-signal chips perform some whole function or sub-function in a larger assembly, such as the radio subsystem of a cell phone, or the read data path and laser SLED control logic of a DVD player. Mixed-signal ICs often contain an entire system-on-a-chip. They may also contain on-chip memory blocks (like OTP), which complicates the manufacturing compared to analog ICs. A mixed-signal IC minimizes off-chip interconnects between digital and analog functionality in the system—typically reducing size and weight due to minimized packaging and a smaller module substrate—and therefore increases the reliability of the system.
Because of the use of both digital signal processing and analog circuitry, mixed-signal ICs are usually designed for a very specific purpose. Their design requires a high level of expertise and careful use of computer aided design (CAD) tools. There also exists specific design tools (like mixed-signal simulators) or description languages (like VHDL-AMS). Automated testing of the finished chips can also be challenging. Teradyne, Keysight, and Advantest are the major suppliers of the test equipment for mixed-signal chips.
There are several particular challenges of mixed-signal circuit manufacturing:
CMOS technology is usually optimal for digital performance, while bipolar junction transistors are usually optimal for analog performance. However, until the last decade, it was difficult to combine these cost-effectively or to design both in a single technology without serious performance compromises. The advent of technologies like high performance CMOS, BiCMOS, CMOS SOI, and SiGe have removed many of these former compromises.
Testing functional operation of mixed-signal ICs remains complex, expensive, and often is a "one-off" implementation task (meaning a lot of work is necessary for a product with a single, specific use).
Systematic design methods of analog and mixed-signal circuits are far more primitive than digital circuits. In general, analog circuit design cannot be automated to nearly the extent that digital circuit design can. Combining the two technologies multiplies this complication.
Fast-changing digital signals send noise to sensitive analog inputs. One path for this noise is substrate coupling. A variety of techniques are used to attempt to block or cancel this noise coupling, such as fully differential amplifiers, P+ guard-rings, differential topology, on-chip decoupling, and triple-well isolation.
Variations
Mixed-signal devices are available as standard parts, but sometimes custom-designed application-specific integrated circuits (ASICs) are necessary. ASICs are designed for new applications, when new standards emerge, or when new energy source(s) are implemented in the system. Due to their specialization, ASICs are usually only developed when production volumes are estimated to be high. The availability of ready-and-tested analog- and mixed-signal IP blocks from foundries or dedicated design houses has lowered the gap to realize mixed-signal ASICs.
There also exist mixed-signal field-programmable gate arrays (FPGAs) and microcontrollers. In these, the same chip that handles digital logic may contain mixed-signal structures like analog-to-digital and digital-to-analog converter(s), operational amplifiers, or wireless connectivity blocks. These mixed-signal FPGAs and microcontrollers are bridging the gap between standard mixed-signal devices, full-custom ASICs, and embedded software; they offer a solution during product development or when product volume is too low to justify an ASIC. However, they can have performance limitations, such as the resolution of the analog-to-digital converters, the speed of digital-to-analog conversion, or a limited number of inputs and outputs. Nevertheless, they can speed up the system architecture design, prototyping, and even production (at small and medium scales). Their usage also can be supported with development boards, development community, and possibly software support.
History
MOS switched-capacitor circuits
The MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered surface passivation by silicon dioxide and used it to create the first planar transistors, in which the drain and source were adjacent at the same surface. Robert Noyce's and Jack Kilby's invention of the silicon integrated circuit was enabled by the planar process developed by Jean Hoerni, which was in turn inspired by the surface passivation method developed at Bell Labs by Carl Frosch and Lincoln Derick in 1955 and 1957.
MOS technology eventually became practical for telephony applications with the MOS mixed-signal integrated circuit, which combines analog and digital signal processing on a single chip, developed by former Bell engineer David A. Hodges with Paul R. Gray at UC Berkeley in the early 1970s. In 1974, Hodges and Gray worked with R.E. Suarez to develop MOS switched capacitor (SC) circuit technology, which they used to develop a digital-to-analog converter (DAC) chip, using MOS capacitors and MOSFET switches for data conversion. MOS analog-to-digital converter (ADC) and DAC chips were commercialized by 1974.
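To illustrate the charge-redistribution principle behind such switched-capacitor converters, the sketch below models an ideal binary-weighted capacitor DAC. The resolution and reference voltage are arbitrary example values, not parameters of the 1974 chip.
```python
def sc_dac_output(code, n_bits=8, v_ref=1.0):
    """Ideal output of a binary-weighted switched-capacitor DAC.

    Bit k of the input code switches a capacitor of relative size 2^k
    between ground and v_ref; charge redistribution onto the summing node
    then gives V_out = v_ref * (switched capacitance / total capacitance).
    """
    assert 0 <= code < 2 ** n_bits
    c_total = 2 ** n_bits   # total array capacitance, in unit capacitors
    c_switched = code       # sum of the binary-weighted caps tied to v_ref
    return v_ref * c_switched / c_total

print(sc_dac_output(0b10000000))  # mid-scale code gives 0.5 * v_ref
```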
MOS SC circuits led to the development of pulse-code modulation (PCM) codec-filter chips in the late 1970s. The silicon-gate CMOS (complementary MOS) PCM codec-filter chip, developed by Hodges and W.C. Black in 1980, has since been the industry standard for digital telephony. By the 1990s, telecommunication networks such as the public switched telephone network (PSTN) had been largely digitized with very-large-scale integration (VLSI) CMOS PCM codec-filters, widely used in electronic switching systems for telephone exchanges, private branch exchanges (PBX), and key telephone systems (KTS); user-end modems; data transmission applications such as digital loop carriers, pair gain multiplexers, telephone loop extenders, integrated services digital network (ISDN) terminals, digital cordless telephones, and digital cell phones; and applications such as speech recognition equipment, voice data storage, voice mail, and digital tapeless answering machines. The bandwidth of digital telecommunication networks has been rapidly increasing at an exponential rate, as observed by Edholm's law, largely driven by the rapid scaling and miniaturization of MOS technology.
RF CMOS circuits
While working at Bell Labs in the early 1980s, Pakistani engineer Asad Abidi worked on the development of sub-micron MOSFET (metal–oxide–semiconductor field-effect transistor) VLSI (very large-scale integration) technology at the Advanced LSI Development Lab, along with Marty Lepselter, George E. Smith, and Harry Bol. As one of the few circuit designers at the lab, Abidi demonstrated the potential of sub-micron NMOS integrated circuit technology in high-speed communication circuits, and developed the first MOS amplifiers for Gb/s data rates in optical fiber receivers. Abidi's work was initially met with skepticism from proponents of gallium arsenide and bipolar junction transistors, the dominant technologies for high-speed circuits at the time. In 1985, he joined UCLA, where he pioneered RF CMOS technology in the late 1980s. His work changed the way in which radio-frequency (RF) circuits would be designed, away from discrete bipolar transistors and towards CMOS integrated circuits.
Abidi was researching analog CMOS circuits for signal processing and communications during the late 1980s to early 1990s. In the mid-1990s, the RF CMOS technology that he pioneered was widely adopted in wireless networking, as mobile phones began entering widespread use. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices.
Baseband processors in modern wireless devices are likewise mass-produced using CMOS technology. RF CMOS circuits are widely used to transmit and receive wireless signals in a variety of applications, such as satellite technology (including GPS), Bluetooth, Wi-Fi, near-field communication (NFC), mobile networks (such as 3G, 4G, and 5G), terrestrial broadcast, and automotive radar. RF CMOS technology is crucial to modern wireless communications, including wireless networks and mobile communication devices.
Commercial examples
Examples of mixed-signal design houses and resources:
AnSem
CoreHW
EnSilica
ICsense
Presto Engineering
Sondrel
System to ASIC
Triad Semiconductor
Examples of mixed signal FPGAs and microcontrollers:
Analog Devices CM4xx Mixed-Signal Control Processors
Fusion FPGA (from Microsemi, now part of Microchip Technology)
Cypress PSoC – "programmable system on chip", a product from Infineon Technologies (former Cypress Semiconductor)
Texas Instruments' MSP430
Xilinx mixed signal FPGA
Examples of mixed signal foundries:
GlobalFoundries
New Japan Radio
Tower Semiconductor Ltd
X-Fab
List of sound chips
Yamaha FM synthesis sound chips
POKEY
MOS Technology SID
See also
Analog front-end
RFIC
Notes
References
Further reading
http://CMOSedu.com/
Electronic design | Mixed-signal integrated circuit | [
"Engineering"
] | 2,703 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
1,157,422 | https://en.wikipedia.org/wiki/Virial%20expansion | The virial expansion is a model of thermodynamic equations of state. It expresses the pressure $p$ of a gas in local equilibrium as a power series of the density $\rho$. This equation may be represented in terms of the compressibility factor, $Z$, as
$$Z \equiv \frac{p}{RT\rho} = A + B\rho + C\rho^2 + \cdots$$
This equation was first proposed by Kamerlingh Onnes. The terms $A$, $B$, and $C$ represent the virial coefficients. The leading coefficient $A$ is defined as the constant value of 1, which ensures that the equation reduces to the ideal gas expression as the gas density approaches zero.
Second and third virial coefficients
The second, $B$, and third, $C$, virial coefficients have been studied extensively and tabulated for many fluids for more than a century. Two of the most extensive compilations are in the books by Dymond and the National Institute of Standards and Technology's Thermo Data Engine Database and its Web Thermo Tables. Tables of second and third virial coefficients of many fluids are included in these compilations.
Casting equations of state into virial form
Most equations of state can be reformulated and cast in virial form to evaluate and compare their implicit second and third virial coefficients. The seminal van der Waals equation of state was proposed in 1873:
$$p = \frac{RT}{V_m - b} - \frac{a}{V_m^2},$$
where $V_m = 1/\rho$ is the molar volume. It can be rearranged by expanding $\frac{1}{V_m - b}$ into a Taylor series:
$$Z = 1 + \left( b - \frac{a}{RT} \right) \rho + b^2 \rho^2 + b^3 \rho^3 + \cdots$$
In the van der Waals equation, the second virial coefficient $B = b - \frac{a}{RT}$ has roughly the correct behavior, as it decreases monotonically when the temperature is lowered. The third and higher virial coefficients ($C = b^2$, $D = b^3$, and so on) are independent of temperature, and are not correct, especially at low temperatures.
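As a worked illustration, the sketch below evaluates the van der Waals second virial coefficient $B(T) = b - \frac{a}{RT}$. The constants a and b are approximate literature van der Waals values for nitrogen and should be treated as example inputs.
```python
R = 8.314    # gas constant, J/(mol K)
a = 0.1370   # Pa m^6 / mol^2, approximate van der Waals a for N2
b = 3.87e-5  # m^3 / mol, approximate van der Waals b for N2

for T in (100.0, 300.0, 500.0, 1000.0):
    B = b - a / (R * T)   # second virial coefficient, m^3/mol
    print(f"T = {T:6.0f} K   B = {B * 1e6:8.1f} cm^3/mol")

# B changes sign at the Boyle temperature, T_B = a / (R b).
print(f"Boyle temperature (vdW estimate): {a / (R * b):.0f} K")
```
The sign change with temperature reproduces the monotonic behavior described above: B is negative (attraction-dominated) at low temperature and positive at high temperature.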
Almost all subsequent equations of state derived from the van der Waals equation, like those of Dieterici, Berthelot, Redlich-Kwong, and Peng-Robinson, suffer from the singularity introduced by the term $\frac{1}{V_m - b}$.
Other equations of state, started by Beattie and Bridgeman, are more closely related to virial equations, and prove more accurate in representing the behavior of fluids in both gaseous and liquid phases. The Beattie-Bridgeman equation of state, proposed in 1928,
$$p = \frac{RT}{V_m^2} \left( 1 - \frac{c}{V_m T^3} \right) \left( V_m + B \right) - \frac{A}{V_m^2},$$
where
$$A = A_0 \left( 1 - \frac{a}{V_m} \right), \qquad B = B_0 \left( 1 - \frac{b}{V_m} \right),$$
can be rearranged as
$$Z = 1 + \left( B_0 - \frac{A_0}{RT} - \frac{c}{T^3} \right) \rho + \left( -B_0 b + \frac{A_0 a}{RT} - \frac{B_0 c}{T^3} \right) \rho^2 + \frac{B_0 b c}{T^3} \rho^3.$$
The Benedict-Webb-Rubin equation of state of 1940 better represents isotherms below the critical temperature:
$$p = \rho RT + \left( B_0 RT - A_0 - \frac{C_0}{T^2} \right) \rho^2 + \left( bRT - a \right) \rho^3 + a\alpha \rho^6 + \frac{c \rho^3}{T^2} \left( 1 + \gamma \rho^2 \right) \mathrm{e}^{-\gamma \rho^2}.$$
More improvements were achieved by Starling in 1972:
$$p = \rho RT + \left( B_0 RT - A_0 - \frac{C_0}{T^2} + \frac{D_0}{T^3} - \frac{E_0}{T^4} \right) \rho^2 + \left( bRT - a - \frac{d}{T} \right) \rho^3 + \alpha \left( a + \frac{d}{T} \right) \rho^6 + \frac{c \rho^3}{T^2} \left( 1 + \gamma \rho^2 \right) \mathrm{e}^{-\gamma \rho^2}.$$
Following are plots of reduced second and third virial coefficients against reduced temperature according to Starling:
The exponential terms in the last two equations correct the third virial coefficient so that the isotherms in the liquid phase can be represented correctly. The exponential term converges rapidly as $\rho$ increases: if only the first two terms in its Taylor series are taken, $1 - \gamma\rho^2$, and multiplied with $1 + \gamma\rho^2$, the result is $1 - \gamma^2\rho^4$. Together with the $\frac{c\rho^3}{T^2}$ prefactor, this contributes a $\frac{c}{T^2}$ term to the third virial coefficient, plus one term in $\rho^7$, which is of high enough order to be ignored.
After the expansion of the exponential terms, the Benedict-Webb-Rubin and Starling equations of state have this form:
Cubic virial equation of state
The three-term virial equation, or cubic virial equation of state,
$$Z = \frac{p}{RT\rho} = 1 + B\rho + C\rho^2,$$
has the simplicity of the van der Waals equation of state without its singularity at $V_m = b$. Theoretically, the second virial coefficient represents bimolecular attraction forces, and the third virial term represents the repulsive forces among three molecules in close contact.
With this cubic virial equation, the coefficients B and C can be solved in closed form. Imposing the critical conditions:
$$\left( \frac{\partial p}{\partial \rho} \right)_{T_c} = 0 \quad \text{and} \quad \left( \frac{\partial^2 p}{\partial \rho^2} \right)_{T_c} = 0,$$
the cubic virial equation can be solved to yield:
$$B = -V_c \quad \text{and} \quad C = \frac{V_c^2}{3},$$
where $V_c$ is the critical molar volume. $Z_c = \frac{p_c V_c}{R T_c}$ is therefore 0.333, compared to 0.375 from the van der Waals equation.
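The closed-form result above can be checked symbolically. The sketch below (illustrative, using sympy) imposes the two critical conditions on $p = RT\rho(1 + B\rho + C\rho^2)$ and recovers $B = -1/\rho_c = -V_c$, $C = 1/(3\rho_c^2) = V_c^2/3$, and $Z_c = 1/3$.
```python
import sympy as sp

rho, rho_c, R, T = sp.symbols("rho rho_c R T", positive=True)
B, C = sp.symbols("B C")

p = R * T * rho * (1 + B * rho + C * rho**2)  # cubic virial equation of state

# Critical point: first and second density derivatives of p vanish.
eqs = [sp.diff(p, rho).subs(rho, rho_c),
       sp.diff(p, rho, 2).subs(rho, rho_c)]
sol = sp.solve(eqs, [B, C], dict=True)[0]
print(sol)  # {B: -1/rho_c, C: 1/(3*rho_c**2)}

Zc = 1 + sol[B] * rho_c + sol[C] * rho_c**2   # Z at the critical point
print(sp.simplify(Zc))  # 1/3
```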
Between the critical point and the triple point is the saturation region of fluids. In this region, the gaseous phase coexists with the liquid phase under saturation pressure , and the saturation temperature . Under the saturation pressure, the liquid phase has a molar volume of , and the gaseous phase has a molar volume of . The corresponding molar densities are and . These are the saturation properties needed to compute second and third virial coefficients.
A valid equation of state must produce an isotherm which crosses the horizontal line of $p_s$ at $V_l$ and $V_g$ on the isotherm $T_s$. Under $p_s$ and $T_s$, gas is in equilibrium with liquid. This means that the $p\rho T$ isotherm has three roots at $p = p_s$. The cubic virial equation of state at $T_s$ is:

$$p_s = RT_s\left(\frac{1}{V} + \frac{B}{V^2} + \frac{C}{V^3}\right).$$

It can be rearranged as:

$$V^3 - \frac{RT_s}{p_s}V^2 - \frac{RT_s}{p_s}BV - \frac{RT_s}{p_s}C = 0.$$

The factor $RT_s/p_s$ is the volume of saturated gas according to the ideal gas law, and can be given a unique name, $V_{id}$:

$$V_{id} = \frac{RT_s}{p_s}.$$

In the saturation region, the cubic equation has three roots, and can be written alternatively as:

$$(V - V_l)(V - V_m)(V - V_g) = 0,$$

which can be expanded as:

$$V^3 - (V_l + V_m + V_g)V^2 + (V_lV_m + V_lV_g + V_mV_g)V - V_lV_mV_g = 0.$$

$V_m$ is the volume of an unstable state between $V_l$ and $V_g$. The cubic equations are identical. Therefore, by matching the quadratic terms in these equations, $V_m$ can be solved:

$$V_m = V_{id} - V_l - V_g.$$

From the linear terms, $B$ can be solved:

$$B = -\frac{V_lV_m + V_lV_g + V_mV_g}{V_{id}}.$$

And from the constant terms, $C$ can be solved:

$$C = \frac{V_lV_mV_g}{V_{id}}.$$
Since $V_l$, $V_g$, and $p_s$ have been tabulated for many fluids with $T_s$ as a parameter, $B$ and $C$ can be computed in the saturation region of these fluids. The results are generally in agreement with those computed from the Benedict-Webb-Rubin and Starling equations of state.
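As a worked illustration of this closed-form result, a small helper that turns tabulated saturation properties into $B$ and $C$ (a sketch; the function name and unit choices are ours):

```python
def virial_BC_from_saturation(Vl, Vg, ps, Ts, R=8.314):
    """Second and third virial coefficients from saturation data,
    using the three-root factorization of the cubic virial isotherm.

    Vl, Vg -- saturated liquid and gas molar volumes (m^3/mol)
    ps, Ts -- saturation pressure (Pa) and temperature (K)
    """
    V_id = R * Ts / ps          # ideal-gas volume of the saturated gas
    V_m = V_id - Vl - Vg        # unstable middle root of the isotherm
    B = -(Vl*V_m + Vl*Vg + V_m*Vg) / V_id
    C = Vl * V_m * Vg / V_id
    return B, C
```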
See also
Virial theorem
Statistical mechanics
Equation of state
References
Statistical mechanics | Virial expansion | [
"Physics"
] | 1,039 | [
"Statistical mechanics"
] |
1,157,585 | https://en.wikipedia.org/wiki/Washout%20%28erosion%29 | A washout is the sudden erosion of soft soil or other support surfaces by a gush of water, usually occurring during a heavy downpour of rain (a flash flood) or other stream flooding. These downpours may occur locally in a thunderstorm or over a large area, such as following the landfall of a tropical cyclone. If a washout occurs in a crater-like formation, it is called a sinkhole, and it usually involves a leaking or broken water main or sewerage pipes. Other types of sinkholes, such as collapsed caves, are not washouts.
Widespread washouts can occur in mountainous areas after heavy rains, even in normally dry ravines. A severe washout can become a landslide, or cause a dam break in an earthen dam. Like other forms of erosion, most washouts can be prevented by vegetation whose roots hold the soil and/or slow the flow of surface and underground water. Deforestation increases the risk of washouts. Retaining walls and culverts may be used to try to prevent washouts, although particularly severe washouts may even destroy these if they are not large or strong enough.
Effect on road and rail transport
In road and rail transport, a washout is the result of a natural disaster where the roadbed is eroded away by flowing water, usually as the result of a flood. When a washout destroys a railroad's right-of-way, the track is sometimes left suspended in midair across the newly formed gap, or it dips down into a ditch. This phenomenon is discussed in more detail under the term erosion. Bridges may collapse due to bridge scour around one or more bridge abutments or piers.
In 2004, the remnants of Hurricane Frances, and then Hurricane Ivan, caused a large number of washouts in western North Carolina and other parts of the southern Appalachian Mountains, closing some roads for days and parts of the Blue Ridge Parkway for months. Other washouts have also caused train wrecks where tracks have been unknowingly undermined. Motorists have also driven into flooded streams at night, unaware of a new washout on the road in front of them until it is too late to brake, sometimes prompting a high-water rescue.
Major washouts can also ruin pipelines or undermine utility poles or underground lines, interrupting public utilities.
See also
Bridge scour
Flash flood
Washaway
References
External links
Flood
Hydrology
Water
Weather hazards
Road hazards | Washout (erosion) | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 492 | [
"Physical phenomena",
"Hydrology",
"Weather hazards",
"Weather",
"Road hazards",
"Flood",
"Environmental engineering",
"Water"
] |
1,157,819 | https://en.wikipedia.org/wiki/Virial%20coefficient | Virial coefficients appear as coefficients in the virial expansion of the pressure of a many-particle system in powers of the density, providing systematic corrections to the ideal gas law. They are characteristic of the interaction potential between the particles and in general depend on the temperature. The second virial coefficient depends only on the pair interaction between the particles, the third () depends on 2- and non-additive 3-body interactions, and so on.
Derivation
The first step in obtaining a closed expression for virial coefficients is a cluster expansion of the grand canonical partition function

$$\Xi = \sum_n \lambda^n Q_n = e^{pV/(k_BT)}.$$

Here $p$ is the pressure, $V$ is the volume of the vessel containing the particles, $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, and $\lambda = \exp[\mu/(k_BT)]$ is the fugacity, with $\mu$ the chemical potential. The quantity $Q_n$ is the canonical partition function of a subsystem of $n$ particles:

$$Q_n = \operatorname{tr}\left[e^{-H(1,2,\ldots,n)/(k_BT)}\right].$$

Here $H(1,2,\ldots,n)$ is the Hamiltonian (energy operator) of a subsystem of $n$ particles. The Hamiltonian is a sum of the kinetic energies of the particles and the total $n$-particle potential energy (interaction energy). The latter includes pair interactions and possibly 3-body and higher-body interactions. The grand partition function $\Xi$ can be expanded in a sum of contributions from one-body, two-body, etc. clusters. The virial expansion is obtained from this expansion by observing that $\ln\Xi$ equals $pV/(k_BT)$. In this manner one derives

$$B = V\left(\frac{1}{2} - \frac{Q_2}{Q_1^2}\right), \qquad C = V^2\left[\frac{2Q_2}{Q_1^2}\left(\frac{2Q_2}{Q_1^2} - 1\right) - \frac{1}{3}\left(\frac{6Q_3}{Q_1^3} - 1\right)\right].$$

These are quantum-statistical expressions containing kinetic energies. Note that the one-particle partition function $Q_1$ contains only a kinetic energy term. In the classical limit the kinetic energy operators commute with the potential operators and the kinetic energies in numerator and denominator cancel mutually. The trace (tr) becomes an integral over the configuration space. It follows that classical virial coefficients depend on the interactions between the particles only and are given as integrals over the particle coordinates.
The derivation of virial coefficients higher than $C$ quickly becomes a complex combinatorial problem. Making the classical approximation and neglecting non-additive interactions (if present), the combinatorics can be handled graphically as first shown by Joseph E. Mayer and Maria Goeppert-Mayer.
They introduced what is now known as the Mayer function:

$$f(1,2) = \exp\!\left[-\frac{u(|\vec{r}_1 - \vec{r}_2|)}{k_BT}\right] - 1,$$

and wrote the cluster expansion in terms of these functions. Here $u(|\vec{r}_1 - \vec{r}_2|)$ is the interaction potential between particle 1 and 2 (which are assumed to be identical particles).
Definition in terms of graphs
The virial coefficients $B_i$ are related to the irreducible Mayer cluster integrals $\beta_i$ through

$$B_{i+1} = -\frac{i}{i+1}\beta_i.$$

The latter are concisely defined in terms of graphs.
The rule for turning these graphs into integrals is as follows:
Take a graph and label its white vertex by $0$ and the remaining black vertices with $1, 2, \ldots$.
Associate a coordinate $\vec{r}_k$ with each of the vertices $k$, representing the continuous degrees of freedom associated with that particle. The coordinate $\vec{r}_0$ is reserved for the white vertex.
With each bond linking two vertices $i$ and $j$, associate the Mayer f-function $f(i,j)$ corresponding to the interparticle potential.
Integrate over all coordinates assigned to the black vertices
Multiply the end result with the symmetry number of the graph, defined as the inverse of the number of permutations of the black labelled vertices that leave the graph topologically invariant.
The first two cluster integrals are

$$\beta_1 = \int f(0,1)\,d\vec{r}_1$$

(the graph with a single bond joining the white vertex to one black vertex), and

$$\beta_2 = \frac{1}{2}\int\!\!\int f(0,1)\,f(0,2)\,f(1,2)\,d\vec{r}_1\,d\vec{r}_2$$

(the triangle graph on the white vertex and two black vertices; its symmetry number is $1/2$).
The expression of the second virial coefficient is thus:

$$B = -\frac{1}{2}\beta_1 = -\frac{1}{2}\int \left(e^{-u(r)/(k_BT)} - 1\right)4\pi r^2\,dr,$$

where particle 2 was assumed to define the origin ($\vec{r}_2 = \vec{0}$).
This classical expression for the second virial coefficient was first derived by Leonard Ornstein in his 1908 Leiden University Ph.D. thesis.
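In the classical limit this expression is a one-dimensional integral that is easy to evaluate numerically. A minimal sketch for a Lennard-Jones pair potential in reduced units; the choice of potential and the cutoffs are our illustrative assumptions, not part of the derivation above:

```python
import numpy as np
from scipy.integrate import quad

def B2_lennard_jones(T):
    """B(T) = -1/2 * Int (exp(-u(r)/kT) - 1) 4 pi r^2 dr for the
    Lennard-Jones potential, in reduced units (k_B = eps = sigma = 1)."""
    def integrand(r):
        u = 4.0 * (r**-12 - r**-6)
        return (np.exp(-u / T) - 1.0) * 4.0 * np.pi * r**2
    val, _ = quad(integrand, 1e-8, 50.0, limit=200)
    return -0.5 * val

print(B2_lennard_jones(1.0))   # negative: attraction dominates
print(B2_lennard_jones(10.0))  # positive: repulsion dominates
```

Near the Boyle temperature (about 3.4 in these reduced units) the computed $B$ changes sign.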
See also
Boyle temperature – temperature at which the second virial coefficient vanishes
Excess property
Compressibility factor
References
Further reading
http://scitation.aip.org/content/aip/journal/jcp/50/10/10.1063/1.1670902
http://scitation.aip.org/content/aip/journal/jcp/50/11/10.1063/1.1670994
Reid, R. C., Prausnitz, J. M., Poling, B. E., The Properties of Gases and Liquids, 4th edition, McGraw-Hill, 1987
Statistical mechanics | Virial coefficient | [
"Physics"
] | 836 | [
"Statistical mechanics"
] |
1,157,887 | https://en.wikipedia.org/wiki/Supersymmetric%20quantum%20mechanics | In theoretical physics, supersymmetric quantum mechanics is an area of research where supersymmetry are applied to the simpler setting of plain quantum mechanics, rather than quantum field theory. Supersymmetric quantum mechanics has found applications outside of high-energy physics, such as providing new methods to solve quantum mechanical problems, providing useful extensions to the WKB approximation, and statistical mechanics.
Introduction
Understanding the consequences of supersymmetry (SUSY) has proven mathematically daunting, and it has likewise been difficult to develop theories that could account for symmetry breaking, i.e., the lack of observed partner particles of equal mass. To make progress on these problems, physicists developed supersymmetric quantum mechanics, an application of the supersymmetry superalgebra to quantum mechanics as opposed to quantum field theory. It was hoped that studying SUSY's consequences in this simpler setting would lead to new understanding; remarkably, the effort created new areas of research in quantum mechanics itself.
For example, students are typically taught to "solve" the hydrogen atom by a process that begins by inserting the Coulomb potential into the Schrödinger equation. Following use of multiple differential equations, the analysis produces a recursion relation for the Laguerre polynomials. The outcome is the spectrum of hydrogen-atom energy states (labeled by quantum numbers n and l). Using ideas drawn from SUSY, the final result can be derived with greater ease, in much the same way that operator methods are used to solve the harmonic oscillator. A similar supersymmetric approach can also be used to more accurately find the hydrogen spectrum using the Dirac equation. Oddly enough, this approach is analogous to the way Erwin Schrödinger first solved the hydrogen atom. He did not call his solution supersymmetric, as SUSY was thirty years in the future.
The SUSY solution of the hydrogen atom is only one example of the very general class of solutions which SUSY provides to shape-invariant potentials, a category which includes most potentials taught in introductory quantum mechanics courses.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then called partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY concepts have provided useful extensions to the WKB approximation in the form of a modified version of the Bohr-Sommerfeld quantization condition. In addition, SUSY has been applied to non-quantum statistical mechanics through the Fokker–Planck equation, showing that even if the original inspiration in high-energy particle physics turns out to be a blind alley, its investigation has brought about many useful benefits.
Example: the harmonic oscillator
The Schrödinger equation for the harmonic oscillator takes the form

$$H^{\rm HO}\psi_n(x) = \left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2}{2}x^2\right)\psi_n(x) = E_n^{\rm HO}\psi_n(x),$$

where $\psi_n$ is the $n$th energy eigenstate of $H^{\rm HO}$ with energy $E_n^{\rm HO}$. We want to find an expression for $E_n^{\rm HO}$ in terms of $n$. We define the operators

$$A = \frac{\hbar}{\sqrt{2m}}\frac{d}{dx} + W(x)$$

and

$$A^\dagger = -\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} + W(x),$$

where $W(x)$, which we need to choose, is called the superpotential of $H^{\rm HO}$. We also define the aforementioned partner Hamiltonians $H^{(1)}$ and $H^{(2)}$ as

$$H^{(1)} = A^\dagger A, \qquad H^{(2)} = AA^\dagger.$$

A zero energy ground state $\psi_0^{(1)}(x)$ of $H^{(1)}$ would satisfy the equation

$$A\psi_0^{(1)}(x) = \left(\frac{\hbar}{\sqrt{2m}}\frac{d}{dx} + W(x)\right)\psi_0^{(1)}(x) = 0.$$

Assuming that we know the ground state of the harmonic oscillator, $\psi_0(x) \propto e^{-m\omega x^2/(2\hbar)}$, we can solve for $W(x)$ as

$$W(x) = -\frac{\hbar}{\sqrt{2m}}\,\frac{\psi_0'(x)}{\psi_0(x)} = \sqrt{\frac{m}{2}}\,\omega x.$$

We then find that

$$H^{(1)} = H^{\rm HO} - \frac{\hbar\omega}{2}, \qquad H^{(2)} = H^{\rm HO} + \frac{\hbar\omega}{2}.$$

We can now see that

$$H^{(2)} = H^{(1)} + \hbar\omega.$$

This is a special case of shape invariance, discussed below. Taking without proof the introductory theorem mentioned above, it is apparent that the spectrum of $H^{(1)}$ will start with $E_0 = 0$ and continue upwards in steps of $\hbar\omega$. The spectra of $H^{\rm HO}$ and $H^{(2)}$ will have the same even spacing, but will be shifted up by amounts $\frac{\hbar\omega}{2}$ and $\hbar\omega$, respectively. It follows that the spectrum of $H^{\rm HO}$ is therefore the familiar $E_n^{\rm HO} = \hbar\omega\left(n + \frac{1}{2}\right)$.
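The partner-spectrum structure is easy to check numerically. Below is a minimal finite-difference sketch; the grid sizes, the units $\hbar = m = \omega = 1$, and the choice $W(x) = x$ are our assumptions for the illustration:

```python
import numpy as np

# Discretize A = (1/sqrt(2)) (d/dx + W(x)) with W(x) = x on a grid.
N, L = 1000, 16.0
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]

# Central-difference first derivative (antisymmetric matrix)
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2*h)
W = np.diag(x)

A = (D + W) / np.sqrt(2)
Ad = (-D + W) / np.sqrt(2)   # the adjoint, since D is antisymmetric

E1 = np.sort(np.linalg.eigvalsh(Ad @ A))   # H1 = A†A: ~0, 1, 2, ...
E2 = np.sort(np.linalg.eigvalsh(A @ Ad))   # H2 = AA†: ~1, 2, 3, ...
print(np.round(E1[:4], 3), np.round(E2[:4], 3))
```

Up to discretization error, every eigenvalue of $H^{(2)}$ appears in the spectrum of $H^{(1)}$, which has one extra zero-energy state.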
SUSY QM superalgebra
In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position $x$ and momentum $p$ have the commutator $[x,p] = i$. (Here, we use "natural units" where the Planck constant is set equal to 1.) A more intricate case is the algebra of angular momentum operators; these quantities are closely connected to the rotational symmetries of three-dimensional space. To generalize this concept, we define an anticommutator, which relates operators the same way as an ordinary commutator, but with the opposite sign:

$$\{A, B\} = AB + BA.$$
If operators are related by anticommutators as well as commutators, we say they are part of a Lie superalgebra. Let's say we have a quantum system described by a Hamiltonian $\mathcal{H}$ and a set of $N$ self-adjoint operators $Q_i$. We shall call this system supersymmetric if the following anticommutation relation is valid for all $i, j = 1, \ldots, N$:

$$\{Q_i, Q_j\} = \mathcal{H}\,\delta_{ij}.$$

If this is the case, then we call the $Q_i$ the system's supercharges.
Example
Let's look at the example of a one-dimensional nonrelativistic particle with a 2D (i.e., two states) internal degree of freedom called "spin" (it's not really spin because "real" spin is a property of 3D particles). Let $b$ be an operator which transforms a "spin up" particle into a "spin down" particle. Its adjoint $b^\dagger$ then transforms a spin down particle into a spin up particle; the operators are normalized such that the anticommutator $\{b, b^\dagger\} = 1$. And of course, $b^2 = 0$. Let $p$ be the momentum of the particle and $x$ be its position with $[x, p] = i$. Let $W$ (the "superpotential") be an arbitrary complex analytic function of $x$ and define the supersymmetric operators

$$Q_1 = \frac{1}{2}\left[(p - iW)b + (p + iW^\dagger)b^\dagger\right],$$
$$Q_2 = \frac{i}{2}\left[(p - iW)b - (p + iW^\dagger)b^\dagger\right].$$

Note that $Q_1$ and $Q_2$ are self-adjoint. Let the Hamiltonian be

$$\mathcal{H} = \{Q_1, Q_1\} = \{Q_2, Q_2\} = \frac{(p + \Im W)^2}{2} + \frac{(\Re W)^2}{2} + \frac{(\Re W)'}{2}\left(bb^\dagger - b^\dagger b\right),$$

where $W'$ is the derivative of $W$. Also note that $\{Q_1, Q_2\} = 0$. This is nothing other than N = 2 supersymmetry. Note that $\Im W$ acts like an electromagnetic vector potential.
Let's also call the spin down state "bosonic" and the spin up state "fermionic". This is only in analogy to quantum field theory and should not be taken literally. Then, $Q_1$ and $Q_2$ map "bosonic" states into "fermionic" states and vice versa.
Reformulating this a bit:

Define

$$Q = Q_1 + iQ_2 = (p + iW^\dagger)\,b^\dagger$$

and,

$$Q^\dagger = Q_1 - iQ_2 = (p - iW)\,b,$$

and note that

$$Q^2 = (Q^\dagger)^2 = 0 \qquad \text{and} \qquad \{Q, Q^\dagger\} = 2\mathcal{H}.$$
An operator is "bosonic" if it maps "bosonic" states to "bosonic" states and "fermionic" states to "fermionic" states. An operator is "fermionic" if it maps "bosonic" states to "fermionic" states and vice versa. Any operator can be expressed uniquely as the sum of a bosonic operator and a fermionic operator. Define the supercommutator [,} as follows: Between two bosonic operators or a bosonic and a fermionic operator, it is none other than the commutator but between two fermionic operators, it is an anticommutator.
Then, $x$ and $p$ are bosonic operators and $b$, $b^\dagger$, $Q$ and $Q^\dagger$ are fermionic operators.
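As a toy illustration of the supercommutator and of the algebra of $b$ and $b^\dagger$, here is a minimal sketch using 2×2 matrices for the spin degree of freedom (the function name and the basis convention are ours):

```python
import numpy as np

def supercommutator(A, B, A_fermionic, B_fermionic):
    """[A, B}: anticommutator if both operators are fermionic,
    ordinary commutator otherwise."""
    if A_fermionic and B_fermionic:
        return A @ B + B @ A
    return A @ B - B @ A

b = np.array([[0., 1.], [0., 0.]])   # maps spin up -> spin down (basis: down, up)
bd = b.T                             # its adjoint

print(supercommutator(b, bd, True, True))  # identity matrix: {b, b†} = 1
print(b @ b)                               # zero matrix: b² = 0
```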
Let's work in the Heisenberg picture where $x$, $b$ and $b^\dagger$ are functions of time.

Then:

$$\dot{x} = p + \Im W, \qquad \dot{b} = i(\Re W)'\,b, \qquad \dot{b}^\dagger = -i(\Re W)'\,b^\dagger.$$
This is nonlinear in general: i.e., $x(t)$, $b(t)$ and $b^\dagger(t)$ do not form a linear SUSY representation because $(\Re W)'$ isn't necessarily linear in $x$. To avoid this problem, define the self-adjoint operator $F = \Re W(x)$ and treat it as an independent coordinate of the multiplet $(x, b, b^\dagger, F)$. Then the supersymmetry transformations act on the multiplet without explicit dependence on the functional form of $W$, and we see that we have a linear SUSY representation.
Now let's introduce two "formal" quantities, $\theta$ and $\bar\theta$, with the latter being the adjoint of the former, such that

$$\theta^2 = \bar\theta^2 = 0, \qquad \theta\bar\theta = -\bar\theta\theta,$$

and both of them commute with bosonic operators but anticommute with fermionic ones.
Next, we define a construct called a superfield:

$$f(t, \bar\theta, \theta) = x(t) + i\theta b(t) - i\bar\theta b^\dagger(t).$$

$f$ is self-adjoint, and the supersymmetry acts on it through the formal variables $\theta$ and $\bar\theta$.
Incidentally, there's also a U(1)R symmetry, with $p$, $x$ and $W$ having zero R-charges, $b^\dagger$ having an R-charge of 1, and $b$ having an R-charge of −1.
Shape invariance
Suppose $W$ is real for all real $x$. Then we can simplify the expression for the Hamiltonian to

$$\mathcal{H} = \frac{p^2 + W^2}{2} + \frac{W'}{2}\left(bb^\dagger - b^\dagger b\right).$$
There are certain classes of superpotentials such that both the bosonic and fermionic Hamiltonians have similar forms. Specifically, writing the pair of partner potentials as $V_\pm(x) = \frac{1}{2}\left(W^2 \pm W'\right)$, shape invariance is the condition

$$V_+(x; a_1) = V_-(x; a_2) + R(a_2),$$

where the $a$'s are parameters. For example, the hydrogen atom potential with angular momentum $\ell$ can be written this way.
This corresponds to $V_-$ for a superpotential of the form

$$W(r) \propto \frac{1}{\ell+1} - \frac{\ell+1}{r}$$

(up to constants fixed by the choice of units and the nuclear charge). This is the potential for angular momentum $\ell$ shifted by a constant. After solving the ground state, the supersymmetric operators can be used to construct the rest of the bound state spectrum.
In general, since $V_-$ and $V_+$ are partner potentials, they share the same energy spectrum except for the one extra ground energy. We can continue this process of finding partner potentials with the shape invariance condition, giving the following formula for the energy levels in terms of the parameters of the potential:

$$E_n = \sum_{i=2}^{n+1} R(a_i), \qquad E_0 = 0,$$

where the $a_i$ are the parameters for the multiple partnered potentials.
See also
Supersymmetry algebra
Superalgebra
Supersymmetric gauge theory
References
Further reading
F. Cooper, A. Khare and U. Sukhatme, "Supersymmetry and Quantum Mechanics", Phys.Rept.251:267–385, 1995.
D.S. Kulshreshtha, J.Q. Liang and H.J.W. Muller-Kirsten, "Fluctuation equations about classical field configurations and supersymmetric quantum mechanics", Annals Phys. 225:191-211, 1993.
G. Junker, "Supersymmetric Methods in Quantum and Statistical Physics", Springer-Verlag, Berlin, 1996
B. Mielnik and O. Rosas-Ortiz, "Factorization: Little or great algorithm?", J. Phys. A: Math. Gen. 37: 10007–10035, 2004
External links
References from INSPIRE-HEP
Quantum mechanics
Supersymmetry | Supersymmetric quantum mechanics | [
"Physics"
] | 2,168 | [
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
1,158,125 | https://en.wikipedia.org/wiki/DNA%20sequencing | DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery.
Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects and in numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases including various cancers, characterize antibody repertoire, and can be used to guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged.
The rapid advancements in DNA sequencing technology have played a crucial role in sequencing complete genomes of various life forms, including humans, as well as numerous animal, plant, and microbial species.
The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster.
Applications
DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropology.
Molecular biology
Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes and noncoding DNA (including regulatory sequences), associations with diseases and phenotypes, and identify potential drug targets.
Evolutionary biology
Since DNA is an informative macromolecule in terms of transmission from one generation to another, DNA sequencing is used in evolutionary biology to study how different organisms are related and how they evolved. In February 2021, scientists reported, for the first time, the sequencing of DNA from animal remains, a mammoth in this instance, over a million years old, the oldest DNA sequenced to date.
Metagenomics
The field of metagenomics involves identification of organisms present in a body of water, sewage, dirt, debris filtered from the air, or swab samples from organisms. Knowing which organisms are present in a particular environment is critical to research in ecology, epidemiology, microbiology, and other fields. Sequencing enables researchers to determine which types of microbes may be present in a microbiome, for example.
Virology
As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus. Viral genomes can be based in DNA or RNA. RNA viruses are more time-sensitive for genome sequencing, as they degrade faster in clinical samples. Traditional Sanger sequencing and next-generation sequencing are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. Recently, NGS has surpassed traditional Sanger as the most popular approach for generating viral genomes.
During the 1997 avian influenza outbreak, viral sequencing determined that the influenza sub-type originated through reassortment between quail and poultry. This led to legislation in Hong Kong that prohibited selling live quail and poultry together at market. Viral sequencing can also be used to estimate when a viral outbreak began by using a molecular clock technique.
Medicine
Medical technicians may sequence genes (or, theoretically, full genomes) from patients to determine if there is risk of genetic diseases. This is a form of genetic testing, though some genetic tests may not involve DNA sequencing.
As of 2013 DNA sequencing was increasingly used to diagnose and treat rare diseases. As more and more genes are identified that cause rare genetic diseases, molecular diagnoses for patients become more mainstream. DNA sequencing allows clinicians to identify genetic diseases, improve disease management, provide reproductive counseling, and more effective therapies. Gene sequencing panels are used to identify multiple potential genetic causes of a suspected disorder.
Also, DNA sequencing may be useful for determining a specific bacteria, to allow for more precise antibiotics treatments, hereby reducing the risk of creating antimicrobial resistance in bacteria populations.
Forensic investigation
DNA sequencing may be used along with DNA profiling methods for forensic identification and paternity testing; the technology has evolved tremendously over the last few decades to ultimately link a DNA print to what is under investigation. The DNA patterns in fingerprints, saliva, hair follicles, and other bodily samples uniquely separate each living organism from another, making DNA an invaluable tool in forensic science. DNA testing can detect specific genomes in a DNA strand to produce a unique and individualized pattern, which can be used to identify individuals or determine their relationships.
Advancements in DNA sequencing technology have made it possible to analyze and compare large amounts of genetic data quickly and accurately, allowing investigators to gather evidence and solve crimes more efficiently. The technology has been applied to forensic identification, paternity testing, and human identification in cases where traditional identification methods are unavailable or unreliable, and has enabled new forensic techniques such as DNA phenotyping, which allows investigators to predict an individual's physical characteristics from genetic data. Beyond forensic science, DNA sequencing supports medical research and diagnosis, by identifying genetic mutations and variations associated with certain diseases and disorders, and conservation biology, by characterizing the genetic diversity of endangered species. Its use also raises ethical and legal considerations, including the privacy and security of genetic data and the potential for misuse or discrimination based on genetic information, prompting ongoing debates about regulations and guidelines for its responsible use.
The four canonical bases
The canonical structure of DNA has four bases: thymine (T), adenine (A), cytosine (C), and guanine (G). DNA sequencing is the determination of the physical order of these bases in a molecule of DNA. However, there are many other bases that may be present in a molecule. In some viruses (specifically, bacteriophages), cytosine may be replaced by hydroxymethyl- or hydroxymethylglucose-cytosine. In mammalian DNA, variant bases with methyl groups or phosphosulfate may be found. Depending on the sequencing technique, a particular modification, e.g., the 5mC (5-methylcytosine) common in humans, may or may not be detected.
In almost all organisms, DNA is synthesized in vivo using only the four canonical bases; modifications that occur post-replication create other bases such as 5-methylcytosine. However, some bacteriophages can incorporate a non-standard base directly.
In addition to modifications, DNA is under constant assault by environmental agents such as UV light and oxygen radicals. At present, the presence of such damaged bases is not detected by most DNA sequencing methods, although PacBio has published on this.
History
Discovery of DNA structure and function
Deoxyribonucleic acid (DNA) was first discovered and isolated by Friedrich Miescher in 1869, but it remained under-studied for many decades because proteins, rather than DNA, were thought to hold the genetic blueprint to life. This situation changed after 1944 as a result of some experiments by Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrating that purified DNA could change one strain of bacteria into another. This was the first time that DNA was shown capable of transforming the properties of cells.
In 1953, James Watson and Francis Crick put forward their double-helix model of DNA, based on crystallized X-ray structures being studied by Rosalind Franklin. According to the model, DNA is composed of two strands of nucleotides coiled around each other, linked together by hydrogen bonds and running in opposite directions. Each strand is composed of four complementary nucleotides – adenine (A), cytosine (C), guanine (G) and thymine (T) – with an A on one strand always paired with T on the other, and C always paired with G. They proposed that such a structure allowed each strand to be used to reconstruct the other, an idea central to the passing on of hereditary information between generations.
The foundation for sequencing proteins was first laid by the work of Frederick Sanger who by 1955 had completed the sequence of all the amino acids in insulin, a small protein secreted by the pancreas. This provided the first conclusive evidence that proteins were chemical entities with a specific molecular pattern rather than a random mixture of material suspended in fluid. Sanger's success in sequencing insulin spurred on x-ray crystallographers, including Watson and Crick, who by now were trying to understand how DNA directed the formation of proteins within a cell. Soon after attending a series of lectures given by Frederick Sanger in October 1954, Crick began developing a theory which argued that the arrangement of nucleotides in DNA determined the sequence of amino acids in proteins, which in turn helped determine the function of a protein. He published this theory in 1958.
RNA sequencing
RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmark of RNA sequencing is the sequence of the first complete gene and the complete genome of Bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium), in 1972 and 1976. Traditional RNA sequencing methods require the creation of a cDNA molecule which must be sequenced.
Traditional RNA Sequencing Methods
Traditional RNA sequencing methods involve several steps (a toy sketch of the first step follows the list):
1) Reverse transcription: the RNA molecule is first converted into a complementary DNA (cDNA) molecule using an enzyme called reverse transcriptase.
2) Amplification: the cDNA molecule is then amplified by the polymerase chain reaction (PCR), which produces multiple copies.
3) Sequencing: the amplified cDNA is then sequenced using a technique such as Sanger sequencing or Maxam-Gilbert sequencing.
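As referenced above, a toy sketch of step 1 in Python (purely illustrative: real reverse transcription is an enzymatic process, and primer and chemistry details are omitted):

```python
def reverse_transcribe(rna: str) -> str:
    """Toy model of reverse transcription: return the cDNA strand
    complementary and antiparallel to an RNA template (read 5'->3')."""
    pair = {'A': 'T', 'U': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(pair[base] for base in reversed(rna.upper()))

print(reverse_transcribe("AUGGCC"))  # -> GGCCAT
```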
Challenges and Limitations
Traditional RNA sequencing methods have several limitations. For example:
They require the creation of a cDNA molecule, which can be time-consuming and labor-intensive.
They are prone to errors and biases, which can affect the accuracy of the sequencing results.
They are limited in their ability to detect rare or low-abundance transcripts.
Advances in RNA Sequencing Technology
In recent years, advances in RNA sequencing technology have addressed some of these limitations. New methods such as next-generation sequencing (NGS) and single-molecule real-time (SMRT) sequencing have enabled faster, more accurate, and more cost-effective sequencing of RNA molecules. These advances have opened up new possibilities for studying gene expression, identifying new genes, and understanding the regulation of gene expression.
Early DNA sequencing methods
The first method for determining DNA sequences involved a location-specific primer extension strategy established by Ray Wu, a geneticist, at Cornell University in 1970. DNA polymerase catalysis and specific nucleotide labeling, both of which figure prominently in current sequencing schemes, were used to sequence the cohesive ends of lambda phage DNA. Between 1970 and 1973, Wu, scientist Radha Padmanabhan and colleagues demonstrated that this method can be employed to determine any DNA sequence using synthetic location-specific primers.
Walter Gilbert and Allan Maxam at Harvard also developed sequencing methods, including one for "DNA sequencing by chemical degradation". In 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. Advancements in sequencing were aided by the concurrent development of recombinant DNA technology, allowing DNA samples to be isolated from sources other than viruses.
In 1977, Frederick Sanger then adopted a primer-extension strategy to develop more rapid DNA sequencing methods at the MRC Centre, Cambridge, UK. The technique was similar to his "Plus and Minus" strategy; however, it was based upon the selective incorporation of chain-terminating dideoxynucleotides (ddNTPs) by DNA polymerase during in vitro DNA replication. Sanger published this method in the same year.
Sequencing of full genomes
The first full DNA genome to be sequenced was that of bacteriophage φX174 in 1977. Medical Research Council scientists deciphered the complete DNA sequence of the Epstein-Barr virus in 1984, finding it contained 172,282 nucleotides. Completion of the sequence marked a significant turning point in DNA sequencing because it was achieved with no prior genetic profile knowledge of the virus.
A non-radioactive method for transferring the DNA molecules of sequencing reaction mixtures onto an immobilizing matrix during electrophoresis was developed by Herbert Pohl and co-workers in the early 1980s. This was followed by the commercialization of the DNA sequencer "Direct-Blotting-Electrophoresis-System GATC 1500" by GATC Biotech, which was used intensively in the framework of the EU genome-sequencing programme to determine the complete DNA sequence of the yeast Saccharomyces cerevisiae chromosome II. Leroy E. Hood's laboratory at the California Institute of Technology announced the first semi-automated DNA sequencing machine in 1986. This was followed by Applied Biosystems' marketing of the first fully automated sequencing machine, the ABI 370, in 1987 and by Dupont's Genesis 2000, which used a novel fluorescent labeling technique enabling all four dideoxynucleotides to be identified in a single lane. By 1990, the U.S. National Institutes of Health (NIH) had begun large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae at a cost of US$0.75 per base. Meanwhile, sequencing of human cDNA sequences called expressed sequence tags began in Craig Venter's lab, an attempt to capture the coding fraction of the human genome. In 1995, Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) published the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases and its publication in the journal Science marked the first published use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts.
By 2001, shotgun sequencing methods had been used to produce a draft sequence of the human genome.
High-throughput sequencing (HTS) methods
Several new methods for DNA sequencing were developed in the mid to late 1990s and were implemented in commercial DNA sequencers by 2000. Together these were called the "next-generation" or "second-generation" sequencing (NGS) methods, in order to distinguish them from the earlier methods, including Sanger sequencing. In contrast to the first generation of sequencing, NGS technology is typically characterized by being highly scalable, allowing the entire genome to be sequenced at once. Usually, this is accomplished by fragmenting the genome into small pieces, randomly sampling for a fragment, and sequencing it using one of a variety of technologies, such as those described below. An entire genome is possible because multiple fragments are sequenced at once (giving it the name "massively parallel" sequencing) in an automated process.
NGS technology has tremendously empowered researchers to look for insights into health, anthropologists to investigate human origins, and is catalyzing the "Personalized Medicine" movement. However, it has also opened the door to more room for error. There are many software tools to carry out the computational analysis of NGS data, often compiled at online platforms such as CSI NGS Portal, each with its own algorithm. Even the parameters within one software package can change the outcome of the analysis. In addition, the large quantities of data produced by DNA sequencing have also required development of new methods and programs for sequence analysis. Several efforts to develop standards in the NGS field have been attempted to address these challenges, most of which have been small-scale efforts arising from individual labs. Most recently, a large, organized, FDA-funded effort has culminated in the BioCompute standard.
On 26 October 1990, Roger Tsien, Pepi Ross, Margaret Fahnestock and Allan J Johnston filed a patent describing stepwise ("base-by-base") sequencing with removable 3' blockers on DNA arrays (blots and single DNA molecules).
In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing.
On 1 April 1997, Pascal Mayer and Laurent Farinelli submitted patents to the World Intellectual Property Organization describing DNA colony sequencing. The DNA sample preparation and random surface-polymerase chain reaction (PCR) arraying methods described in this patent, coupled to Roger Tsien et al.'s "base-by-base" sequencing method, is now implemented in Illumina's Hi-Seq genome sequencers.
In 1998, Phil Green and Brent Ewing of the University of Washington described their phred quality score for sequencer data analysis, a landmark analysis technique that gained widespread adoption, and which is still the most common metric for assessing the accuracy of a sequencing platform.
Lynx Therapeutics published and marketed massively parallel signature sequencing (MPSS), in 2000. This method incorporated a parallelized, adapter/ligation-mediated, bead-based sequencing technology and served as the first commercially available "next-generation" sequencing method, though no DNA sequencers were sold to independent laboratories.
Basic methods
Maxam-Gilbert sequencing
Allan Maxam and Walter Gilbert published a DNA sequencing method in 1977 based on chemical modification of DNA and subsequent cleavage at specific bases. Also known as chemical sequencing, this method allowed purified samples of double-stranded DNA to be used without further cloning. This method's use of radioactive labeling and its technical complexity discouraged extensive use after refinements in the Sanger methods had been made.
Maxam-Gilbert sequencing requires radioactive labeling at one 5' end of the DNA and purification of the DNA fragment to be sequenced. Chemical treatment then generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred.
This method is mostly obsolete as of 2023.
Chain-termination methods
The chain-termination method developed by Frederick Sanger and coworkers in 1977 soon became the method of choice, owing to its relative ease and reliability. When invented, the chain-terminator method used fewer toxic chemicals and lower amounts of radioactivity than the Maxam and Gilbert method. Because of its comparative ease, the Sanger method was soon automated and was the method used in the first generation of DNA sequencers.
Sanger sequencing is the method which prevailed from the 1980s until the mid-2000s. Over that period, great advances were made in the technique, such as fluorescent labelling, capillary electrophoresis, and general automation. These developments allowed much more efficient sequencing, leading to lower costs. The Sanger method, in mass production form, is the technology which produced the first human genome in 2001, ushering in the age of genomics. However, later in the decade, radically different approaches reached the market, bringing the cost per genome down from $100 million in 2001 to $10,000 in 2011.
Sequencing by synthesis
The objective for sequential sequencing by synthesis (SBS) is to determine the sequence of a DNA sample by detecting the incorporation of a nucleotide by a DNA polymerase. An engineered polymerase is used to synthesize a copy of a single strand of DNA and the incorporation of each nucleotide is monitored. The principle of real-time sequencing by synthesis was first described in 1993, with improvements published some years later. The key parts are highly similar for all embodiments of SBS and include (1) amplification of DNA (to enhance the subsequent signal) and attachment of the DNA to be sequenced to a solid support, (2) generation of single-stranded DNA on the solid support, (3) incorporation of nucleotides using an engineered polymerase, and (4) real-time detection of the incorporation of nucleotides. Steps 3–4 are repeated and the sequence is assembled from the signals obtained in step 4. This principle of real-time sequencing-by-synthesis has been used for almost all massive parallel sequencing instruments, including 454, PacBio, IonTorrent, Illumina and MGI.
Large-scale sequencing and de novo sequencing
Large-scale sequencing often aims at sequencing very long DNA pieces, such as whole chromosomes, although large-scale sequencing can also be used to generate very large numbers of short sequences, such as found in phage display. For longer targets such as chromosomes, common approaches consist of cutting (with restriction enzymes) or shearing (with mechanical forces) large DNA fragments into shorter DNA fragments. The fragmented DNA may then be cloned into a DNA vector and amplified in a bacterial host such as Escherichia coli. Short DNA fragments purified from individual bacterial colonies are individually sequenced and assembled electronically into one long, contiguous sequence. Studies have shown that adding a size selection step to collect DNA fragments of uniform size can improve sequencing efficiency and accuracy of the genome assembly. In these studies, automated sizing has proven to be more reproducible and precise than manual gel sizing.
The term "de novo sequencing" specifically refers to methods used to determine the sequence of DNA with no previously known sequence. De novo translates from Latin as "from the beginning". Gaps in the assembled sequence may be filled by primer walking. The different strategies have different tradeoffs in speed and accuracy; shotgun methods are often used for sequencing large genomes, but its assembly is complex and difficult, particularly with sequence repeats often causing gaps in genome assembly.
Most sequencing approaches use an in vitro cloning step to amplify individual DNA molecules, because their molecular detection methods are not sensitive enough for single molecule sequencing. Emulsion PCR isolates individual DNA molecules along with primer-coated beads in aqueous droplets within an oil phase. A polymerase chain reaction (PCR) then coats each bead with clonal copies of the DNA molecule followed by immobilization for later sequencing. Emulsion PCR is used in the methods developed by Margulies et al. (commercialized by 454 Life Sciences), Shendure and Porreca et al. (also known as "polony sequencing") and SOLiD sequencing (developed by Agencourt, later Applied Biosystems, now Life Technologies). Emulsion PCR is also used in the GemCode and Chromium platforms developed by 10x Genomics.
Shotgun sequencing
Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. This method requires the target DNA to be broken into random fragments. After sequencing individual fragments using the chain termination method, the sequences can be reassembled on the basis of their overlapping regions.
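To illustrate the overlap-based reassembly idea (a toy sketch only, not any production assembler), here is a greedy brute-force version that tries all fragment orderings and merges on maximal suffix/prefix overlaps; it is only viable for a handful of reads:

```python
from itertools import permutations

def merge(a, b, min_overlap=3):
    """Append b to a using the longest suffix/prefix overlap, if any."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(fragments):
    """Shortest superstring over all fragment orderings (brute force)."""
    best = None
    for order in permutations(fragments):
        seq = order[0]
        for frag in order[1:]:
            seq = merge(seq, frag)
            if seq is None:
                break
        if seq is not None and (best is None or len(seq) < len(best)):
            best = seq
    return best

reads = ["AGCTTAGC", "TAGCTGAC", "CTGACCTT"]
print(assemble(reads))  # AGCTTAGCTGACCTT
```

Real assemblers use overlap graphs or de Bruijn graphs instead of brute force, precisely because repeats and read errors make this naive approach break down at scale.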
High-throughput methods
High-throughput sequencing, which includes next-generation "short-read" and third-generation "long-read" sequencing methods, applies to exome sequencing, genome sequencing, genome resequencing, transcriptome profiling (RNA-Seq), DNA-protein interactions (ChIP-sequencing), and epigenome characterization.
The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences concurrently. High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing, as many as 500,000 sequencing-by-synthesis operations may be run in parallel. Such technologies led to the ability to sequence an entire human genome in as little as one day. Corporate leaders in the development of high-throughput sequencing products have included Illumina, Qiagen and ThermoFisher Scientific.
Long-read sequencing methods
Single molecule real time (SMRT) sequencing
SMRT sequencing is based on the sequencing by synthesis approach. The DNA is synthesized in zero-mode wave-guides (ZMWs) – small well-like containers with the capturing tools located at the bottom of the well. The sequencing is performed with use of unmodified polymerase (attached to the ZMW bottom) and fluorescently labelled nucleotides flowing freely in the solution. The wells are constructed in a way that only the fluorescence occurring by the bottom of the well is detected. The fluorescent label is detached from the nucleotide upon its incorporation into the DNA strand, leaving an unmodified DNA strand. According to Pacific Biosciences (PacBio), the SMRT technology developer, this methodology allows detection of nucleotide modifications (such as cytosine methylation). This happens through the observation of polymerase kinetics. This approach allows reads of 20,000 nucleotides or more, with average read lengths of 5 kilobases. In 2015, Pacific Biosciences announced the launch of a new sequencing instrument called the Sequel System, with 1 million ZMWs compared to 150,000 ZMWs in the PacBio RS II instrument. SMRT sequencing is referred to as "third-generation" or "long-read" sequencing.
Nanopore DNA sequencing
The DNA passing through the nanopore changes its ion current. This change is dependent on the shape, size and length of the DNA sequence. Each type of the nucleotide blocks the ion flow through the pore for a different period of time. The method does not require modified nucleotides and is performed in real time. Nanopore sequencing is referred to as "third-generation" or "long-read" sequencing, along with SMRT sequencing.
Early industrial research into this method was based on a technique called 'exonuclease sequencing', where the readout of electrical signals occurred as nucleotides passed by alpha(α)-hemolysin pores covalently bound with cyclodextrin. However the subsequent commercial method, 'strand sequencing', sequenced DNA bases in an intact strand.
Two main areas of nanopore sequencing in development are solid state nanopore sequencing, and protein based nanopore sequencing. Protein nanopore sequencing utilizes membrane protein complexes such as α-hemolysin, MspA (Mycobacterium smegmatis Porin A) or CssG, which show great promise given their ability to distinguish between individual and groups of nucleotides. In contrast, solid-state nanopore sequencing utilizes synthetic materials such as silicon nitride and aluminum oxide and it is preferred for its superior mechanical ability and thermal and chemical stability. The fabrication method is essential for this type of sequencing given that the nanopore array can contain hundreds of pores with diameters smaller than eight nanometers.
The concept originated from the idea that single stranded DNA or RNA molecules can be electrophoretically driven in a strict linear sequence through a biological pore that can be less than eight nanometers, and can be detected given that the molecules release an ionic current while moving through the pore. The pore contains a detection region capable of recognizing different bases, with each base generating various time specific signals corresponding to the sequence of bases as they cross the pore which are then evaluated. Precise control over the DNA transport through the pore is crucial for success. Various enzymes such as exonucleases and polymerases have been used to moderate this process by positioning them near the pore's entrance.
Short-read sequencing methods
Massively parallel signature sequencing (MPSS)
The first of the high-throughput sequencing technologies, massively parallel signature sequencing (or MPSS, also called next generation sequencing), was developed in the 1990s at Lynx Therapeutics, a company founded in 1992 by Sydney Brenner and Sam Eletr. MPSS was a bead-based method that used a complex approach of adapter ligation followed by adapter decoding, reading the sequence in increments of four nucleotides. This method made it susceptible to sequence-specific bias or loss of specific sequences. Because the technology was so complex, MPSS was only performed 'in-house' by Lynx Therapeutics and no DNA sequencing machines were sold to independent laboratories. Lynx Therapeutics merged with Solexa (later acquired by Illumina) in 2004, leading to the development of sequencing-by-synthesis, a simpler approach acquired from Manteia Predictive Medicine, which rendered MPSS obsolete. However, the essential properties of the MPSS output were typical of later high-throughput data types, including hundreds of thousands of short DNA sequences. In the case of MPSS, these were typically used for sequencing cDNA for measurements of gene expression levels.
Polony sequencing
The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full E. coli genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an E. coli genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific.
454 pyrosequencing
A parallelized version of pyrosequencing was developed by 454 Life Sciences, which has since been acquired by Roche Diagnostics. The method amplifies DNA inside water droplets in an oil solution (emulsion PCR), with each droplet containing a single DNA template attached to a single primer-coated bead that then forms a clonal colony. The sequencing machine contains many picoliter-volume wells each containing a single bead and sequencing enzymes. Pyrosequencing uses luciferase to generate light for detection of the individual nucleotides added to the nascent DNA, and the combined data are used to generate sequence reads. This technology provides intermediate read length and price per base compared to Sanger sequencing on one end and Solexa and SOLiD on the other.
Illumina (Solexa) sequencing
Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminators technology, and engineered polymerases. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents. In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli. It is based on "DNA clusters" or "DNA colonies", which involves the clonal amplification of DNA on a surface. The cluster technology was co-acquired with Lynx Therapeutics of California. Solexa Ltd. later merged with Lynx to form Solexa Inc.
In this method, DNA molecules and primers are first attached on a slide or flow cell and amplified with polymerase so that local clonal DNA colonies, later coined "DNA clusters", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. A camera takes images of the fluorescently labeled nucleotides. Then the dye, along with the terminal 3' blocker, is chemically removed from the DNA, allowing for the next cycle to begin. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera.
Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity. With an optimal configuration, the ultimately reachable instrument throughput is thus dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput can be multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera).
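The throughput relation described in the previous paragraph is simple arithmetic; a quick sketch with the stated figures (the 3.2 Gb human genome size is our assumption for the estimate):

```python
# nucleotides/second ~= (A/D rate) * cameras / (pixels per DNA colony)
ad_rate_hz = 10e6        # 10 MHz analog-to-digital conversion
cameras = 1
pixels_per_colony = 10   # ~10 pixels to visualize one colony optimally

nt_per_s = ad_rate_hz * cameras / pixels_per_colony   # 1e6 nt/s
genomes_per_hour = nt_per_s * 3600 / 3.2e9            # ~1.1 at 1x coverage
print(nt_per_s, round(genomes_per_hour, 2))
```

This reproduces the figures quoted above: roughly one million nucleotides per second, or about one human genome equivalent at 1x coverage per hour per camera.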
Combinatorial probe anchor synthesis (cPAS)
This method is an upgraded modification to combinatorial probe anchor ligation technology (cPAL) described by Complete Genomics which has since become part of Chinese genomics company BGI in 2013. The two companies have refined the technology to allow for longer read lengths, reaction time reductions and faster time to results. In addition, data are now generated as contiguous full-length reads in the standard FASTQ file format and can be used as-is in most short-read-based bioinformatics analysis pipelines.
The two technologies that form the basis for this high-throughput sequencing technology are DNA nanoballs (DNB) and patterned arrays for nanoball attachment to a solid surface. DNA nanoballs are simply formed by denaturing double stranded, adapter ligated libraries and ligating the forward strand only to a splint oligonucleotide to form a ssDNA circle. Faithful copies of the circles containing the DNA insert are produced utilizing Rolling Circle Amplification that generates approximately 300–500 copies. The long strand of ssDNA folds upon itself to produce a three-dimensional nanoball structure that is approximately 220 nm in diameter. Making DNBs replaces the need to generate PCR copies of the library on the flow cell and as such can remove large proportions of duplicate reads, adapter-adapter ligations and PCR induced errors.
The patterned array of positively charged spots is fabricated through photolithography and etching techniques followed by chemical modification to generate a sequencing flow cell. Each spot on the flow cell is approximately 250 nm in diameter and separated by 700 nm (centre to centre) from its neighbours, allowing easy attachment of a single negatively charged DNB to the flow cell and thus reducing under- or over-clustering on the flow cell.
Sequencing is then performed by addition of an oligonucleotide probe that attaches in combination to specific sites within the DNB. The probe acts as an anchor that then allows one of four single reversibly inactivated, labelled nucleotides to bind after flowing across the flow cell. Unbound nucleotides are washed away; laser excitation of the attached labels then produces fluorescence, and the signal is captured by cameras and converted to a digital output for base calling. The attached base has its terminator and label chemically cleaved at completion of the cycle. The cycle is repeated with another flow of free, labelled nucleotides across the flow cell to allow the next nucleotide to bind and have its signal captured. This process is completed a number of times (usually 50 to 300 times) to determine the sequence of the inserted piece of DNA at a rate of approximately 40 million nucleotides per second as of 2018.
SOLiD sequencing
Applied Biosystems' (now a Life Technologies brand) SOLiD technology employs sequencing by ligation. Here, a pool of all possible oligonucleotides of a fixed length are labeled according to the sequenced position. Oligonucleotides are annealed and ligated; the preferential ligation by DNA ligase for matching sequences results in a signal informative of the nucleotide at that position. Each base in the template is sequenced twice, and the resulting data are decoded according to the 2 base encoding scheme used in this method. Before sequencing, the DNA is amplified by emulsion PCR. The resulting beads, each containing single copies of the same DNA molecule, are deposited on a glass slide. The result is sequences of quantities and lengths comparable to Illumina sequencing. This sequencing by ligation method has been reported to have some issue sequencing palindromic sequences.
Ion Torrent semiconductor sequencing
Ion Torrent Systems Inc. (now owned by Life Technologies) developed a system based on using standard sequencing chemistry, but with a novel, semiconductor-based detection system. This method of sequencing is based on the detection of hydrogen ions that are released during the polymerisation of DNA, as opposed to the optical methods used in other sequencing systems. A microwell containing a template DNA strand to be sequenced is flooded with a single type of nucleotide. If the introduced nucleotide is complementary to the leading template nucleotide, it is incorporated into the growing complementary strand. This causes the release of a hydrogen ion that triggers a hypersensitive ion sensor, which indicates that a reaction has occurred. If homopolymer repeats are present in the template sequence, multiple nucleotides will be incorporated in a single cycle. This leads to a corresponding number of released hydrogen ions and a proportionally higher electronic signal.
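A minimal sketch of how homopolymers produce proportionally larger signals in such a flow-based system (Python; the cyclic flow order and the representation of the template as the strand being synthesized are simplifying assumptions for illustration):

```python
def flowgram(template, flow_order="TACG", n_flows=16):
    """Simulate per-flow incorporation counts. The signal in each flow is
    proportional to the number of bases incorporated, so a homopolymer of
    length k yields a k-fold signal in a single flow."""
    signals, pos = [], 0
    for i in range(n_flows):
        nuc = flow_order[i % len(flow_order)]
        count = 0
        while pos < len(template) and template[pos] == nuc:
            count += 1            # each incorporation releases one H+ ion
            pos += 1
        signals.append((nuc, count))
    return signals

# The 'GGG' homopolymer appears as a single flow with a 3x signal:
print(flowgram("TAGGGC"))
```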
DNA nanoball sequencing
DNA nanoball sequencing is a type of high throughput sequencing technology used to determine the entire genomic sequence of an organism. The company Complete Genomics uses this technology to sequence samples submitted by independent researchers. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Unchained sequencing by ligation is then used to determine the nucleotide sequence. This method of DNA sequencing allows large numbers of DNA nanoballs to be sequenced per run and at low reagent costs compared to other high-throughput sequencing platforms. However, only short sequences of DNA are determined from each DNA nanoball which makes mapping the short reads to a reference genome difficult.
Heliscope single molecule sequencing
Heliscope sequencing is a method of single-molecule sequencing developed by Helicos Biosciences. It uses DNA fragments with added poly-A tail adapters which are attached to the flow cell surface. The next steps involve extension-based sequencing with cyclic washes of the flow cell with fluorescently labeled nucleotides (one nucleotide type at a time, as with the Sanger method). The reads are performed by the Heliscope sequencer. The reads are short, averaging 35 bp. What made this technology especially novel was that it was the first of its class to sequence non-amplified DNA, thus preventing any read errors associated with amplification steps. In 2009 a human genome was sequenced using the Heliscope; however, in 2012 the company went bankrupt.
Microfluidic Systems
There are two main microfluidic systems used to sequence DNA: droplet-based microfluidics and digital microfluidics. Microfluidic devices address many of the limitations of current sequencing arrays.
Abate et al. studied the use of droplet-based microfluidic devices for DNA sequencing. These devices can form and process picoliter-sized droplets at a rate of thousands per second. The devices were created from polydimethylsiloxane (PDMS) and used Förster resonance energy transfer (FRET) assays to read the sequences of DNA encompassed in the droplets. Each position on the array tested for a specific 15-base sequence.
Fair et al. used digital microfluidic devices to study DNA pyrosequencing. Significant advantages include the portability of the device, reduced reagent volumes, speed of analysis, mass-manufacturing abilities, and high throughput. This study provided a proof of concept showing that digital devices can be used for pyrosequencing; the study demonstrated sequencing by synthesis, in which enzymes extend the growing strand as nucleotides are added.
Boles et al. also studied pyrosequencing on digital microfluidic devices. They used an electro-wetting device to create, mix, and split droplets. The sequencing uses a three-enzyme protocol and DNA templates anchored with magnetic beads. The device was tested using two protocols and resulted in 100% accuracy based on raw pyrogram levels. The advantages of these digital microfluidic devices include size, cost, and achievable levels of functional integration.
DNA sequencing research, using microfluidics, also has the ability to be applied to the sequencing of RNA, using similar droplet microfluidic techniques, such as the method, inDrops. This shows that many of these DNA sequencing techniques will be able to be applied further and be used to understand more about genomes and transcriptomes.
Methods in development
DNA sequencing methods currently under development include reading the sequence as a DNA strand transits through nanopores (a method that is now commercial, although subsequent generations such as solid-state nanopores are still in development), and microscopy-based techniques, such as atomic force microscopy or transmission electron microscopy, which are used to identify the positions of individual nucleotides within long DNA fragments (>5,000 bp) by nucleotide labeling with heavier elements (e.g., halogens) for visual detection and recording.
Third generation technologies aim to increase throughput and decrease the time to result and cost by eliminating the need for excessive reagents and harnessing the processivity of DNA polymerase.
Tunnelling currents DNA sequencing
Another approach uses measurements of the electrical tunnelling currents across single-strand DNA as it moves through a channel. Depending on its electronic structure, each base affects the tunnelling current differently, allowing differentiation between different bases.
The use of tunnelling currents has the potential to sequence orders of magnitude faster than ionic current methods and the sequencing of several DNA oligomers and micro-RNA has already been achieved.
Sequencing by hybridization
Sequencing by hybridization is a non-enzymatic method that uses a DNA microarray. A single pool of DNA whose sequence is to be determined is fluorescently labeled and hybridized to an array containing known sequences. A strong hybridization signal from a given spot on the array identifies its sequence in the DNA being sequenced.
This method of sequencing utilizes the binding characteristics of a library of short single-stranded DNA molecules (oligonucleotides), also called DNA probes, to reconstruct a target DNA sequence. Non-specific hybrids are removed by washing and the target DNA is eluted. Hybrids are re-arranged such that the DNA sequence can be reconstructed. The benefit of this sequencing type is its ability to capture a large number of targets with homogeneous coverage. Large amounts of chemicals and starting DNA are usually required. However, with the advent of solution-based hybridization, much less equipment and fewer chemicals are necessary.
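A toy version of the reconstruction step can be sketched as follows (Python; it assumes every probe k-mer occurs exactly once in the target and that the starting k-mer is known, which real sequencing by hybridization cannot generally guarantee):

```python
def reconstruct(kmers, start):
    """Chain k-mers by (k-1)-base overlap, assuming unique extensions."""
    k = len(start)
    remaining = set(kmers) - {start}
    seq = start
    while remaining:
        suffix = seq[-(k - 1):]
        extensions = [m for m in remaining if m.startswith(suffix)]
        if len(extensions) != 1:
            break                  # ambiguous or missing extension: stop
        seq += extensions[0][-1]
        remaining.remove(extensions[0])
    return seq

# Hybridization spectrum (the set of 4-mers) of the target "ATGGCGT":
spectrum = {"ATGG", "TGGC", "GGCG", "GCGT"}
print(reconstruct(spectrum, "ATGG"))   # -> ATGGCGT
```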
Sequencing with mass spectrometry
Mass spectrometry may be used to determine DNA sequences. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry, or MALDI-TOF MS, has specifically been investigated as an alternative method to gel electrophoresis for visualizing DNA fragments. With this method, DNA fragments generated by chain-termination sequencing reactions are compared by mass rather than by size. The mass of each nucleotide is different from the others and this difference is detectable by mass spectrometry. Single-nucleotide mutations in a fragment can be more easily detected with MS than by gel electrophoresis alone. MALDI-TOF MS can more easily detect differences between RNA fragments, so researchers may indirectly sequence DNA with MS-based methods by converting it to RNA first.
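The mass-difference principle can be sketched numerically (Python; the residue masses are approximate average masses per internal DNA nucleotide, quoted only for illustration, and end-group masses are ignored):

```python
# Approximate average masses (Da) of nucleotide residues within a DNA chain.
RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

def mass_ladder(seq):
    """Relative masses of the nested chain-termination fragments of `seq`."""
    masses, total = [], 0.0
    for base in seq:
        total += RESIDUE_MASS[base]
        masses.append(round(total, 2))
    return masses

frags = mass_ladder("GATTC")
print(frags)
# Consecutive fragments differ by one residue mass, so the base identity at
# each position can be read off from the mass difference between neighbours:
print([round(b - a, 2) for a, b in zip(frags, frags[1:])])
```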
The higher resolution of DNA fragments permitted by MS-based methods is of special interest to researchers in forensic science, as they may wish to find single-nucleotide polymorphisms in human DNA samples to identify individuals. These samples may be highly degraded so forensic researchers often prefer mitochondrial DNA for its higher stability and applications for lineage studies. MS-based sequencing methods have been used to compare the sequences of human mitochondrial DNA from samples in a Federal Bureau of Investigation database and from bones found in mass graves of World War I soldiers.
Early chain-termination and TOF MS methods demonstrated read lengths of up to 100 base pairs. Researchers have been unable to exceed this average read size; like chain-termination sequencing alone, MS-based DNA sequencing may not be suitable for large de novo sequencing projects. Even so, a recent study did use the short sequence reads and mass spectrometry to compare single-nucleotide polymorphisms in pathogenic Streptococcus strains.
Microfluidic Sanger sequencing
In microfluidic Sanger sequencing the entire thermocycling amplification of DNA fragments as well as their separation by electrophoresis is done on a single glass wafer (approximately 10 cm in diameter), thus reducing reagent usage as well as cost. In some instances researchers have shown that they can increase the throughput of conventional sequencing through the use of microchips. Further research is still needed to make this use of the technology effective.
Microscopy-based techniques
This approach directly visualizes the sequence of DNA molecules using electron microscopy. The first identification of DNA base pairs within intact DNA molecules was demonstrated by enzymatically incorporating modified bases containing atoms of increased atomic number, followed by direct visualization and identification of the individually labeled bases within a synthetic 3,272 base-pair DNA molecule and a 7,249 base-pair viral genome.
RNAP sequencing
This method is based on use of RNA polymerase (RNAP), which is attached to a polystyrene bead. One end of the DNA to be sequenced is attached to another bead, with both beads being placed in optical traps. RNAP motion during transcription brings the beads closer together and changes their relative distance, which can then be recorded at single-nucleotide resolution. The sequence is deduced based on four readouts with lowered concentrations of each of the four nucleotide types, similarly to the Sanger method. Sequence information is deduced by comparing the known sequence regions to the unknown sequence regions.
In vitro virus high-throughput sequencing
A method has been developed to analyze full sets of protein interactions using a combination of 454 pyrosequencing and an in vitro virus mRNA display method. Specifically, this method covalently links proteins of interest to the mRNAs encoding them, then detects the mRNA pieces using reverse transcription PCRs. The mRNA may then be amplified and sequenced. The combined method was titled IVV-HiTSeq and can be performed under cell-free conditions, though its results may not be representative of in vivo conditions.
Market share
While there are many different ways to sequence DNA, only a few dominate the market. In 2022, Illumina had about 80% of the market; the rest was shared by only a few players (PacBio, Oxford Nanopore, 454, MGI).
Sample preparation
The success of any DNA sequencing protocol relies upon the DNA or RNA sample extraction and preparation from the biological material of interest.
A successful DNA extraction will yield a DNA sample with long, non-degraded strands.
A successful RNA extraction will yield a RNA sample that should be converted to complementary DNA (cDNA) using reverse transcriptase—a DNA polymerase that synthesizes a complementary DNA based on existing strands of RNA in a PCR-like manner. Complementary DNA can then be processed the same way as genomic DNA.
After DNA or RNA extraction, samples may require further preparation depending on the sequencing method. For Sanger sequencing, either cloning procedures or PCR are required prior to sequencing. In the case of next-generation sequencing methods, library preparation is required before processing. Assessing the quality and quantity of nucleic acids both after extraction and after library preparation identifies degraded, fragmented, and low-purity samples and yields high-quality sequencing data.
Development initiatives
In October 2006, the X Prize Foundation established an initiative to promote the development of full genome sequencing technologies, called the Archon X Prize, intending to award $10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 (US) per genome."
Each year the National Human Genome Research Institute, or NHGRI, promotes grants for new research and developments in genomics. 2010 grants and 2011 candidates include continuing work in microfluidic, polony and base-heavy sequencing methodologies.
Computational challenges
The sequencing technologies described here produce raw data that needs to be assembled into longer sequences such as complete genomes (sequence assembly). There are many computational challenges to achieve this, such as the evaluation of the raw sequence data, which is done by programs and algorithms such as Phred and Phrap. Other challenges arise from repetitive sequences that often prevent complete genome assemblies because they occur in many places of the genome. As a consequence, many sequences may not be assigned to particular chromosomes. The production of raw sequence data is only the beginning of its detailed bioinformatic analysis, and new methods for sequencing and for correcting sequencing errors continue to be developed.
Read trimming
Sometimes, the raw reads produced by the sequencer are correct and precise only in a fraction of their length. Using the entire read may introduce artifacts in downstream analyses like genome assembly, SNP calling, or gene expression estimation. Two classes of trimming programs have been introduced, based on the window-based or the running-sum classes of algorithms, and numerous tools implementing one or the other are available.
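Both classes can be sketched compactly (Python; the window size and quality cutoff are arbitrary illustrative parameters, and the running-sum version follows Mott-style trimming logic under those assumptions):

```python
def window_trim(quals, window=4, cutoff=20):
    """Window-based trimming: cut the read where the mean quality of a
    sliding window first drops below the cutoff."""
    for i in range(len(quals) - window + 1):
        if sum(quals[i:i + window]) / window < cutoff:
            return quals[:i]
    return quals

def running_sum_trim(quals, cutoff=20):
    """Running-sum trimming: accumulate (q - cutoff), reset below zero,
    and keep the segment over which the running sum is maximal."""
    best = total = 0
    start = best_start = best_end = 0
    for i, q in enumerate(quals):
        total += q - cutoff
        if total < 0:
            total, start = 0, i + 1
        elif total > best:
            best, best_start, best_end = total, start, i + 1
    return quals[best_start:best_end]

quals = [35, 34, 33, 30, 28, 22, 15, 12, 8, 5]
print(window_trim(quals))        # trims the degrading 3' end
print(running_sum_trim(quals))
```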
Ethical issues
Human genetics has been included within the field of bioethics since the early 1970s, and the growth in the use of DNA sequencing (particularly high-throughput sequencing) has introduced a number of ethical issues. One key issue is the ownership of an individual's DNA and the data produced when that DNA is sequenced. Regarding the DNA molecule itself, the leading legal case on this topic, Moore v. Regents of the University of California (1990), ruled that individuals have no property rights to discarded cells or any profits made using these cells (for instance, as a patented cell line). However, individuals have a right to informed consent regarding removal and use of cells. Regarding the data produced through DNA sequencing, Moore gives the individual no rights to the information derived from their DNA.
As DNA sequencing becomes more widespread, the storage, security and sharing of genomic data has also become more important. For instance, one concern is that insurers may use an individual's genomic data to modify their quote, depending on the perceived future health of the individual based on their DNA. In May 2008, the Genetic Information Nondiscrimination Act (GINA) was signed in the United States, prohibiting discrimination on the basis of genetic information with respect to health insurance and employment. In 2012, the US Presidential Commission for the Study of Bioethical Issues reported that existing privacy legislation for DNA sequencing data such as GINA and the Health Insurance Portability and Accountability Act were insufficient, noting that whole-genome sequencing data was particularly sensitive, as it could be used to identify not only the individual from which the data was created, but also their relatives.
In most of the United States, DNA that is "abandoned", such as that found on a licked stamp or envelope, coffee cup, cigarette, chewing gum, household trash, or hair that has fallen on a public sidewalk, may legally be collected and sequenced by anyone, including the police, private investigators, political opponents, or people involved in paternity disputes. As of 2013, eleven states have laws that can be interpreted to prohibit "DNA theft".
Ethical issues have also been raised by the increasing use of genetic variation screening, both in newborns, and in adults by companies such as 23andMe. It has been asserted that screening for genetic variations can be harmful, increasing anxiety in individuals who have been found to have an increased risk of disease. For example, in one case noted in Time, doctors screening an ill baby for genetic variants chose not to inform the parents of an unrelated variant linked to dementia due to the harm it would cause to the parents. However, a 2011 study in The New England Journal of Medicine has shown that individuals undergoing disease risk profiling did not show increased levels of anxiety. The development of next-generation sequencing technologies such as nanopore-based sequencing has likewise raised further ethical concerns.
See also
Circular consensus sequencing
Linked-read sequencing
Notes
References
External links
A wikibook on next generation sequencing
Biotechnology
DNA
Genetic mapping
Molecular biology
Molecular biology techniques
1970 introductions
1970 in biology
1970 in biotechnology
1970 in science
1998 in technology | DNA sequencing | [
"Chemistry",
"Biology"
] | 11,394 | [
"Molecular biology techniques",
"DNA sequencing",
"Molecular biology"
] |
1,158,235 | https://en.wikipedia.org/wiki/Debye%20length | In plasmas and electrolytes, the Debye length (Debye radius or Debye–Hückel screening length), is a measure of a charge carrier's net electrostatic effect in a solution and how far its electrostatic effect persists. With each Debye length the charges are increasingly electrically screened and the electric potential decreases in magnitude by e. A Debye sphere is a volume whose radius is the Debye length. Debye length is an important parameter in plasma physics, electrolytes, and colloids (DLVO theory).
The Debye length for a plasma consisting of particles with density $n$, charge $q$, and temperature $T$ is given by $\lambda_{\rm D} = \sqrt{\varepsilon_0 k_{\rm B} T/(n q^2)}$. The corresponding Debye screening wavenumber is given by $k_{\rm D} = 1/\lambda_{\rm D}$. The analogous quantities at very low temperatures ($T \to 0$) are known as the Thomas–Fermi length and the Thomas–Fermi wavenumber, respectively. They are of interest in describing the behaviour of electrons in metals at room temperature and warm dense matter.
The Debye length is named after the Dutch-American physicist and chemist Peter Debye (1884–1966), a Nobel laureate in Chemistry.
Physical origin
The Debye length arises naturally in the description of a substance with mobile charges, such as a plasma, electrolyte solution, or semiconductor. In such a substance, charges naturally screen out electric fields induced in the substance, with a certain characteristic length. That characteristic length is the Debye length.
Its value can be mathematically derived for a system of $N$ different species of charged particles, where the $j$-th species carries charge $q_j$ and has concentration $n_j(\mathbf{r})$ at position $\mathbf{r}$.
The distribution of charged particles within this medium gives rise to an electric potential $\Phi(\mathbf{r})$ that satisfies Poisson's equation:
$$\varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^{N} q_j\, n_j(\mathbf{r}) - \rho_{\rm ext}(\mathbf{r}),$$
where $\varepsilon$ is the medium's permittivity, and $\rho_{\rm ext}$ is any static charge density that is not part of the medium.
The mobile charges not only contribute to $\Phi(\mathbf{r})$ but are also affected by it through the corresponding Coulomb force, $-q_j\, \nabla \Phi(\mathbf{r})$.
If we further assume the system to be at temperature $T$, then the concentrations $n_j(\mathbf{r})$ may be considered, under the assumptions of mean field theory, to tend toward the Boltzmann distribution,
$$n_j(\mathbf{r}) = n_j^0 \exp\!\left(-\frac{q_j \Phi(\mathbf{r})}{k_{\rm B} T}\right),$$
where $k_{\rm B}$ is the Boltzmann constant and $n_j^0$ is the mean concentration of charges of species $j$.
Identifying the instantaneous concentrations and potential in the Poisson equation with their mean-field counterparts in the Boltzmann distribution yields the Poisson–Boltzmann equation:
$$\varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^{N} q_j\, n_j^0 \exp\!\left(-\frac{q_j \Phi(\mathbf{r})}{k_{\rm B} T}\right) - \rho_{\rm ext}(\mathbf{r}).$$
Solutions to this nonlinear equation are known for some simple systems. Solutions for more general systems may be obtained in the high-temperature (weak coupling) limit, $q_j \Phi \ll k_{\rm B} T$, by Taylor expanding the exponential:
$$\exp\!\left(-\frac{q_j \Phi}{k_{\rm B} T}\right) \approx 1 - \frac{q_j \Phi}{k_{\rm B} T}.$$
This approximation yields the linearized Poisson–Boltzmann equation, which also is known as the Debye–Hückel equation:
$$\varepsilon \nabla^2 \Phi(\mathbf{r}) = \left(\sum_{j=1}^{N} \frac{n_j^0\, q_j^2}{k_{\rm B} T}\right) \Phi(\mathbf{r}) - \sum_{j=1}^{N} n_j^0 q_j - \rho_{\rm ext}(\mathbf{r}).$$
The second term on the right-hand side vanishes for systems that are electrically neutral. The term in parentheses, divided by $\varepsilon$, has the units of an inverse length squared and by dimensional analysis leads to the definition of the characteristic length scale
$$\lambda_{\rm D} = \left(\frac{\varepsilon\, k_{\rm B} T}{\sum_{j=1}^{N} n_j^0\, q_j^2}\right)^{1/2}.$$
Substituting this length scale into the Debye–Hückel equation and neglecting the second and third terms on the right side yields the much simplified form $\nabla^2 \Phi(\mathbf{r}) = \Phi(\mathbf{r})/\lambda_{\rm D}^2$. As the only characteristic length scale in the Debye–Hückel equation, $\lambda_{\rm D}$ sets the scale for variations in the potential and in the concentrations of charged species. All charged species contribute to the Debye length in the same way, regardless of the sign of their charges.
To illustrate Debye screening, one can consider the example of a point charge $Q$ placed in a plasma. The external charge density is then $\rho_{\rm ext} = Q\,\delta(\mathbf{r})$, and the resulting potential is
$$\Phi(\mathbf{r}) = \frac{Q}{4\pi\varepsilon r}\, e^{-r/\lambda_{\rm D}}.$$
The bare Coulomb potential is exponentially screened by the medium, over a distance of the Debye length: this is called Debye screening or shielding.
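A short numerical check of the screening factor (Python; purely illustrative, showing how quickly the bare Coulomb potential is suppressed when distance is measured in Debye lengths):

```python
import math

def screening_factor(r_over_lambda):
    """Ratio of the screened potential to the bare Coulomb potential at a
    distance of r Debye lengths from the point charge."""
    return math.exp(-r_over_lambda)

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r} Debye lengths: potential reduced to "
          f"{screening_factor(r):.4f} of the bare Coulomb value")
```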
The Debye length may be expressed in terms of the Bjerrum length $\lambda_{\rm B}$ as
$$\lambda_{\rm D} = \left(4\pi\, \lambda_{\rm B} \sum_{j=1}^{N} n_j^0\, z_j^2\right)^{-1/2},$$
where $z_j = q_j/e$ is the integer charge number that relates the charge of the $j$-th ionic species to the elementary charge $e$.
In a plasma
For a weakly collisional plasma, Debye shielding can be introduced in a very intuitive way by taking into account the granular character of such a plasma. Let us imagine a sphere about one of its electrons, and compare the number of electrons crossing this sphere with and without Coulomb repulsion. With repulsion, this number is smaller. Therefore, according to Gauss theorem, the apparent charge of the first electron is smaller than in the absence of repulsion. The larger the sphere radius, the larger is the number of deflected electrons, and the smaller the apparent charge: this is Debye shielding. Since the global deflection of particles includes the contributions of many other ones, the density of the electrons does not change, at variance with the shielding at work next to a Langmuir probe (Debye sheath). Ions bring a similar contribution to shielding, because of the attractive Coulombian deflection of charges with opposite signs.
This intuitive picture leads to an effective calculation of Debye shielding (see section II.A.2 of ). The assumption of a Boltzmann distribution is not necessary in this calculation: it works for whatever particle distribution function. The calculation also avoids approximating weakly collisional plasmas as continuous media. An N-body calculation reveals that the bare Coulomb acceleration of a particle by another one is modified by a contribution mediated by all other particles, a signature of Debye shielding (see section 8 of ). When starting from random particle positions, the typical time-scale for shielding to set in is the time for a thermal particle to cross a Debye length, i.e. the inverse of the plasma frequency. Therefore in a weakly collisional plasma, collisions play an essential role by bringing a cooperative self-organization process: Debye shielding. This shielding is important to get a finite diffusion coefficient in the calculation of Coulomb scattering (Coulomb collision).
In a non-isothermal plasma, the temperatures for electrons and heavy species may differ while the background medium may be treated as a vacuum, and the Debye length is
$$\lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B}/q_e^2}{n_e/T_e + \sum_j z_j^2\, n_j/T_i}}$$
where
λD is the Debye length,
ε0 is the permittivity of free space,
kB is the Boltzmann constant,
qe is the charge of an electron,
Te and Ti are the temperatures of the electrons and ions, respectively,
ne is the density of electrons,
nj is the density of atomic species j, with positive ionic charge zjqe
Even in a quasineutral cold plasma, where the ion contribution might seem larger owing to the lower ion temperature, the ion term is often dropped, giving
$$\lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B} T_e}{n_e q_e^2}},$$
although this is only valid when the mobility of ions is negligible compared to the process's timescale. A useful form of this equation is
$$\lambda_{\rm D} \approx 743\,\sqrt{T_e/n_e},$$
where $\lambda_{\rm D}$ is in cm, $T_e$ in eV, and $n_e$ in cm$^{-3}$.
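A sketch of both forms of the calculation (Python; the example density and temperature are typical laboratory-plasma values chosen only for illustration):

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m
KB = 1.381e-23      # Boltzmann constant, J/K
QE = 1.602e-19      # elementary charge, C

def debye_length_si(n_e_m3, T_e_K):
    """Electron Debye length in metres from the SI formula."""
    return math.sqrt(EPS0 * KB * T_e_K / (n_e_m3 * QE**2))

def debye_length_practical(T_e_eV, n_e_cm3):
    """Practical form: result in cm, with T_e in eV and n_e in cm^-3."""
    return 743.0 * math.sqrt(T_e_eV / n_e_cm3)

n_e, T_eV = 1e18, 1.0                       # example plasma: 1e18 m^-3, 1 eV
si = debye_length_si(n_e, T_eV * 11605.0)   # 1 eV is about 11605 K
practical = debye_length_practical(T_eV, n_e / 1e6) / 100.0   # cm -> m
print(f"SI formula:        {si:.3e} m")
print(f"Practical formula: {practical:.3e} m")   # the two forms agree
```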
Typical values
In space plasmas where the electron density is relatively low, the Debye length may reach macroscopic values, such as in the magnetosphere, solar wind, interstellar medium and intergalactic medium.
In an electrolyte solution
In an electrolyte or a colloidal suspension, the Debye length for a monovalent electrolyte is usually denoted with the symbol κ−1 and is given by
$$\kappa^{-1} = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_{\rm B} T}{2 e^2 I}},$$
where
I is the ionic strength of the electrolyte in number/m3 units,
ε0 is the permittivity of free space,
εr is the dielectric constant,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
e is the elementary charge,
or, for a symmetric monovalent electrolyte,
$$\kappa^{-1} = \sqrt{\frac{\varepsilon_r \varepsilon_0 R T}{2\times 10^3\, F^2 C_0}},$$
where
R is the gas constant,
F is the Faraday constant,
C0 is the electrolyte concentration in molar units (M or mol/L).
Alternatively,
$$\kappa^{-1} = \frac{1}{\sqrt{8\pi\, \lambda_{\rm B}\, N_{\rm A} \times 10^{-24}\, I}},$$
where $\lambda_{\rm B}$ is the Bjerrum length of the medium in nm,
and the factor $10^{-24}$ derives from transforming unit volume from cubic dm to cubic nm.
For deionized water at room temperature, at pH = 7, κ−1 ≈ 1 μm.
At room temperature (T ≈ 25 °C), one can consider in water the relation
$$\kappa^{-1}(\mathrm{nm}) = \frac{0.304}{\sqrt{I(\mathrm{M})}},$$
where
κ−1 is expressed in nanometres (nm),
I is the ionic strength expressed in molar (M or mol/L).
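For example (Python; the listed ionic strengths are illustrative, spanning deionized water to roughly physiological saline):

```python
import math

def debye_length_nm(ionic_strength_M):
    """Debye length in water at room temperature for a monovalent
    electrolyte, using the kappa^-1 = 0.304/sqrt(I) rule of thumb."""
    return 0.304 / math.sqrt(ionic_strength_M)

for I in (1e-7, 1e-3, 0.15):
    print(f"I = {I:g} M: Debye length ~ {debye_length_nm(I):.3g} nm")
```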
There is a method of estimating an approximate value of the Debye length in liquids using conductivity measurements, which is described in an ISO standard and the associated book.
In semiconductors
The Debye length has become increasingly significant in the modeling of solid state devices as improvements in lithographic technologies have enabled smaller geometries.
The Debye length of a semiconductor is given by
$$L_{\rm D} = \sqrt{\frac{\varepsilon\, k_{\rm B} T}{q^2 N_{\rm dop}}},$$
where
ε is the dielectric constant,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
q is the elementary charge, and
Ndop is the net density of dopants (either donors or acceptors).
When doping profiles exceed the Debye length, majority carriers no longer behave according to the distribution of the dopants. Instead, a measure of the profile of the doping gradients provides an "effective" profile that better matches the profile of the majority carrier density.
In the context of solids, Thomas–Fermi screening length may be required instead of Debye length.
See also
Bjerrum length
Debye–Falkenhagen effect
Plasma oscillation
Shielding effect
Screening effect
References
Further reading
Electricity
Electronics concepts
Colloidal chemistry
Plasma parameters
Electrochemistry
Length
Peter Debye | Debye length | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,922 | [
"Scalar physical quantities",
"Colloidal chemistry",
"Physical quantities",
"Distance",
"Quantity",
"Colloids",
"Size",
"Surface science",
"Electrochemistry",
"Length",
"Wikipedia categories named after physical quantities"
] |
1,159,033 | https://en.wikipedia.org/wiki/Boule%20%28crystal%29 | A boule is a single-crystal ingot produced by synthetic means.
A boule of silicon is the starting material for most of the integrated circuits used today. In the semiconductor industry synthetic boules can be made by a number of methods, such as the Bridgman technique and the Czochralski process, which result in a cylindrical rod of material.
In the Czochralski process a seed crystal is required to create a larger crystal, or ingot. This seed crystal is dipped into the pure molten silicon and slowly extracted. The molten silicon grows on the seed crystal in a crystalline fashion. As the seed is extracted the silicon solidifies and eventually a large, cylindrical boule is produced.
A semiconductor crystal boule is normally cut into circular wafers using an inside hole diamond saw or diamond wire saw, and each wafer is lapped and polished to provide substrates suitable for the fabrication of semiconductor devices on its surface.
The process is also used to create sapphires, which are used for substrates in the production of blue and white LEDs, optical windows in special applications and as the protective covers for watches.
References
Crystals
Semiconductor growth | Boule (crystal) | [
"Chemistry",
"Materials_science"
] | 234 | [
"Crystallography",
"Crystals"
] |
2,445,044 | https://en.wikipedia.org/wiki/Deep%20reactive-ion%20etching | Deep reactive-ion etching (DRIE) is a special subclass of reactive-ion etching (RIE). It enables highly anisotropic etch process used to create deep penetration, steep-sided holes and trenches in wafers/substrates, typically with high aspect ratios. It was developed for microelectromechanical systems (MEMS), which require these features, but is also used to excavate trenches for high-density capacitors for DRAM and more recently for creating through-silicon vias (TSVs) in advanced 3D wafer level packaging technology.
In DRIE, the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture which breaks the gas molecules into ions. The ions are accelerated towards, and react with the surface of the material being etched, forming another gaseous element. This is known as the chemical part of the reactive ion etching. There is also a physical part, if ions have enough energy, they can knock atoms out of the material to be etched without chemical reaction.
There are two main technologies for high-rate DRIE: cryogenic and Bosch, although the Bosch process is the only recognised production technique. Both Bosch and cryogenic processes can fabricate 90° (truly vertical) walls, but often the walls are slightly tapered, e.g. 88° ("reentrant") or 92° ("retrograde").
Another mechanism is sidewall passivation: SiOxFy functional groups (which originate from sulphur hexafluoride and oxygen etch gases) condense on the sidewalls, and protect them from lateral etching. As a combination of these processes, deep vertical structures can be made.
Cryogenic process
In cryogenic-DRIE, the wafer is chilled to −110 °C (163 K). The low temperature slows down the chemical reaction that produces isotropic etching. However, ions continue to bombard upward-facing surfaces and etch them away. This process produces trenches with highly vertical sidewalls. The primary issues with cryo-DRIE is that the standard masks on substrates crack under the extreme cold, plus etch by-products have a tendency of depositing on the nearest cold surface, i.e. the substrate or electrode.
Bosch process
The Bosch process, named after the German company Robert Bosch GmbH which patented the process, also known as pulsed or time-multiplexed etching, alternates repeatedly between two modes to achieve nearly vertical structures:
A standard, nearly isotropic plasma etch. The plasma contains some ions, which attack the wafer from a nearly vertical direction. Sulfur hexafluoride [SF6] is often used for silicon.
Deposition of a chemically inert passivation layer. (For instance, Octafluorocyclobutane [C4F8] source gas yields a substance similar to Teflon.)
Each phase lasts for several seconds. The passivation layer protects the entire substrate from further chemical attack and prevents further etching. However, during the etching phase, the directional ions that bombard the substrate attack the passivation layer at the bottom of the trench (but not along the sides). They collide with it and sputter it off, exposing the substrate to the chemical etchant.
These etch/deposit steps are repeated many times over, resulting in a large number of very small isotropic etch steps taking place only at the bottom of the etched pits. To etch through a 0.5 mm silicon wafer, for example, 100–1000 etch/deposit steps are needed. The two-phase process causes the sidewalls to undulate with an amplitude of about 100–500 nm. The cycle time can be adjusted: short cycles yield smoother walls, and long cycles yield a higher etch rate.
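The cycle arithmetic in the preceding paragraph can be made explicit (Python; the per-cycle depths are assumed illustrative values consistent with the range quoted above):

```python
def bosch_cycles(etch_depth_um, depth_per_cycle_um):
    """Number of etch/passivation cycles needed to reach a target depth."""
    return etch_depth_um / depth_per_cycle_um

# Etching through a 0.5 mm (500 um) silicon wafer:
for per_cycle in (0.5, 1.0, 5.0):          # um removed per cycle (assumed)
    print(f"{per_cycle} um/cycle -> {bosch_cycles(500, per_cycle):.0f} cycles")
```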
Applications
Etching depth typically depends on the application:
in DRAM memory circuits, capacitor trenches may be 10–20 μm deep,
in MEMS, DRIE is used for anything from a few micrometers to 0.5 mm.
in irregular chip dicing, DRIE is used with a novel hybrid soft/hard mask to achieve sub-millimeter etching to dice silicon dies into lego-like pieces with irregular shapes.
in flexible electronics, DRIE is used to make traditional monolithic CMOS devices flexible by reducing the thickness of silicon substrates to few to tens of micrometers.
DRIE is distinguished from RIE by its etch depth. Practical etch depths for RIE (as used in IC manufacturing) would be limited to around 10 μm at a rate up to 1 μm/min, while DRIE can etch features much greater, up to 600 μm or more with rates up to 20 μm/min or more in some applications.
DRIE of glass requires high plasma power, which makes it difficult to find suitable mask materials for truly deep etching. Polysilicon and nickel are used for 10–50 μm etched depths. In DRIE of polymers, a Bosch process with alternating steps of SF6 etching and C4F8 passivation takes place. Metal masks can be used; however, they are expensive to use since several additional photo and deposition steps are always required. Metal masks are not necessary, however, on various substrates (Si [up to 800 μm], InP [up to 40 μm] or glass [up to 12 μm]) if chemically amplified negative resists are used.
Gallium ion implantation can be used as an etch mask in cryo-DRIE. A combined nanofabrication process of focused ion beam and cryo-DRIE was first reported by N. Chekurov et al. in their article "The fabrication of silicon nanostructures by local gallium implantation and cryogenic deep reactive ion etching".
Precision machinery
DRIE has enabled the use of silicon mechanical components in high-end wristwatches. According to an engineer at Cartier, "There is no limit to geometric shapes with DRIE." With DRIE it is possible to obtain an aspect ratio of 30 or more, meaning that a surface can be etched with a vertical-walled trench 30 times deeper than its width.
This has allowed for silicon components to be substituted for some parts which are usually made of steel, such as the hairspring. Silicon is lighter and harder than steel, which carries benefits but makes the manufacturing process more challenging.
See also
Microelectromechanical systems
References
Semiconductor device fabrication
Microtechnology
Etching (microfabrication) | Deep reactive-ion etching | [
"Materials_science",
"Engineering"
] | 1,361 | [
"Semiconductor device fabrication",
"Materials science",
"Microtechnology",
"Etching (microfabrication)"
] |
2,447,137 | https://en.wikipedia.org/wiki/Cosmic%20ray%20spallation | Cosmic ray spallation, also known as the x-process, is a set of naturally occurring nuclear reactions causing nucleosynthesis; it refers to the formation of chemical elements from the impact of cosmic rays on an object. Cosmic rays are highly energetic charged particles from beyond Earth, ranging from protons, alpha particles, and nuclei of many heavier elements. About 1% of cosmic rays also consist of free electrons.
Cosmic rays cause spallation when a ray particle (e.g. a proton) impacts with matter, including other cosmic rays. The result of the collision is the expulsion of particles (protons, neutrons, and alpha particles) from the object hit. This process goes on not only in deep space, but in Earth's upper atmosphere and crustal surface (typically the upper ten meters) due to the ongoing impact of cosmic rays.
The process
Cosmic ray spallation is thought to be responsible for the abundance in the universe of some light elements—lithium, beryllium, and boron—as well as the isotope helium-3. This process (cosmogenic nucleosynthesis) was discovered somewhat by accident during the 1970s: models of Big Bang nucleosynthesis suggested that the amount of deuterium was too large to be consistent with the expansion rate of the universe and there was therefore great interest in processes that could generate deuterium after the Big Bang nucleosynthesis. Cosmic ray spallation was investigated as a possible process to generate deuterium. As it turned out, spallation could not generate much deuterium, but the new studies of spallation showed that this process could generate lithium, beryllium and boron; indeed, isotopes of these elements are over-represented in cosmic ray nuclei, as compared with solar atmospheres (whereas hydrogen and helium are present in about primordial ratios in cosmic rays).
An example of cosmic ray spallation is a neutron hitting a nitrogen-14 nucleus in the Earth's atmosphere, yielding a proton, an alpha particle, and a beryllium-10 nucleus, which eventually decays to boron-10. Alternatively, a proton can hit oxygen-16, yielding two protons, a neutron, and again an alpha particle and a beryllium-10 nucleus. Boron can also be created directly. The beryllium and boron are brought down to the ground by rain. See Cosmogenic nuclide for a list of nuclides produced by cosmic ray spallation.
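A proposed spallation channel can be sanity-checked for conservation of mass number and charge with a short sketch (Python):

```python
# Each species is (charge number Z, mass number A).
SPECIES = {
    "n": (0, 1), "p": (1, 1), "alpha": (2, 4),
    "N-14": (7, 14), "O-16": (8, 16), "Be-10": (4, 10),
}

def balanced(reactants, products):
    """Check that total charge Z and mass number A are both conserved."""
    z_in = sum(SPECIES[s][0] for s in reactants)
    a_in = sum(SPECIES[s][1] for s in reactants)
    z_out = sum(SPECIES[s][0] for s in products)
    a_out = sum(SPECIES[s][1] for s in products)
    return (z_in, a_in) == (z_out, a_out)

# n + 14N -> p + alpha + 10Be, the atmospheric reaction from the text:
print(balanced(["n", "N-14"], ["p", "alpha", "Be-10"]))   # True
```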
The x-process in cosmic rays is the primary means of nucleosynthesis for the five stable isotopes of lithium, beryllium, and boron. As the proton–proton chain reaction cannot proceed beyond 4He due to the unbound nature of 5He and 5Li, and the triple-alpha process skips over all species between 4He and 12C, these elements are not produced in the main reactions of stellar nucleosynthesis. In addition, nuclei of these elements (such as 7Li) are relatively weakly bound, resulting in their rapid destruction in stars and no significant accumulation, although new theory suggests that 7Li is generated primarily in nova eruptions. It was thus postulated that another nucleosynthesis process occurring outside stars was necessary to explain their existence in the universe. This process is now known to occur in cosmic rays, where lower temperature and particle density favor reactions leading to the synthesis of lithium, beryllium, and boron.
In addition to the above light elements, tritium and isotopes of aluminium, carbon (carbon-14), phosphorus (phosphorus-32), chlorine, iodine and neon are formed within Solar System materials through cosmic ray spallation, and are termed cosmogenic nuclides. Since they remain trapped in the atmosphere or rock in which they formed, some can be very useful in the dating of materials by cosmogenic radionuclide dating, particularly in the geological field. In formation of a cosmogenic nuclide, a cosmic ray interacts with the nucleus of an in situ Solar System atom, causing cosmic ray spallation. These isotopes are produced within Earth materials such as rocks or soil, in Earth's atmosphere, and in extraterrestrial items such as meteorites. By measuring cosmogenic isotopes, scientists are able to gain insight into a range of geological and astronomical processes. There are both radioactive and stable cosmogenic isotopes. Some of the well-known naturally-occurring radioisotopes are tritium, carbon-14, and phosphorus-32.
The timing of their formation determines whether nuclides formed by cosmic ray spallation are termed primordial or are termed cosmogenic (a nuclide cannot belong to both classes). The stable nuclides of lithium, beryllium, and boron found on Earth are thought to have been formed by the same process as the cosmogenic nuclides but at an earlier time in cosmic ray spallation predominantly before the Solar System's formation, and thus they are by definition primordial nuclides and not cosmogenic. In contrast, the radioactive nuclide beryllium-7 falls into the same light element range but has a half-life too short for it to have been formed before the formation of the Solar System, so that it cannot be a primordial nuclide. Since the cosmic ray spallation route is the most likely source of beryllium-7 in the environment, that isotope is thus cosmogenic.
See also
Cosmic rays
Cosmogenic nuclide
Nuclear fission
Nucleosynthesis
Astrophysics
Spall, or spalling
Spallation
Spallation Neutron Source
ISIS neutron source
PSI Spallation Neutron Source (SINQ)
References
Further reading
External links
California Institute of Technology: The Cosmic Ray Isotope Spectrometer (CRIS)
Department of Physics and Astronomy, University of Leeds: Is the cosmic-ray residence time E^-0.6 (spallation) or E^-(1/3) (anisotropy, turbulence)?
Ultra Heavy Cosmic Ray Propagation Using New Spallation Cross-Section Expressions
Evidence for cosmic ray spallation production of helium and neon found in volcanoes
Cosmic rays
Nucleosynthesis | Cosmic ray spallation | [
"Physics",
"Chemistry"
] | 1,276 | [
"Nuclear fission",
"Physical phenomena",
"Astrophysics",
"Nucleosynthesis",
"Radiation",
"Nuclear physics",
"Nuclear fusion",
"Cosmic rays"
] |
2,447,304 | https://en.wikipedia.org/wiki/Percolation%20test | A percolation test (colloquially called a perc test) is a test to determine the water absorption rate of soil (that is, its capacity for percolation) in preparation for the building of a septic drain field (leach field) or infiltration basin. The results of a percolation test are required to design a septic system properly. In its broadest terms, percolation testing observes how quickly a known volume of water dissipates into the subsoil of a drilled hole of known surface area. While every jurisdiction will have laws regarding the exact calculations for the length of line, depth of pit, etc., the testing procedures are the same.
In general, sandy soil will absorb more water than soil with a high concentration of clay or where the water table is close to the surface.
Testing method
A percolation test consists of digging one or more holes in the soil of the proposed leach field to a specified depth, presoaking the holes by maintaining a high water level in the holes, then running the test by filling the holes to a specific level and timing the drop of the water level as the water percolates into the surrounding soil. There are various empirical formulae for determining the required size of a leach field based on the size of the facility, the percolation test results, and other parameters.
For leach line testing, at least three test holes are drilled or dug by hand, most commonly six to eight inches in diameter. These should be drilled to depths three to six feet below the surface. For better, more conclusive results, five drill holes are used in a pattern of one hole at each corner of the proposed leach field and one test hole in the center. Testing these holes will result in a value with units of minutes per inch. This value is then correlated to a predetermined county health code to establish the exact size of the leach field.
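The minutes-per-inch value is computed from the timed readings, as in this sketch (Python; the hole readings are invented example data):

```python
def perc_rate(drop_inches, elapsed_minutes):
    """Percolation rate in minutes per inch of water-level drop."""
    return elapsed_minutes / drop_inches

# Invented example readings: (inches dropped, minutes elapsed) per test hole.
holes = [(1.0, 12.0), (1.5, 21.0), (1.25, 15.0)]
rates = [perc_rate(d, t) for d, t in holes]
print([round(r, 1) for r in rates])
print(f"Site average: {sum(rates) / len(rates):.1f} min/inch")
```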
Testing for horizontal pits typically requires five to eight test holes drilled in a straight line or along a common contour from three to ten feet below the surface. This testing is identical to leach line testing, though the result is a different type of septic system established through a different calculation.
Vertical seepage pits differ slightly in testing methods due to their large size, but the primary testing method is essentially the same. A hole, typically three to four feet in diameter, is drilled to a depth of twenty or thirty feet (depending on the local groundwater table). A fire hose is used to fill the pit as quickly as possible, and then, again, its dissipation rate is observed. This rate calculates the size and number of pits necessary for a viable septic system.
Finally, a "deep hole" is drilled to find the water table or to approximately twelve feet (dry) for leach line systems and horizontal seepage pits. Exact depths will again depend on local health codes. In the case of a vertical seepage pit, local groundwater data may be used, or if the drill hole reaches groundwater, the pit will be backfilled again according to the county health code.
Alternatives
Some jurisdictions question the accuracy of a percolation test to assess soil treatment quality and instead utilize soil texture analysis—along with long-term acceptance rates (LTAR)—in place of or in addition to a percolation test.
References
How to Run a Percolation Test
Sewerage
Soil physics | Percolation test | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 692 | [
"Applied and interdisciplinary physics",
"Soil physics",
"Water pollution",
"Sewerage",
"Environmental engineering"
] |
2,448,955 | https://en.wikipedia.org/wiki/Velocity%20potential | A velocity potential is a scalar potential used in potential flow theory. It was introduced by Joseph-Louis Lagrange in 1788.
It is used in continuum mechanics, when a continuum occupies a simply-connected region and is irrotational. In such a case,
$$\nabla \times \mathbf{u} = 0,$$
where $\mathbf{u}$ denotes the flow velocity. As a result, $\mathbf{u}$ can be represented as the gradient of a scalar function $\varphi$:
$$\mathbf{u} = \nabla \varphi.$$
$\varphi$ is known as a velocity potential for $\mathbf{u}$.
A velocity potential is not unique. If $\varphi$ is a velocity potential, then $\varphi + f(t)$ is also a velocity potential for $\mathbf{u}$, where $f(t)$ is a scalar function of time and can be constant. Velocity potentials are unique up to a constant, or a function solely of the temporal variable.
The Laplacian of a velocity potential is equal to the divergence of the corresponding flow, $\nabla^2 \varphi = \nabla \cdot \mathbf{u}$. Hence if a velocity potential satisfies the Laplace equation, the flow is incompressible.
Unlike a stream function, a velocity potential can exist in three-dimensional flow.
Usage in acoustics
In theoretical acoustics, it is often desirable to work with the acoustic wave equation of the velocity potential $\varphi$ instead of pressure $p$ and/or particle velocity $\mathbf{u}$.
Solving the wave equation for either field $p$ or field $\mathbf{u}$ does not necessarily provide a simple answer for the other field. On the other hand, when $\varphi$ is solved for, not only is $\mathbf{u}$ found as given above, but $p$ is also easily found, from the (linearised) Bernoulli equation for irrotational and unsteady flow, as
$$p = -\rho \frac{\partial \varphi}{\partial t}.$$
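These relations can be checked symbolically for a simple one-dimensional plane wave (Python with SymPy; the harmonic potential chosen is an arbitrary example):

```python
import sympy as sp

x, t = sp.symbols("x t")
A, k, omega, rho = sp.symbols("A k omega rho", positive=True)

# Example velocity potential: a one-dimensional harmonic plane wave.
phi = A * sp.cos(k * x - omega * t)

u = sp.diff(phi, x)          # particle velocity, u = d(phi)/dx
p = -rho * sp.diff(phi, t)   # pressure, from the linearised Bernoulli equation

print(u)   # -A*k*sin(k*x - omega*t)
print(p)   # -A*omega*rho*sin(k*x - omega*t)
```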
See also
Vorticity
Hamiltonian fluid mechanics
Potential flow
Potential flow around a circular cylinder
Notes
External links
Joukowski Transform Interactive WebApp
Continuum mechanics
Physical quantities
Potentials | Velocity potential | [
"Physics",
"Chemistry",
"Mathematics"
] | 328 | [
"Physical phenomena",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Classical mechanics",
"Physical properties",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
2,449,023 | https://en.wikipedia.org/wiki/Upward%20continuation | Upward continuation is a method used in oil exploration and geophysics to estimate the values of a gravitational or magnetic field by using measurements at a lower elevation and extrapolating upward, assuming continuity. This technique is commonly used to merge different measurements to a common level so as to reduce scatter and allow for easier analysis.
See also
Petroleum geology
References
Upward Continuation in Schlumberger's Oilfield Glossary
Downward Continuation in Schlumberger's Oilfield Glossary
Geophysics
Petroleum geology | Upward continuation | [
"Physics",
"Chemistry"
] | 100 | [
"Applied and interdisciplinary physics",
"Petroleum stubs",
"Petroleum",
"Geophysics",
"Petroleum geology"
] |
2,449,166 | https://en.wikipedia.org/wiki/Index%20ellipsoid | In crystal optics, the index ellipsoid (also known as the optical indicatrix or sometimes as the dielectric ellipsoid) is a geometric construction which concisely represents the refractive indices and associated polarizations of light, as functions of the orientation of the wavefront, in a doubly-refractive crystal (provided that the crystal does not exhibit optical rotation). When this ellipsoid is cut through its center by a plane parallel to the wavefront, the resulting intersection (called a central section or diametral section) is an ellipse whose major and minor semiaxes have lengths equal to the two refractive indices for that orientation of the wavefront, and have the directions of the respective polarizations as expressed by the electric displacement vector . The principal semiaxes of the index ellipsoid are called the principal refractive indices.
It follows from the sectioning procedure that each principal semiaxis of the ellipsoid is generally not the refractive index for propagation in the direction of that semiaxis, but rather the refractive index for propagation perpendicular to that semiaxis, with the vector parallel to that semiaxis (and parallel to the wavefront). Thus the direction of propagation (normal to the wavefront) to which each principal refractive index applies is in the plane perpendicular to the associated principal semiaxis.
Terminology
The index ellipsoid is not to be confused with the index surface, whose radius vector (from the origin) in any direction is indeed the refractive index for propagation in that direction; for a birefringent medium, the index surface is the two-sheeted surface whose two radius vectors in any direction have lengths equal to the major and minor semiaxes of the diametral section of the index ellipsoid by a plane normal to that direction.
If we let $n_1$, $n_2$, $n_3$ denote the principal semiaxes of the index ellipsoid, and choose a Cartesian coordinate system in which these semiaxes are respectively in the $x$, $y$, and $z$ directions, the equation of the index ellipsoid is
$$\frac{x^2}{n_1^2} + \frac{y^2}{n_2^2} + \frac{z^2}{n_3^2} = 1. \qquad (1)$$
If the index ellipsoid is triaxial (meaning that its principal semiaxes are all unequal), there are two cutting planes for which the diametral section reduces to a circle. For wavefronts parallel to these planes, all polarizations are permitted and have the same refractive index, hence the same wave speed. The directions normal to these two planes—that is, the directions of a single wave speed for all polarizations—are called the binormal axes or optic axes, and the medium is therefore said to be biaxial. Thus, paradoxically, if the index ellipsoid of a medium is triaxial, the medium itself is called biaxial.
If two of the principal semiaxes of the index ellipsoid are equal (in which case their common length is called the ordinary index, and the third length the extraordinary index), the ellipsoid reduces to a spheroid (ellipsoid of revolution), and the two optic axes merge, so that the medium is said to be uniaxial. As the index ellipsoid reduces to a spheroid, the two-sheeted index surface constructed therefrom reduces to a sphere and a spheroid touching at opposite ends of their common axis, which is parallel to that of the index ellipsoid; but the principal axes of the spheroidal index ellipsoid and the spheroidal sheet of the index surface are interchanged. In the well-known case of calcite, for example, the index ellipsoid is an oblate spheroid, so that one sheet of the index surface is a sphere touching that oblate spheroid at the equator, while the other sheet of the index surface is a prolate spheroid touching the sphere at the poles, with an equatorial radius (extraordinary index) equal to the polar radius of the oblate spheroidal index ellipsoid.
If all three principal semi-axes of the index ellipsoid are equal, it reduces to a sphere: all diametral sections of the index ellipsoid are circular, whence all polarizations are permitted for all directions of propagation, with the same refractive index for all directions, and the index surface merges with the (spherical) index ellipsoid; in short, the medium is optically isotropic. Cubic crystals exhibit this property as well as amorphous transparent media such as glass and water.
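The sectioning rule lends itself to a direct numerical sketch (Python with NumPy; the calcite indices used are approximate textbook values, and the wave-normal directions are arbitrary examples):

```python
import numpy as np

def section_indices(n_principal, s):
    """Refractive indices for a wavefront with unit normal s: the semiaxes
    of the central section of the index ellipsoid by the plane normal to s."""
    s = np.asarray(s, float) / np.linalg.norm(s)
    # Orthonormal basis {e1, e2} of the plane perpendicular to s.
    a = np.array([1.0, 0.0, 0.0]) if abs(s[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(s, a)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(s, e1)
    # Quadratic form of the ellipsoid, restricted to the section plane.
    D = np.diag(1.0 / np.asarray(n_principal, float) ** 2)
    M = np.array([[e1 @ D @ e1, e1 @ D @ e2],
                  [e2 @ D @ e1, e2 @ D @ e2]])
    return 1.0 / np.sqrt(np.linalg.eigvalsh(M))   # the two section semiaxes

# Calcite (uniaxial, approximate): n_o ~ 1.658 along x and y, n_e ~ 1.486 along z.
n = [1.658, 1.658, 1.486]
print(section_indices(n, s=[0, 0, 1]))   # along the optic axis: ~[1.658, 1.658]
print(section_indices(n, s=[1, 0, 0]))   # perpendicular to it:  ~[1.658, 1.486]
```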
History
A surface analogous to the index ellipsoid can be defined for the wave speed (normal to the wavefront) instead of the refractive index. Let $r$ denote the length of the radius vector from the origin to a general point on the index ellipsoid. Then dividing equation (1) by $r^2$ gives
$$\frac{X^2}{n_1^2} + \frac{Y^2}{n_2^2} + \frac{Z^2}{n_3^2} = \frac{1}{r^2},$$
where $X$, $Y$, and $Z$ are the direction cosines of the radius vector. But $r$ is also the refractive index $n$ for a wavefront parallel to a diametral section of which the radius vector is a major or minor semiaxis. If that wavefront has speed $v_n$, we have $n = c/v_n$, where $c$ is the speed of light in vacuum. For the principal semiaxes of the index ellipsoid, for which $n$ takes the values $n_1$, $n_2$, $n_3$, let $v_n$ take the values $v_1$, $v_2$, $v_3$ respectively, so that $v_1 = c/n_1$, $v_2 = c/n_2$, and $v_3 = c/n_3$. Making these substitutions in the equation above and canceling the common factor $1/c^2$, we obtain
$$X^2 v_1^2 + Y^2 v_2^2 + Z^2 v_3^2 = v_n^2. \qquad (2)$$
This equation was derived by Augustin-Jean Fresnel in January 1822. If $v_n$ is taken as the length of the radius vector, equation (2) describes a surface with the property that the major and minor semiaxes of any diametral section have lengths equal to the wave-normal speeds of wavefronts parallel to that section, and have the directions of what Fresnel called the "vibrations" (which we now recognize as oscillations of $\mathbf{D}$).
Whereas the surface described by (1) is in index space (in which the coordinates are dimensionless numbers), the surface described by (2) is in velocity space (in which the coordinates have the units of velocity). Whereas the former surface is of the 2nd degree, the latter is of the 4th degree, as may be verified by redefining $(x,y,z)$ as coordinates in velocity space, with $X = x/v_n$, $Y = y/v_n$, $Z = z/v_n$ and $v_n^2 = x^2+y^2+z^2$, which turns equation (2) into $(x^2+y^2+z^2)^2 = v_1^2 x^2 + v_2^2 y^2 + v_3^2 z^2$; thus the latter surface is generally not an ellipsoid, but another sort of ovaloid. And as the index ellipsoid generates the index surface, so the surface (2), by the same process, generates what we call the normal-velocity surface. Hence the surface (2) might reasonably be called the "normal-velocity ovaloid". Fresnel, however, called it the surface of elasticity, because he derived it by supposing that light waves were transverse elastic waves, that the medium had three perpendicular directions in which a displacement of a molecule produced a restoring force in exactly the opposite direction, and that the restoring force due to a vector sum of displacements was the vector sum of the restoring forces due to the separate displacements.
Fresnel soon realized that the ellipsoid constructed on the same principal semi-axes as the surface of elasticity has the same relation to the ray velocities that the surface of elasticity has to the wave-normal velocities. Fresnel's ellipsoid is now called the ray ellipsoid. Thus, in modern terms, the ray ellipsoid generates the ray velocities as the index ellipsoid generates the refractive indices. The major and minor semiaxes of the diametral section of the ray ellipsoid are in the permitted directions of the electric field vector .
The term index surface was coined by James MacCullagh in 1837. In a previous paper, read in 1833, MacCullagh had called this surface the "surface of refraction" and shown that it is generated by the major and minor semiaxes of a diametral section of an ellipsoid which has principal semiaxes inversely proportional to those of Fresnel's ellipsoid, and which MacCullagh later called the "ellipsoid of indices". In 1891, Lazarus Fletcher called this ellipsoid the optical indicatrix.
Electromagnetic interpretation
Deriving the index ellipsoid and its generating property from electromagnetic theory is non-trivial. Given the index ellipsoid, however, we can easily relate its parameters to the electromagnetic properties of the medium.
The speed of light in vacuum is
$$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}},$$
where $\mu_0$ and $\varepsilon_0$ are respectively the magnetic permeability and the electric permittivity of the vacuum. For a transparent material medium, we can still reasonably assume that the magnetic permeability is $\mu_0$ (especially at optical frequencies), but $\varepsilon_0$ must be replaced by $\varepsilon_r \varepsilon_0$, where $\varepsilon_r$ is the relative permittivity (also called the dielectric constant), so that the wave speed becomes
$$v = \frac{1}{\sqrt{\mu_0\, \varepsilon_r \varepsilon_0}} = \frac{c}{\sqrt{\varepsilon_r}}.$$
Dividing $c$ by $v$, we obtain the refractive index:
$$n = \frac{c}{v} = \sqrt{\varepsilon_r}.$$
This derivation treats $\varepsilon_r$ as a scalar, which is valid in an isotropic medium. In an anisotropic medium, the result $n = \sqrt{\varepsilon_r}$ holds only for those combinations of propagation direction and polarization which avoid the anisotropy, that is, for those cases in which the electric displacement vector $\mathbf{D}$ is parallel to the electric field vector $\mathbf{E}$, as in an isotropic medium. In view of the symmetry of the index ellipsoid, these must be the cases in which $\mathbf{D}$ is in the direction of one of the principal axes. So, denoting the relative permittivities in the $x$, $y$, and $z$ directions by $\varepsilon_X$, $\varepsilon_Y$, $\varepsilon_Z$ (the so-called principal dielectric constants), and recalling that $n_1$, $n_2$, $n_3$ denote the refractive indices for these directions of $\mathbf{D}$, we must have
$$n_1 = \sqrt{\varepsilon_X}\,,\quad n_2 = \sqrt{\varepsilon_Y}\,,\quad n_3 = \sqrt{\varepsilon_Z}\,,$$
indicating that the semiaxes of the index ellipsoid are the square roots of the principal dielectric constants. Substituting these expressions into equation (1), we obtain the equation of the index ellipsoid in the alternative form
$$\frac{x^2}{\varepsilon_X} + \frac{y^2}{\varepsilon_Y} + \frac{z^2}{\varepsilon_Z} = 1,$$
which explains why it is also called the dielectric ellipsoid.
See also
Birefringence
Complex refractive index
Crystal optics
D-DIA
Mathematical descriptions of opacity
Notes
References
Bibliography
M. Born and E. Wolf, 2002, Principles of Optics, 7th Ed., Cambridge University Press, 1999 (reprinted with corrections, 2002), .
A. Fresnel (ed. H. de Senarmont, E. Verdet, and L. Fresnel), 1868, Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol. 2 (1868).
F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, .
L.D. Landau and E.M. Lifshitz (tr. J.B. Sykes & J.S. Bell), 1960, Electrodynamics of Continuous Media (vol. 8 of Course of Theoretical Physics), London: Pergamon Press.
A. Yariv and P. Yeh, 1984, Optical Waves in Crystals: Propagation and control of laser radiation, New York: Wiley, .
F. Zernike and J.E. Midwinter, 1973, Applied Nonlinear Optics, New York: Wiley (reprinted Mineola, NY: Dover, 2006).
Optics
Physical optics
Polarization (waves)
Surfaces
Optical mineralogy | Index ellipsoid | [
"Physics",
"Chemistry"
] | 2,348 | [
"Applied and interdisciplinary physics",
"Optics",
"Astrophysics",
" molecular",
"Atomic",
"Polarization (waves)",
" and optical physics"
] |
11,563,568 | https://en.wikipedia.org/wiki/Reactor-grade%20plutonium | Reactor-grade plutonium (RGPu) is the isotopic grade of plutonium that is found in spent nuclear fuel after the uranium-235 primary fuel that a nuclear power reactor uses has burnt up. The uranium-238 from which most of the plutonium isotopes derive by neutron capture is found along with the U-235 in the low enriched uranium fuel of civilian reactors.
In contrast to the low burnup of weeks or months that is commonly required to produce weapons-grade plutonium (WGPu/239Pu), the long time in the reactor that produces reactor-grade plutonium leads to transmutation of much of the fissile, relatively long half-life isotope 239Pu into a number of other isotopes of plutonium that are less fissile or more radioactive. When 239Pu absorbs a neutron, it does not always undergo nuclear fission. Sometimes neutron absorption will instead produce 240Pu at the neutron temperatures and fuel compositions present in typical light water reactors, with the concentration of 240Pu steadily rising with longer irradiation, producing lower and lower grade plutonium as time goes on.
Generation II thermal-neutron reactors (today's most numerous nuclear power stations) can reuse reactor-grade plutonium only to a limited degree as MOX fuel, and only for a second cycle. Fast-neutron reactors, of which there are a handful operating today with a half dozen under construction, can use reactor-grade plutonium fuel as a means to reduce the transuranium content of spent nuclear fuel/nuclear waste. Russia has also produced a new type of Remix fuel that directly recycles reactor grade plutonium at 1% or less concentration into fresh or re-enriched uranium fuel imitating the 1% plutonium level of high-burnup fuel.
Classification by isotopic composition
At the beginning of industrial-scale production of plutonium-239 in war-era production reactors, trace contamination or co-production with plutonium-240 was observed, and these trace amounts resulted in the Thin Man weapon design being dropped as unworkable. The difference in purity, that is, how much plutonium-240 is present, continues to be important in assessing significance in the context of nuclear proliferation and weapons-usability.
The DOE definition of reactor-grade plutonium changed in 1976. Before this, three grades were recognised. The change in the definition of reactor grade, from describing plutonium with greater than 7% Pu-240 content before 1976 to plutonium containing 19% or more Pu-240 afterwards, coincides with the 1977 release of information about a 1962 "reactor grade nuclear test". Which definition or designation, of the old or the new scheme, applies to the 1962 "reactor-grade" test has not been officially disclosed.
Super weapons grade, less than 3% Pu-240,
Weapons grade, less than 7% Pu-240 and
Reactor grade, 7% or more Pu-240.
From 1976, four grades were recognised (see the code sketch after this list):
Super weapons grade, less than 3% Pu-240
Weapons grade, less than 7% Pu-240,
Fuel grade, 7% to 19% Pu-240 and
Reactor grade, more than 19% Pu-240.
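A minimal sketch in code of the post-1976 scheme just listed (the function name is illustrative, and the handling of the exact 3% and 7% boundaries is an assumption, since the listed grades abut at those values):

```python
def plutonium_grade(pu240_percent):
    """Classify plutonium under the post-1976 US DOE scheme by Pu-240 content."""
    if pu240_percent < 3:
        return "super weapons grade"
    if pu240_percent < 7:
        return "weapons grade"
    if pu240_percent <= 19:   # "7% to 19%" is fuel grade
        return "fuel grade"
    return "reactor grade"    # more than 19% Pu-240

print(plutonium_grade(25.2))  # Pu-240 share of a typical 45 GWd/tU PWR discharge -> 'reactor grade'
```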
Reprocessing or recycling of the spent fuel from the most common class of civilian-electricity-generating or power reactor design, the LWR, (with examples being the PWR or BWR) recovers reactor grade plutonium (as defined since 1976), not fuel grade.
The physical mixture of isotopes in reactor-grade plutonium makes it extremely difficult to handle and form, which explains its undesirability as a weapon-making substance, in contrast to weapons-grade plutonium, which can be handled relatively safely with thick gloves.
To produce weapons-grade plutonium, the uranium nuclear fuel must spend no longer than several weeks in the reactor core before being removed, creating a low fuel burnup. For this to be carried out in a pressurized water reactor, the most common reactor design for electricity generation, the reactor would have to reach cold shutdown prematurely after only recently being fueled: it would need to dissipate decay heat, depressurize its reactor pressure vessel, and then have its fuel rods removed. If such an operation were conducted, it would be easily detectable and would require prohibitively costly reactor modifications.
One example of how this process could be detected in PWRs is that during these periods there would be a considerable amount of down time, that is, large stretches of time during which the reactor is not producing electricity for the grid. On the other hand, the modern definition of "reactor grade" plutonium is produced only when the reactor is run at high burnup and therefore at a high electricity-generating capacity factor. According to the US Energy Information Administration (EIA), in 2009 the capacity factor of US nuclear power stations was higher than that of all other forms of energy generation, with nuclear reactors producing power approximately 90.3% of the time and coal thermal power plants 63.8% of the time, with down times being for simple routine maintenance and refuelling.
The degree to which typical Generation II high-burnup reactor-grade plutonium is less useful than weapons-grade plutonium for building nuclear weapons is somewhat debated, with many sources arguing that the maximum probable theoretical yield would border on a fizzle explosion in the range of 0.1 to 2 kilotons in a Fat Man-type device. Computations indicate that the energy yield of a nuclear explosive decreases by one and two orders of magnitude if the Pu-240 content increases from 5% (nearly weapons-grade plutonium) to 15% (2 kt) and 25% (0.2 kt) respectively. These computations are theoretical and assume that the non-trivial issue of dealing with the heat generation from the higher content of non-weapons-usable Pu-238 could be overcome. As premature initiation from the spontaneous fission of Pu-240 would ensure a low explosive yield in such a device, surmounting both issues in the construction of an improvised nuclear device is described as presenting "daunting" hurdles for a Fat Man-era implosion design, and the possibility of terrorists achieving this fizzle yield is regarded as an "overblown" apprehension given the safeguards that are in place.
Others disagree on theoretical grounds and state that, while such weapons would not be suitable for stockpiling or for being emplaced on a missile for long periods of time, dependably high non-fizzle yields can be achieved. They argue that it would be "relatively easy" for a well-funded entity with access to fusion-boosting tritium and expertise to overcome the problem of pre-detonation created by the presence of Pu-240; that a remote-manipulation facility could be used to assemble the highly radioactive, gamma-ray-emitting bomb components; and that this could be coupled with a means of cooling the weapon pit during storage to prevent the plutonium charge in the pit from melting, together with a design that keeps the implosion mechanism's high explosives from being degraded by the pit's heat. However, with all these major design considerations included, such a fusion-boosted reactor-grade plutonium primary would still fizzle if the fission component of the primary did not deliver more than 0.2 kilotons of yield, which is regarded as the minimum energy necessary to start a fusion burn. The probability that a fission device fails to achieve this threshold yield increases as the burnup value of the fuel increases.
No information available in the public domain suggests that any well-funded entity has ever seriously pursued creating a nuclear weapon with an isotopic composition similar to modern, high-burnup, reactor-grade plutonium. All nuclear weapon states have taken the more conventional path to nuclear weapons, either by uranium enrichment or by producing low-burnup "fuel-grade" and weapons-grade plutonium in reactors capable of operating as production reactors; the isotopic content of reactor-grade plutonium created by the most common commercial power reactor design, the pressurized water reactor, has never been directly considered for weapons use.
As of April 2012, there were thirty-one countries that have civil nuclear power plants, of which nine have nuclear weapons, and almost every nuclear weapons state began producing weapons first instead of commercial nuclear power plants. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-proliferation treaty.
As nuclear reactor designs come in a wide variety and are sometimes improved over time, the isotopic ratio of what is deemed "reactor grade plutonium" in one design can differ substantially from that of another. For example, the British Magnox reactor, a Generation I gas-cooled reactor (GCR) design, rarely produces a fuel burnup of more than 2–5 GWd/tU. The "reactor grade plutonium" discharged from Magnox reactors therefore has a Pu-239 purity of approximately 80%, depending on the burnup value. In contrast, the generic civilian pressurized water reactor routinely reaches 45 GWd/tU of burnup (typical for a 2015 Generation II reactor), resulting in a Pu-239 purity of 50.5% alongside a Pu-240 content of 25.2%. The remaining portion includes much more of the heat-generating Pu-238 and Pu-241 isotopes than is found in the "reactor grade plutonium" from a Magnox reactor.
"Reactor-grade" plutonium nuclear tests
The reactor grade plutonium nuclear test was a "low-yield (under 20 kilotons)" underground nuclear test using non-weapons-grade plutonium conducted at the US Nevada Test Site in 1962. Some information regarding this test was declassified in July 1977, under instructions from President Jimmy Carter, as background to his decision to prohibit nuclear reprocessing in the US.
The plutonium used for the 1962 test device was produced by the United Kingdom, and provided to the US under the 1958 US-UK Mutual Defence Agreement.
The initial codename for the Magnox reactor design within the government agency that mandated it, the UKAEA, was the Pressurised Pile Producing Power and Plutonium (PIPPA), and as this codename suggests, the reactor was designed both as a power plant and, when operated with low fuel burn-up, as a producer of plutonium-239 for the nascent nuclear weapons program in Britain. This intentional dual-use approach to building electric power reactors that could operate as production reactors in the early Cold War era was typical of many nations' Generation I reactors, these designs all being focused on giving access to fuel after a short burn-up, a capability known as online refuelling.
The 2006 North Korean nuclear test, the first by the DPRK, is also said to have had a Magnox reactor, operated at the Yongbyon Nuclear Scientific Research Center in North Korea, as the root source of its plutonium. This test detonation resulted in a low-yield fizzle explosion, producing an estimated yield of approximately 0.48 kilotons from an undisclosed isotopic composition. The 2009 North Korean nuclear test was likewise based on plutonium. The two tests produced yields estimated at 0.48 and 2.3 kilotons of TNT equivalent respectively, and both were described as fizzle events due to their low yield, with some commentators even speculating whether, at the lower yield estimates for the 2006 test, the blast may have been the equivalent of US$100,000 worth of ammonium nitrate.
The isotopic composition of the 1962 US-UK test has similarly not been disclosed, other than the description "reactor grade", and it has not been disclosed which definition was used in describing the material as reactor grade. According to Alexander DeVolpi, the isotopic composition of the plutonium used in the US-UK 1962 test could not have been what we now consider to be reactor-grade, and the DOE now implies, but does not assert, that the plutonium was fuel grade. Likewise, the World Nuclear Association suggests that the US-UK 1962 test had at least 85% plutonium-239, a much higher isotopic concentration than what is typically present in the spent fuel from the majority of operating civilian reactors.
In 2002, former Deputy Director General of the IAEA Bruno Pellaud stated that the DOE statement was misleading and that the test material would meet the modern definition of fuel grade, with a Pu-240 content of only 12%.
In 1997, political analyst Matthew Bunn and presidential technology advisor John Holdren, both of the Belfer Center for Science and International Affairs, cited a 1990s official U.S. assessment of programmatic alternatives for plutonium disposition. While it does not specify which RGPu definition is being referred to, it nonetheless states that "reactor-grade plutonium (with an unspecified isotopic composition) can be used to produce nuclear weapons at all levels of technical sophistication," and that "advanced nuclear weapon states such as the United States and Russia, using modern designs, could produce weapons from 'reactor-grade plutonium' having reliable explosive yields, weight, and other characteristics generally comparable to those of weapons made from weapon-grade plutonium".
In a 2008 paper, Kessler et al. used a thermal analysis to conclude that a hypothetical nuclear explosive device was "technically unfeasible" using reactor grade plutonium from a reactor that had a burn up value of 30 GWd/t using "low technology" designs akin to Fat Man with spherical explosive lenses, or 55 GWd/t for "medium technology" designs.
According to the Kessler et al. criteria, "high-technology" hypothetical nuclear explosive devices (HNEDs), that could be produced by the experienced nuclear weapons states (NWSs) would be technically unfeasible with reactor-grade plutonium containing more than approximately 9% of the heat generating Pu-238 isotope.
Typical isotopic composition of reactor grade plutonium
The British Magnox reactor, a Generation I gas-cooled reactor (GCR) design, can rarely produce a fuel burnup of more than 2–5 GWd/tU. The Magnox reactor design was codenamed PIPPA (Pressurised Pile Producing Power and Plutonium) by the UKAEA to denote the plant's dual commercial (power reactor) and military (production reactor) role. The purity of Pu-239 from discharged Magnox reactors is approximately 80%, depending on the burnup value.
In contrast, for example, a generic civilian pressurized water reactor's spent nuclear fuel, following a typical Generation II burnup of 45 GWd/tU, is 1.11% plutonium by isotopic composition, of which 0.56% (of the total fuel) is Pu-239 and 0.28% is Pu-240; this corresponds to a Pu-239 content of 50.5% and a Pu-240 content of 25.2% of the plutonium. For a lower generic burnup of 43,000 MWd/t, as published in 1989, the plutonium-239 content was 53% of all plutonium isotopes in the reactor spent nuclear fuel.
The US NRC has stated that the commercial fleet of LWRs presently powering homes, had an average burnup of approximately 35 GWd/MTU in 1995, while in 2015, the average had improved to 45 GWd/MTU.
The odd numbered fissile plutonium isotopes present in spent nuclear fuel, such as Pu-239, decrease significantly as a percentage of the total composition of all plutonium isotopes (which was 1.11% in the first example above) as higher and higher burnups take place, while the even numbered non-fissile plutonium isotopes (e.g. Pu-238, Pu-240 and Pu-242) increasingly accumulate in the fuel over time.
As power reactor technology develops, one goal is to reduce the spent nuclear fuel volume by increasing fuel efficiency and simultaneously reducing down times as much as possible to increase the economic viability of electricity generated from fission-electric stations. To this end, the reactors in the U.S. have doubled their average burn-up rates from 20 to 25 GWd/MTU in the 1970s to over 45 GWd/MTU in the 2000s. Generation III reactors under construction have a designed-for burnup rate in the 60 GWd/tU range and a need to refuel once every 2 years or so. For example, the European Pressurized Reactor has a designed-for 65 GWd/t, and the AP1000 has a designed for average discharge burnup of 52.8 GWd/t and a maximum of 59.5 GWd/t. In-design generation IV reactors will have burnup rates yet higher still.
Reuse in reactors
Today's moderated/thermal reactors primarily run on the once-through fuel cycle, though they can reuse once-through reactor-grade plutonium to a limited degree in the form of mixed-oxide or MOX fuel; this is a routine commercial practice in most countries outside the US, as it increases the sustainability of nuclear fission and lowers the volume of high-level nuclear waste.
One third of the energy/fissions at the end of the practical fuel life in a thermal reactor come from plutonium. The end of cycle occurs when the percentage of U-235, the primary fuel that drives the neutron economy inside the reactor, drops enough that fresh fuel is required. Without design change, one third of the fissile material in a new fuel load can then be fissile reactor-grade plutonium, with one third less low-enriched uranium needing to be added to continue the chain reaction anew, thus achieving a partial recycling.
A typical 5.3% reactor-grade plutonium MOX fuel bundle, when it is itself burnt again, a practice that is typical in French thermal reactors, is transmuted into twice-through reactor-grade plutonium, with an isotopic composition of 40.8% Pu-239 and 30.6% Pu-240 at the end of cycle (EOC). MOX-grade plutonium (MGPu) is generally defined as having more than 30% Pu-240.
A limitation in the number of recycles exists within thermal reactors, as opposed to the situation in fast reactors, as in the thermal neutron spectrum only the odd-mass isotopes of plutonium are fissile, the even-mass isotopes thus accumulate, in all high thermal-spectrum burnup scenarios. Plutonium-240, an even-mass isotope is, within the thermal neutron spectrum, a fertile material like uranium-238, becoming fissile plutonium-241 on neutron capture; however, the even-mass plutonium-242 not only has a low neutron capture cross section within the thermal spectrum, it also requires 3 neutron captures before becoming a fissile nuclide.
While most thermal neutron reactors must limit MOX fuel to less than half of the total fuel load for nuclear stability reasons, due to the reactor design operating within the limitations of a thermal spectrum of neutrons, fast neutron reactors can use plutonium of any isotopic composition, operate on completely recycled plutonium and, in the fast "burner" mode or fuel cycle, fission and thereby eliminate all the plutonium present in the world stockpile of once-through spent fuel. The modernized IFR design, known as the S-PRISM concept, and the Stable Salt Reactor concept are two such fast reactors proposed to burn up and thereby eliminate the plutonium stockpile in Britain, which was produced by operating its fleet of Magnox reactors and is the largest civilian stockpile of fuel-grade/"reactor-grade" plutonium in the world.
In Bathke's equation for the "attractiveness level" of weapons-grade nuclear material, the figure of merit (FOM) that the calculation generates suggests that sodium fast breeder reactors are unlikely to reach the desired level of proliferation resistance, while molten salt breeder reactors are more likely to do so.
In the fast breeder reactor cycle, or fast breeder mode, as opposed to the fast-burner mode, the French Phénix reactor uniquely demonstrated multi-recycling and reuse of its reactor-grade plutonium. Similar reactor concepts and fuel cycles, the most well known being the Integral Fast Reactor, are regarded as among the few that can realistically achieve "planetary scale sustainability", powering a world of 10 billion while still retaining a small environmental footprint. In breeder mode, fast reactors are therefore often proposed as a form of renewable or sustainable nuclear energy, though the "[reactor-grade] plutonium economy" they would generate presently meets with social distaste and varied arguments about proliferation potential in the public mindset.
As is typically found in civilian European thermal reactors, a 5.3% plutonium MOX fuel bundle, produced by conventional wet-chemical/PUREX reprocessing of an initial fuel assembly that generated 33 GWd/t before becoming spent nuclear fuel, creates, when it is itself burnt in the thermal reactor, a spent nuclear fuel with a plutonium isotopic composition of 40.8% Pu-239 and 30.6% Pu-240.
Computations state that the energy yield of a nuclear explosive decreases by two orders of magnitude if the Pu-240 content increases to 25% (0.2 kt).
Reprocessing, which mainly takes the form of recycling reactor-grade plutonium back into the same or a more advanced fleet of reactors, was planned in the US in the 1960s. At that time the uranium market was anticipated to become crowded and supplies tight, so, together with fuel recycling, the more efficient fast breeder reactors were seen as immediately needed to use the limited known uranium supplies efficiently. This became less urgent as time passed, with both reduced demand forecasts and increased uranium ore discoveries; for these economic reasons, fresh fuel, and reliance solely on fresh fuel, remained cheaper in commercial terms than recycled fuel.
In 1977 the Carter administration placed a ban on reprocessing spent fuel, in an effort to set an international example, as within the US there is the perception that it would lead to nuclear weapons proliferation. This decision has remained controversial and is viewed by many US physicists and engineers as fundamentally in error, having cost the US taxpayer, and the fund generated by US reactor utility operators, cancelled programs and an over 1 billion dollar investment in the proposed alternative, the Yucca Mountain nuclear waste repository, which ended in protests, lawsuits and repeated stop-and-go decisions depending on the opinions of incoming presidents.
As the "undesirable" contaminant from a weapons manufacturing viewpoint, , decays faster than the , with half lives of 6500 and 24,000 years respectively, the quality of the plutonium grade, increases with time (although its total quantity decreases during that time as well). Thus, physicists and engineers have pointed out, as hundreds/thousands of years pass, the alternative to fast reactor "burning" or recycling of the plutonium from the world fleet of reactors until it is all burnt up, the alternative to burning most frequently proposed, that of deep geological repository, such as Onkalo spent nuclear fuel repository, have the potential to become "plutonium mines", from which weapons-grade material for nuclear weapons could be acquired by simple PUREX extraction, in the centuries-to-millennia to come.
Nuclear terrorism target
Aum Shinrikyo, which succeeded in developing sarin and VX nerve agents, is regarded as having lacked the technical expertise to develop, or steal, a nuclear weapon. Similarly, Al Qaeda was exposed to numerous scams involving the sale of radiological waste and other non-weapons-grade material. The RAND Corporation suggested that this repeated experience of failure and of being scammed has possibly led terrorists to conclude that nuclear acquisition is too difficult and too costly to be worth pursuing.
See also
Uranium hydride bombs - produced a yield of about 0.2 kiloton
References
External links
Reactor-Grade Plutonium Can be Used to Make Powerful and Reliable Nuclear Weapons, FAS, Richard Garwin, CFR, Congressional testimony, 1998
Reactor-Grade and Weapons-Grade Plutonium in Nuclear Explosives, Canadian Coalition for Nuclear Responsibility
Nuclear weapons and power-reactor plutonium, Amory B. Lovins, February 28, 1980, Nature, Vol. 283, No. 5750, pp. 817–823
Additional Information Concerning Underground Nuclear Weapon Test of Reactor-Grade Plutonium
Why You Can’t Build a Bomb From Spent Fuel
Plutonium Isotopics - Non-Proliferation And Safeguards Issues
Plutonium as an Energy Source by Arjun Makhijani, Institute for Energy and Environmental Research
Nuclear weapons
Nuclear materials
Plutonium
American nuclear weapons testing | Reactor-grade plutonium | [
"Physics"
] | 5,076 | [
"Materials",
"Nuclear materials",
"Matter"
] |
11,564,906 | https://en.wikipedia.org/wiki/Luopan | The luopan or geomantic compass is a Chinese magnetic compass, also known as a feng shui compass. It is used by a feng shui practitioner to determine the precise direction of a structure, place or item. Luo Pan contains a lot of information and formulas regarding its functions. The needle points towards the south magnetic pole.
Form and function
Like a conventional compass, a luopan is a direction finder. However, a luopan differs from a compass in several important ways. The most obvious difference is the feng shui formulas embedded in up to 40 concentric rings on the surface. The rings are carried on a metal or wooden plate known as the heaven dial, which typically sits on a wooden base known as the earth plate. The heaven dial rotates freely on the earth plate.
A red wire or thread that crosses the earth plate and heaven dial at 90-degree angles is the Heaven Center Cross Line, or Red Cross Grid Line. This line is used to find the direction and note position on the rings.
A conventional compass has markings for four or eight directions, while a luopan typically contains markings for 24 directions. This translates to 15 degrees per direction. The Sun takes approximately 15.2 days to traverse a solar term, a series of 24 points on the ecliptic. Since there are 360 degrees on the luopan and approximately 365.25 days in a mean solar year, each degree on a luopan approximates a terrestrial day.
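The 15-degree spacing can be made concrete in a few lines of code. A minimal sketch (the sector names are omitted, and the choice of starting the first sector at 0 degrees is an assumption; luopan rings are commonly described as being centered on the cardinal points instead):

```python
def sector_24(bearing_deg):
    """Index (0-23) of the 15-degree sector containing a magnetic bearing.
    Assumes sector 0 begins at 0 degrees."""
    return int(bearing_deg % 360 // 15)

print(sector_24(7.0))    # 0: first of the 24 directions
print(sector_24(188.5))  # 12: a direction in the southern half
```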
Unlike a typical compass, a luopan does not point to the north magnetic pole of Earth. The needle of a luopan points to the south magnetic pole (it does not point to the geographic South Pole). The Chinese word for compass, 指南針 (zhǐnánzhēn in Mandarin), translates to “south-pointing needle.”
Types
Since the Ming and Qing dynasties, three types of luopan have been popular. They have some formula rings in common, such as the 24 directions and the early and later heaven arrangements.
San He
This luopan was said to have been used in the Tang dynasty. The San He contains three basic 24-direction rings. Each ring relates to a different method and formula. (The techniques grouped under the name "Three Harmonies" are the San He methods.)
San Yuan
This luopan, also known as the jiang pan (after Jiang Da Hong) or the Yi Pan (because of the presence of Yijing hexagrams), incorporates many formulas used in San Yuan (Three Cycles). It contains one 24-direction ring, known as the Earth Plate Correct Needle, the ring for the 64 hexagrams, and others. (The techniques grouped under the name "Flying Stars" are an example of San Yuan methods.)
Zong He
This luopan combines rings from the San He and San Yuan. It contains three 24-direction rings and the 64 hexagrams ring.
Other types
Each feng shui master may design a luopan to suit preference and to offer students. Some designs incorporate the bagua (trigram) numbers, directions from the Eight Mansions methods, and English equivalents.
History and development
The luopan is an image of the cosmos (a world model) based on tortoise plastrons used in divination. At its most basic level it serves as a means to assign proper positions in time and space, like the Ming Tang (Hall of Light). The markings are similar to those on a liubo board.
The oldest precursors of the luopan are the shi or shipan, meaning astrolabe or diviner's board—also sometimes called liuren astrolabes—unearthed from tombs that date between 278 BCE and 209 BCE. These astrolabes consist of a lacquered, two-sided board with astronomical sightlines. Along with divination for Da Liu Ren, the boards were commonly used to chart the motion of Taiyi through the nine palaces. The markings are virtually unchanged from the shi to the first magnetic compasses. The schematic of earth plate, heaven plate, and grid lines is part of the "two cords and four hooks" geometrical diagram in use since at least the Warring States period. The zhinan zhen, or south-pointing needle, is the original magnetic compass, and was developed for feng shui. It featured the two cords and four hooks diagram, direction markers, and a magnetized spoon in the center.
See also
Automatic writing
Chu Silk Manuscript
Dowsing
Geomancy
References
Bibliography
Further reading
An account of the various types of luo pan, and details of 75 separate rings.
The Lowdown on the Luo pan - Feng Shui for Modern Living Magazine
Orientation (geometry)
Chinese inventions
Magnetic devices
Geomancy
Feng Shui | Luopan | [
"Physics",
"Mathematics"
] | 982 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
11,568,343 | https://en.wikipedia.org/wiki/Shutdown%20valve | A shutdown valve (also referred to as SDV or emergency shutdown valve, ESV, ESD, or ESDV; or safety shutoff valve) is an actuated valve designed to stop the flow of a hazardous fluid upon the detection of a dangerous event. This provides protection against possible harm to people, equipment or the environment.
Shutdown valves form part of a safety instrumented system. The process of providing automated safety protection upon the detection of a hazardous event is called functional safety.
Shutdown valves are primarily associated with the petroleum industry, although other industries may also require this type of protection system. ESD valves are required by law on any equipment placed on an offshore drilling rig to prevent catastrophic events like the Deepwater Horizon explosion in the Gulf of Mexico in 2010.
A safety shutoff valve should be fail-safe, that is, it should close upon failure of any element of the input control system (such as temperature controllers or steam pressure controllers), of air pressure, of fuel pressure, of current from a flame detector, or of current from other safety devices such as a low water cutoff or a high pressure cutoff.
A blowdown valve (BDV) is a type of shutdown valve designed to depressurize a pressure vessel by directing vapour to a flare, vent or blowdown stack in an emergency. BDVs fail-safe to the open position upon failure of the control system. The type of valve, type of actuation and performance measurement are similar to an ESD valve.
Types
For fluids, metal-seated ball valves are used as shutdown valves (SDVs). Metal-seated ball valves lead to lower overall costs once lost production, lost inventory and valve repair costs are taken into account, even though soft-seated ball valves have a lower initial cost.
Straight-through flow valves, such as rotary-shaft ball valves, are typically high-recovery valves. High-recovery valves lose little energy because they generate little flow turbulence; their flow paths are straight through. Rotary control valves, butterfly valves and ball valves are good examples.
For air intake shutdown, two distinct types are commonly utilized: butterfly valves and swing gate or guillotine valves. Because diesel engines ignite fuel using compression instead of an electronic ignition, shutting off the fuel source to a diesel engine will not necessarily stop the engine from running. When an external hydrocarbon, such as methane gas, is present in the atmosphere, it can be sucked into a diesel engine, causing overspeed or over-revving and potentially leading to catastrophic failure and explosion. When actuated, ESD valves stop the flow of air and prevent these failures.
Actuation
As shutdown valves form part of a SIS, it is necessary to operate the valve by means of an actuator. These actuators are normally of the fail-safe, fluid-power type. Typical examples are:
Pneumatic cylinder
Hydraulic cylinder
Electro-hydraulic actuator
In addition to the fluid type, actuators also vary in the manner in which the energy is stored to operate the valve on demand as follows:
single-acting cylinder, or spring return, where the energy is stored by means of a compressed spring
double-acting cylinder, where the energy is stored using a volume of compressed fluid
The type of actuation required depends upon the application, site facilities and also the physical space available although the majority of actuators used for shutdown valves are of the spring return type due to the fail safe nature of spring return systems.
In a solenoid-operated safety shutoff valve, a spring action closes the valve instantly when an electric current fails and the solenoid ceases to be energized. The solenoid circuit is generally arranged so that it is broken upon failure of any element of the system. This valve cannot be re-opened until the solenoid is again energized. The coil of the valve solenoid must be connected in series with all of the elements. For this to operate, fuel, air and steam pressure can be converted to electrical signals by means of bellows, a bourdon tube, or a diaphragm-operated mercury switch.
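The series-circuit logic just described can be sketched in a few lines of code. This is an illustrative model only (the permissive names are invented), not a real control-system implementation:

```python
def solenoid_energized(permissives):
    """Series interlock: the solenoid stays energized only while every
    element in the chain is healthy. Any single failure, or loss of the
    supply itself, de-energizes the coil and the spring closes the valve,
    which is the fail-safe state."""
    return all(permissives.values())

permissives = {
    "flame_detector": True,       # current from the flame detector
    "low_water_cutoff": True,     # safety-device contacts wired in series
    "high_pressure_cutoff": True,
    "fuel_pressure_ok": True,
}
print(solenoid_energized(permissives))   # True: valve held open
permissives["flame_detector"] = False    # any one element fails...
print(solenoid_energized(permissives))   # False: valve trips closed
```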
Complications
However, sudden closing of a valve in a piping system may lead to water hammer or implosion, so in special cases additional items may be connected to the shutoff valve, such as a pressure relief valve or an aerator valve.
Performance measurement
For shutdown valves used in safety instrumented systems, it is essential to know that the valve is capable of providing the required level of safety performance and that the valve will operate on demand. The required level of performance is dictated by the Safety Requirements Specification, in which a Safety Integrity Level (SIL) is eventually indicated. In order to maintain the required level of performance during the valve's lifetime, the prescriptions of the Safety Maintenance Manual must be fulfilled; one possible requirement is to test the valve. Two types of testing methods are:
Proof test - a manual test that allows the operator to determine whether the valve is in the "as good as new" condition by testing for all possible failure modes; it requires a plant shutdown.
Diagnostic test - an automated on-line test that will detect a percentage of the possible failure modes of the shutdown valve; an example for a shutdown valve is a partial stroke test.
The performance standard of ESDVs may include the specification and testing of a closure time (e.g. to close in less than 10 seconds) and the specification and measurement of an acceptable leakage rate of fluid through the closed valve.
See also
Pipeline transport
Piping
Unit operation
External links
Standard EN161:2002
References
Valves
Safety engineering | Shutdown valve | [
"Physics",
"Chemistry",
"Engineering"
] | 1,164 | [
"Systems engineering",
"Safety engineering",
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
11,569,191 | https://en.wikipedia.org/wiki/Barcelona%20Convention | The Convention for the Protection of the Marine Environment and the Coastal Region of the Mediterranean, originally the Convention for Protection of the Mediterranean Sea against Pollution, and often simply referred to as the Barcelona Convention, is a regional convention adopted in 1976 to prevent and abate pollution from ships, aircraft and land based sources in the Mediterranean Sea. This includes but is not limited to dumping, run-off and discharges. Signers agreed to cooperate and assist in dealing with pollution emergencies, monitoring and scientific research. The convention was adopted on 16 February 1976 and amended on 10 June 1995.
The Barcelona Convention and its protocols form the legal framework of the Mediterranean Action Plan (approved in 1975), developed under the United Nations Environment Programme (UNEP) Regional Seas Programme.
Goals
The key goal of the convention is to "reduce pollution in the Mediterranean Sea and protect and improve the marine environment in the area, thereby contributing to its sustainable development". To achieve this a number of aims and commitments have been established.
Aims
To prevent, reduce, combat and, as far as possible, eliminate pollution in the Zone of the Mediterranean Sea.
To attain the objective of sustainable development, taking fully into account the recommendations of the Mediterranean Commission on Sustainable Development (MCSD), the advisory body established under Article 4 of the Convention.
To protect the environment and to contribute to sustainable development:
By applying the precautionary principle and the principle that the polluter should pay
By performing Environmental Impact Assessments (EIA)
By promoting cooperation between coastal States in EIA procedures.
To promote the integrated management of coastal zones, taking into account the protection of zones of ecological and landscape interest and the rational use of natural resources.
To apply the Convention and its Protocols:
By adopting programmes and measures with defined deadlines for completion.
By using the best techniques available and the best environmental practices.
To formulate and adopt Protocols that prescribe agreed measures, procedures and regulations to apply the convention.
To promote, within the relevant international bodies, measures relating to the application of sustainable development programmes and environmental protection, conservation and rehabilitation and the natural resources of the Mediterranean Sea.
Commitments
Members agreed to take specific measures:
against pollution due to dumping from ships and air planes
against pollution due to discharges from ships,
against pollution caused by prospection for, and exploitation of, the continental shelf, the seabed and its subsoil,
against land-based pollution,
to cooperate in pollution incidents giving rise to situations of emergency,
to protect biological diversity,
against pollution due to transboundary movements of dangerous wastes and to eliminate them,
to monitor pollution,
to cooperate in science and technology
to apply environmental legislation, and
to facilitate public access to information and public participation.
Status
Originally, fourteen states and the European Communities signed the Convention adopted in 1976. It came into effect on 12 February 1978. The amendments adopted in 1995 have yet to be ratified by Bosnia and Herzegovina. Parties are all countries with a Mediterranean shoreline as well as the European Union. NGOs with a stated interest and third-party governments are allowed observer status.
The convention is applicable to the 'Zone of the Mediterranean Sea'. This is defined as 'the maritime waters of the Mediterranean as such, with all its gulfs and tributary seas, bounded to the west by the Strait of Gibraltar and to the east by the Dardanelle Strait'. Parties are allowed to extend the application of the convention to the coastal areas within their own territory.
See also
Mediterranean Sea
Law of the sea
Specially Protected Areas of Mediterranean Importance
Protected areas
London Convention
References
External links
UNEP Regional Seas Programme – Barcelona Convention
UNEP Mediterranean Action Plan for the Barcelona Convention
UNEP – Governing Instruments of the Mediterranean
EU – Barcelona Convention for the Protection of the Mediterranean, acts and protocols, at EUR-Lex: Access to European Union Law
Environmental treaties
Environment of the Mediterranean
Ocean pollution
Treaties concluded in 1976
Treaties entered into force in 1978
1978 in the environment
1976 in Spain
Treaties of Spain
Treaties of France
Treaties of Italy
Treaties of Yugoslavia
Treaties of Greece
Treaties of Turkey
Treaties of Cyprus
Treaties of Israel
Treaties of Egypt
Treaties of the Libyan Arab Jamahiriya
Treaties of Tunisia
Treaties of Algeria
Treaties of Morocco
Treaties of Albania
Treaties of Serbia and Montenegro
Treaties of Montenegro
Treaties of Bosnia and Herzegovina
Treaties of Slovenia
Treaties of Croatia
Treaties entered into by the European Union | Barcelona Convention | [
"Chemistry",
"Environmental_science"
] | 858 | [
"Ocean pollution",
"Water pollution"
] |
11,569,714 | https://en.wikipedia.org/wiki/Blue%20sky%20catastrophe | The blue sky catastrophe is a form of orbital indeterminacy, and an element of bifurcation theory.
Orbital dynamics
Blue sky catastrophe is a type of bifurcation of a periodic orbit. In other words, it describes a sort of behaviour that stable solutions of a set of differential equations can undergo as the equations are gradually changed. This type of bifurcation is characterised by both the period and the length of the orbit approaching infinity as the control parameter approaches a finite bifurcation value, with the orbit still remaining within a bounded part of the phase space and without loss of stability before the bifurcation point. In other words, the orbit vanishes into the blue sky.
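In symbols (a standard formalization, not taken verbatim from a cited source): if $L_\mu$ denotes the periodic orbit at parameter value $\mu$, with period $T(\mu)$ and arc length $\ell(\mu)$, then a blue sky catastrophe at $\mu = \mu_0$ is characterized by
$$\lim_{\mu \to \mu_0} T(\mu) = \infty, \qquad \lim_{\mu \to \mu_0} \ell(\mu) = \infty,$$
while $L_\mu$ remains stable and confined to a bounded region of phase space for all $\mu$ before $\mu_0$.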
Applications of blue sky catastrophe in other fields
The bifurcation has found application in, amongst other places, slow-fast models of computational neuroscience. The possibility of the phenomenon was raised by David Ruelle and Floris Takens in 1971, and explored by R.L. Devaney and others in the following decade. More compelling analysis was not performed until the 1990s.
This bifurcation has also been found in the context of fluid dynamics, namely in double-diffusive convection of a small Prandtl number fluid. Double diffusive convection occurs when convection of the fluid is driven by both thermal and concentration gradients, and the temperature and concentration diffusivities take different values. The bifurcation is found in an orbit that is born in a global saddle-loop bifurcation, becomes chaotic in a period doubling cascade, and disappears in the blue sky catastrophe.
References
Blue Sky Catastrophe article in Scholarpedia
E. Meca et al. Phys. Rev. Lett. 92, 234501 (2004) - Blue Sky Catastrophe in fluid dynamics.
Further reading
External links
Andrey Shilnikov - studies the blue sky catastrophe and other topics in dynamical neuroscience.
Bifurcation theory
Celestial mechanics | Blue sky catastrophe | [
"Physics",
"Mathematics"
] | 410 | [
"Bifurcation theory",
"Classical mechanics",
"Astrophysics",
"Celestial mechanics",
"Dynamical systems"
] |
11,570,222 | https://en.wikipedia.org/wiki/Client%E2%80%93queue%E2%80%93client | A client–queue–client or passive queue system is a client–server computer network in which the server is a data queue for the clients. Instead of communicating with each other directly, clients exchange data with one another by storing it in a repository (the queue) on a server.
Like peer-to-peer, the client–queue–client system empowers hosts on the network to serve data to other hosts.
Example
Web crawlers on different hosts need to query each other to synchronize their indexed URIs. Whereas one approach is to program each crawler to receive and respond to such queries, the client–queue–client approach is to store the indexed content from both crawlers in a passive queue, such as a relational database, on another host. Both web crawlers read and write to the database, but never communicate with each other.
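A minimal sketch of the pattern using Python's standard library, with SQLite standing in for the passive queue (the table and column names are illustrative assumptions): each client only reads and writes rows, and the clients never address one another.

```python
import sqlite3

# The "server" is only a passive store; an in-memory SQLite database stands in
# for a repository that would normally live on a separate host.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE queue (producer TEXT, uri TEXT)")

def publish(producer, uri):
    """A client deposits data into the queue; it never contacts other clients."""
    db.execute("INSERT INTO queue VALUES (?, ?)", (producer, uri))

def consume(consumer):
    """A client reads everything deposited by the other clients."""
    rows = db.execute("SELECT uri FROM queue WHERE producer != ?", (consumer,))
    return [uri for (uri,) in rows.fetchall()]

publish("crawler_a", "https://example.org/page1")
publish("crawler_b", "https://example.org/page2")
print(consume("crawler_a"))   # ['https://example.org/page2']
```

In the crawler example above, the database would live on a third host and each crawler would connect to it over the network.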
See also
Centralized computing
Decentralized computing
Friend-to-friend
Ontology (information science)
References
Network architecture | Client–queue–client | [
"Engineering"
] | 199 | [
"Network architecture",
"Computer networks engineering"
] |
11,570,732 | https://en.wikipedia.org/wiki/IRF3 | Interferon regulatory factor 3, also known as IRF3, is an interferon regulatory factor.
Function
IRF3 is a member of the interferon regulatory transcription factor (IRF) family. IRF3 was originally discovered as a homolog of IRF1 and IRF2. IRF3 has been further characterized and shown to contain several functional domains including a nuclear export signal, a DNA-binding domain, a C-terminal IRF association domain and several regulatory phosphorylation sites. IRF3 is found in an inactive cytoplasmic form that upon serine/threonine phosphorylation forms a complex with CREBBP. The complex translocates into the nucleus for the transcriptional activation of interferons alpha and beta, and further interferon-induced genes.
IRF3 plays an important role in the innate immune system's response to viral infection. Aggregated MAVS has been found to activate IRF3 dimerization. A 2015 study showed that phosphorylation of the innate immune adaptor proteins MAVS, STING and TRIF at a conserved pLxIS motif recruits IRF3 and specifies its phosphorylation and activation by the serine/threonine-protein kinase TBK1, thereby activating the production of type-I interferons. Another study showed that IRF3-/- knockouts are protected from myocardial infarction. The same study identified IRF3 and the type I IFN response as a potential therapeutic target for post-myocardial infarction cardioprotection.
Interactions
IRF3 has been shown to interact with IRF7.
References
Further reading
External links
Transcription factors | IRF3 | [
"Chemistry",
"Biology"
] | 357 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,572,876 | https://en.wikipedia.org/wiki/Kuwajima%20Taxol%20total%20synthesis | The Kuwajima Taxol total synthesis by the group of Isao Kuwajima of the Tokyo Institute of Technology is one of several efforts in taxol total synthesis published in the 1990s. The total synthesis of Taxol is considered a landmark in organic synthesis.
This synthesis is truly synthetic without any help from small biomolecule precursors and also a linear synthesis with molecule ring construction in the order of A, B, C, D. At some point chirality is locked into the molecule via an asymmetric synthesis step which is unique compared to the other efforts. In common with the other efforts the tail addition is based on the Ojima lactam.
The 20 carbon frame is constructed from several pieces: propargyl alcohol (C1, C2, C14), propionaldehyde (C13, C12, C18), isobutyric acid (C15, C16, C17, C11), Trimethyl(phenylthiomethyl)silane (C10), 2-bromobenzaldehyde (C3 to C9), diethylaluminum cyanide (C19) and trimethylsilylmethyl bromide (C20)
Synthesis A ring
Ring A synthesis (scheme 1) started by joining the THP-protected propargyl alcohol 1.1 (the C2-C1-C14 fragment) and propionaldehyde 1.2 (fragment C13-C12-C18) in a nucleophilic addition with n-butyllithium to alcohol 1.3. The Lindlar catalyst then reduced the alkyne to the alkene in 1.4 and Swern oxidation converted the alcohol group to the enone group in 1.5. Fragment C11-C15-C16-C17 1.6 was then added as the lithium enolate of isobutyric acid ethyl ester in a conjugate addition to gamma keto ester 1.7. A Claisen condensation closed the ring to 1.8 and the intermediate enol was captured by pivaloyl chloride (piv) as a protective group. The THP group was removed with TsOH to 1.9 and the resulting alcohol oxidized by Swern oxidation to aldehyde 1.10. The TIPS silyl enol ether 1.11 was formed by reaction with the triflate TIPSOTf and DBU in DMAP, setting the stage for asymmetric dihydroxylation to hydroxyaldehyde 1.12. The piv protecting group was then replaced by a TIPS group in 1.14 after protecting the aldehyde as the aminal 1.13, and as this group is automatically lost on column chromatography, the step was repeated to aminal 1.15. The C10 fragment was then introduced by the lithium salt of trimethyl(phenylthiomethyl)silane 1.16 in a Peterson olefination to the sulfide 1.17, followed by deprotection to the completed ring A 1.18. The A ring is now complete, with the aldehyde group and the sulfide group in place for anchoring to ring C in the formation of ring B.
Synthesis B ring
The bottom part of ring B was constructed by nucleophilic addition to the aldehyde of 2.1 (scheme 2) with dibenzyl acetal of 2-bromobenzaldehyde 2.2 as its aryllithium. This step is much in common with the B ring synthesis in the Nicolaou Taxol total synthesis except that the aldehyde group is located at ring A and not ring B. The diol in 2.3 was protected as the boronic ester 2.4 preparing the molecule for upper part ring closure with tin tetrachloride to tricycle 2.5 in a Grob fragmentation-like reaction.
After deprotection (pinacol) to diol 2.6, DIBAL reduction to triol 2.7 and TBS reprotection (TBSOtf, lutidine) to alcohol 2.8 it was possible to remove the phenylsulfide group in with a tributyltin hydride and AIBN(see Barton-McCombie deoxygenation) to alcohol 2.9. Palladium on carbon hydrogenation removed the benzyl protecting group allowing the Swern oxidation of 2.10 to ketone 2.11
Synthesis C ring
Completion of the C ring required complete reduction of the arene, placement of para oxygen atoms and importantly introduction of the C19 methyl group. The first assault on the aromatic ring in 3.1 (scheme 3) was launched with Birch reduction (potassium, ammonia, tetrahydrofuran, -78 °C, then ethanol) to diene 3.2. Deprotection (TBAF) to diol 3.3, reprotection as the benzaldehyde acetal 3.4 and reduction (sodium borohydride) to alcohol 3.5 allowed the oxidation of the diene to the 1,4-butenediol 3.6. In this photochemical [4+2]cycloaddition, singlet oxygen was generated from oxygen and rose bengal and the intermediate peroxide was reduced with thiourea. The next order of business was introduction of the C19 fragment: the new diol group was protected as the PMP acetal 3.7 (PMP stands for p-methoxyphenyl) allowing the oxidation of the C4 alcohol to ketone 3.8 with the Dess-Martin periodinane. Diethylaluminum cyanide reacted in a conjugate addition to the enone group to nitrile 3.9. The enol was protected as the TBS ether 3.10 allowing for the reduction of the nitrile group first to the aldehyde with DIBAL and then on to the alcohol 3.11 with Lithium aluminium hydride. The alcohol group was replaced by bromine in an Appel reaction which caused an elimination reaction (loss of HBr) to cyclopropane 3.12. Treatment with hydrochloric acid formed ketone 3.13, reaction with Samarium(II) iodide gave ring-opening finally putting the C19 methyl group in place in 3.14 and deprotection (TBAF) and enol-ketone conversion gave hydroxyketone 3.15
Synthesis D ring
By protecting the diol group in triol 4.1 (scheme 4) as the phenyl boronic ester 4.2, the remaining alcohol group could be protected as the TBS ether 4.3. After deprotecting the diol group (hydrogen peroxide, sodium bicarbonate) again in 4.4 it was possible to oxidize the C19 alcohol to the ketone 4.5 with Dess-Martin periodinane. In a new round of protections the C7 alcohol was converted to the 2-methoxy-2-propyl (MOP) ether 4.6 with 2-propenylmethylether and PPTS and the C7 ketone was converted to its enolate 4.7 by reaction with KHMDS and N,N-bis(trifluoromethylsulfonyl)aniline. These preambles facilitated the introduction of the final missing C20 fragment as the Grignard reagent trimethylsilylmethylmagnesium bromide which coupled with the triflate in a tetrakis(triphenylphosphine)palladium(0) catalysed reaction to the silane 4.8. The trimethylsilyl group eliminated on addition of NCS to organochloride 4.9. Prior to ring-closing the D ring there was some unfinished business in ring C. A C10 alcohol was introduced by MoOPH oxidation to 4.10 but with the wrong stereochemistry. After acetylation to 4.11 and inversion of configuration with added base DBN this problem was remedied in compound 4.12. Next dihydroxylation with Osmium(VIII) oxide formed the diol 4.13 with the primary alcohol on addition of base DBU displacing the chlorine atom in a nucleophilic aliphatic substitution to oxetane 4.14.
Tail addition
The C1, C2 and C4 functional groups were put in place next. Starting from oxetane 5.1 (scheme 5), the MOM protecting group is removed in 5.2 (PPTS) and replaced by a TES group (TESCl) in 5.3. The acetal group was removed in 5.4 (hydrogenation, Pd(OH)2, H2) and replaced by a carbonate ester group in 5.5 (triphosgene, pyridine). The tertiary alcohol group was acetylated in 5.6 and in the final step the carbonate group was opened by reaction with phenyllithium to the hydroxyester 5.7.
Prior to tail addition, the TES protective group was removed in 5.8 (hydrogen fluoride/pyridine) and replaced by a TROC (trichloroethyl carbonate, TROC-Cl) group in 5.9. The C13 alcohol protective group was removed in 5.10 (TASF), enabling the tail addition of the Ojima lactam 5.11 (this step is common to all total synthetic efforts to date) to give 5.12 with lithium bis(trimethylsilyl)amide. The synthesis was completed with TROC removal (zinc, acetic acid) to taxol 5.13.
See also
Paclitaxel total synthesis
Danishefsky Taxol total synthesis
Holton Taxol total synthesis
Mukaiyama Taxol total synthesis
Nicolaou Taxol total synthesis
Wender Taxol total synthesis
External links
Kuwajima Taxol Synthesis @ SynArchive.com
References
Total synthesis
Taxanes | Kuwajima Taxol total synthesis | [
"Chemistry"
] | 2,100 | [
"Total synthesis",
"Chemical synthesis"
] |
7,851,422 | https://en.wikipedia.org/wiki/Single-strand%20conformation%20polymorphism | Single-strand conformation polymorphism (SSCP), or single-strand chain polymorphism, is defined as a conformational difference of single-stranded nucleotide sequences of identical length as induced by differences in the sequences under certain experimental conditions. This property allows sequences to be distinguished by means of gel electrophoresis, which separates fragments according to their different conformations.
Physical background
A single nucleotide change in a particular sequence, as seen in double-stranded DNA, cannot be distinguished by gel electrophoresis techniques; this can be attributed to the fact that the physical properties of the double strands are almost identical for both alleles. After denaturation, single-stranded DNA undergoes a characteristic 3-dimensional folding and may assume a unique conformational state based on its DNA sequence. The difference in shape between two single-stranded DNA strands with different sequences can cause them to migrate differently through an electrophoresis gel, even though the number of nucleotides is the same; SSCP exploits exactly this effect.
Applications in molecular biology
SSCP used to be a way to discover new DNA polymorphisms apart from DNA sequencing, but is now being supplanted by sequencing techniques on account of their efficiency and accuracy. These days, SSCP is most applicable as a diagnostic tool in molecular biology. It can be used in genotyping to detect homozygous individuals of different allelic states, as well as heterozygous individuals, each of which should demonstrate a distinct pattern in an electrophoresis experiment. SSCP is also widely used in virology to detect variations in different strains of a virus, the idea being that a particular sequence present in both strains will have undergone changes due to mutation, and that these changes will cause the two particles to assume different conformations and, thus, be differentiable on an SSCP gel.
References
Molecular biology
Gene tests | Single-strand conformation polymorphism | [
"Chemistry",
"Biology"
] | 394 | [
"Biochemistry",
"Genetics techniques",
"Gene tests",
"Molecular biology"
] |
7,853,706 | https://en.wikipedia.org/wiki/Mathieu%20transformation | The Mathieu transformations make up a subgroup of canonical transformations preserving the differential form
The transformation is named after the French mathematician Émile Léonard Mathieu.
Details
In order to have this invariance, there should exist at least one relation between the $q_i$ and the $Q_i$ only (without any $p_i$ or $P_i$ involved):
$$\Omega_1(q_1, q_2, \ldots, q_n, Q_1, Q_2, \ldots, Q_n) = 0, \quad \ldots, \quad \Omega_m(q_1, q_2, \ldots, q_n, Q_1, Q_2, \ldots, Q_n) = 0,$$
where $1 \leq m \leq n$. When $m = n$, a Mathieu transformation becomes a Lagrange point transformation.
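As a concrete one-degree-of-freedom illustration (an example constructed here, not taken from a cited source), consider the relation $\Omega(q, Q) = Q - q^2 = 0$ together with
$$Q = q^2, \qquad P = \frac{p}{2q}.$$
Then $P\, \delta Q = \frac{p}{2q}\,(2q\, \delta q) = p\, \delta q$, so the differential form is preserved exactly; since $m = n = 1$, this is also a Lagrange point transformation.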
See also
Canonical transformation
References
Mechanics
Hamiltonian mechanics | Mathieu transformation | [
"Physics",
"Mathematics",
"Engineering"
] | 80 | [
"Classical mechanics stubs",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Mechanical engineering",
"Dynamical systems"
] |
1,762,360 | https://en.wikipedia.org/wiki/Parabolic%20cylinder%20function | In mathematics, the parabolic cylinder functions are special functions defined as solutions to the differential equation
This equation is found when the technique of separation of variables is used on Laplace's equation when expressed in parabolic cylindrical coordinates.
The above equation may be brought into two distinct forms (A) and (B) by completing the square and rescaling , called H. F. Weber's equations:
and
If is a solution, then so are
If is a solution of equation (), then is a solution of (), and, by symmetry,
are also solutions of ().
Solutions
There are independent even and odd solutions of the form (A). These are given by (following the notation of Abramowitz and Stegun (1965)):
$$y_1(a; z) = e^{-z^2/4} \, {}_1F_1\!\left(\tfrac{a}{2} + \tfrac{1}{4}; \tfrac{1}{2}; \tfrac{z^2}{2}\right) \qquad \text{(even)}$$
and
$$y_2(a; z) = z\, e^{-z^2/4} \, {}_1F_1\!\left(\tfrac{a}{2} + \tfrac{3}{4}; \tfrac{3}{2}; \tfrac{z^2}{2}\right) \qquad \text{(odd)},$$
where ${}_1F_1(\alpha; \beta; z)$ is the confluent hypergeometric function.
Other pairs of independent solutions may be formed from linear combinations of the above solutions. One such pair, $U(a, z)$ and $V(a, z)$, is based upon their behavior at infinity:
$$U(a, z) \sim e^{-z^2/4}\, z^{-a - 1/2}, \qquad V(a, z) \sim \sqrt{\tfrac{2}{\pi}}\, e^{z^2/4}\, z^{a - 1/2} \qquad (z \to +\infty).$$
The function $U(a, z)$ approaches zero for large values of $z$, while $V(a, z)$ diverges for large values of positive real $z$.
For half-integer values of $a$, these (that is, $U$ and $V$) can be re-expressed in terms of Hermite polynomials; alternatively, they can also be expressed in terms of Bessel functions.
The functions $U$ and $V$ can also be related to the functions $D_\nu(z)$ (a notation dating back to Whittaker (1902)) that are themselves sometimes called parabolic cylinder functions:
$$U(a, z) = D_{-a - 1/2}(z).$$
The function $D_\nu(z)$ was introduced by Whittaker and Watson as the solution of equation (A), with $a = -\nu - \tfrac{1}{2}$, that remains bounded as $z \to +\infty$. It can be expressed in terms of confluent hypergeometric functions.
Power series for this function have been obtained by Abadir (1993).
Parabolic Cylinder U(a,z) function
Integral representation
Integrals along the real line include
$$U(a, z) = \frac{e^{-z^2/4}}{\Gamma\!\left(a + \tfrac{1}{2}\right)} \int_0^\infty t^{a - 1/2}\, e^{-t^2/2 - zt}\, dt, \qquad \Re a > -\tfrac{1}{2}.$$
The fact that these integrals are solutions to equation (A) can be easily checked by direct substitution.
Derivative
Differentiating the integrals with respect to $z$ gives two expressions for $U'(a, z)$,
Adding the two gives another expression for the derivative,
Recurrence relation
Subtracting the first two expressions for the derivative gives the recurrence relation,
Asymptotic expansion
Expanding $e^{-t^2/2}$ in the integrand of the integral representation gives the asymptotic expansion of $U(a, z)$ in inverse powers of $z$.
Power series
Expanding the integral representation in powers of $z$ gives
Values at z=0
From the power series one immediately gets
Parabolic cylinder Dν(z) function
Parabolic cylinder function $D_\nu(z)$ is the solution to the Weber differential equation,
$$u'' + \left(\nu + \tfrac{1}{2} - \tfrac{z^2}{4}\right) u = 0,$$
that is regular at $\Re z \to +\infty$ with the asymptotics
$$D_\nu(z) \sim e^{-z^2/4}\, z^\nu.$$
It is thus given as $D_\nu(z) = U\!\left(-\nu - \tfrac{1}{2}, z\right)$ and its properties then directly follow from those of the $U$-function.
Integral representation
Asymptotic expansion
If $\nu$ is a non-negative integer $n$, this series terminates and turns into a polynomial, namely the Hermite polynomial:
$$D_n(z) = 2^{-n/2}\, e^{-z^2/4}\, H_n\!\left(\tfrac{z}{\sqrt{2}}\right).$$
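The Hermite reduction just stated can be spot-checked numerically. The sketch below is my own illustration; it assumes that scipy's pbdv routine computes Whittaker's D function, which is how the library documents it:

```python
# Verify D_n(z) = 2**(-n/2) * exp(-z**2/4) * H_n(z/sqrt(2)) for small n.
import numpy as np
from scipy.special import pbdv

z = np.linspace(-3.0, 3.0, 7)
for n in range(4):
    D_n, _ = pbdv(n, z)                        # D_n(z) and its z-derivative
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                            # selects the physicists' H_n
    H_n = np.polynomial.hermite.hermval(z / np.sqrt(2.0), coeffs)
    rhs = 2.0 ** (-n / 2.0) * np.exp(-z**2 / 4.0) * H_n
    assert np.allclose(D_n, rhs), n
print("Hermite reduction of D_n verified for n = 0..3")
```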
Connection with quantum harmonic oscillator
Parabolic cylinder functions appear naturally in the Schrödinger equation for the one-dimensional quantum harmonic oscillator (a quantum particle in the oscillator potential),
$$-\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} + \frac{m \omega^2 x^2}{2}\, \psi = E \psi,$$
where
$\hbar$ is the reduced Planck constant,
$m$ is the mass of the particle,
$x$ is the coordinate of the particle,
$\omega$ is the frequency of the oscillator,
$E$ is the energy,
and $\psi(x)$ is the particle's wave-function. Indeed, introducing the new quantities
$$z = x \sqrt{\frac{2 m \omega}{\hbar}}, \qquad \nu = \frac{E}{\hbar \omega} - \frac{1}{2}$$
turns the above equation into Weber's equation for the function $u(z) = \psi\!\left(z \sqrt{\hbar / (2 m \omega)}\right)$:
$$u'' + \left(\nu + \tfrac{1}{2} - \tfrac{z^2}{4}\right) u = 0.$$
References
Special hypergeometric functions
Special functions | Parabolic cylinder function | [
"Mathematics"
] | 680 | [
"Special functions",
"Combinatorics"
] |
1,762,418 | https://en.wikipedia.org/wiki/Potential%20energy%20surface | A potential energy surface (PES) or energy landscape describes the energy of a system, especially a collection of atoms, in terms of certain parameters, normally the positions of the atoms. The surface might define the energy as a function of one or more coordinates; if there is only one coordinate, the surface is called a potential energy curve or energy profile. An example is the Morse/Long-range potential.
It is helpful to use the analogy of a landscape: for a system with two degrees of freedom (e.g. two bond lengths), the value of the energy (analogy: the height of the land) is a function of two bond lengths (analogy: the coordinates of the position on the ground).
The PES concept finds application in fields such as physics, chemistry and biochemistry, especially in the theoretical sub-branches of these subjects. It can be used to theoretically explore properties of structures composed of atoms, for example, finding the minimum energy shape of a molecule or computing the rates of a chemical reaction. It can be used to describe all possible conformations of a molecular entity, or the spatial positions of interacting molecules in a system, or parameters and their corresponding energy levels, typically Gibbs free energy. Geometrically, the energy landscape is the graph of the energy function across the configuration space of the system. The term is also used more generally in geometric perspectives to mathematical optimization, when the domain of the loss function is the parameter space of some system.
Mathematical definition and computation
The geometry of a set of atoms can be described by a vector, r, whose elements represent the atom positions. The vector r could be the set of the Cartesian coordinates of the atoms, or could also be a set of inter-atomic distances and angles.
Given r, the energy as a function of the positions, E(r), is the value of E(r) for all r of interest. Using the landscape analogy from the introduction, E gives the height on the "energy landscape" so that the concept of a potential energy surface arises.
To study a chemical reaction using the PES as a function of atomic positions, it is necessary to calculate the energy for every atomic arrangement of interest. Methods of calculating the energy of a particular atomic arrangement are well described in the computational chemistry article, and the emphasis here will be on finding approximations of E(r) that yield fine-grained energy-position information.
For very simple chemical systems or when simplifying approximations are made about inter-atomic interactions, it is sometimes possible to use an analytically derived expression for the energy as a function of the atomic positions. An example is the London-Eyring-Polanyi-Sato potential for the system H + H2 as a function of the three H-H distances.
For more complicated systems, calculation of the energy of a particular arrangement of atoms is often too computationally expensive for large scale representations of the surface to be feasible. For these systems a possible approach is to calculate only a reduced set of points on the PES and then use a computationally cheaper interpolation method, for example Shepard interpolation, to fill in the gaps.
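A minimal sketch of this fill-in-the-gaps strategy follows. It is my own illustration: a Morse curve stands in for expensive ab initio energies, a cubic spline stands in for Shepard interpolation, and the parameter values are illustrative rather than taken from any real system:

```python
# Evaluate a model potential on a coarse grid of "expensive" points,
# then interpolate to recover the PES cheaply on a fine grid.
import numpy as np
from scipy.interpolate import CubicSpline

D_e, a, r_e = 4.6, 1.9, 0.74   # depth (eV), width (1/A), equilibrium bond (A)

def morse(r):
    """Morse potential, a common 1-D model PES (potential energy curve)."""
    return D_e * (1.0 - np.exp(-a * (r - r_e)))**2 - D_e

r_coarse = np.linspace(0.4, 3.0, 12)       # the reduced set of PES points
pes = CubicSpline(r_coarse, morse(r_coarse))

r_fine = np.linspace(0.5, 2.8, 200)
err = np.max(np.abs(pes(r_fine) - morse(r_fine)))
print(f"max interpolation error on the fine grid: {err:.4f} eV")
```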
Application
A PES is a conceptual tool for aiding the analysis of molecular geometry and chemical reaction dynamics. Once the necessary points are evaluated on a PES, the points can be classified according to the first and second derivatives of the energy with respect to position, which respectively are the gradient and the curvature. Stationary points (or points with a zero gradient) have physical meaning: energy minima correspond to physically stable chemical species and saddle points correspond to transition states, the highest energy point on the reaction coordinate (which is the lowest energy pathway connecting a chemical reactant to a chemical product).
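The classification by gradient and curvature can be shown on a toy surface. This sketch is my own; the double-well E(x, y) = x^4 - 2x^2 + y^2 is a hypothetical PES chosen because it has two minima and one saddle point with known locations:

```python
# Classify stationary points of a toy 2-D PES from the gradient and the
# eigenvalues of the Hessian (the curvature).
import numpy as np

def grad(x, y):
    return np.array([4*x**3 - 4*x, 2*y])

def hessian(x, y):
    return np.array([[12*x**2 - 4, 0.0], [0.0, 2.0]])

for pt in [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]:
    g = grad(*pt)
    eig = np.linalg.eigvalsh(hessian(*pt))
    kind = ("minimum" if np.all(eig > 0)
            else "saddle (transition state)" if np.any(eig < 0) and np.any(eig > 0)
            else "maximum")
    print(pt, "|grad| =", np.linalg.norm(g).round(6), "->", kind)
```

The two outer points come out as minima (stable species) and the origin as a saddle point (transition state), matching the physical interpretation given above.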
The term is useful when examining protein folding; while a protein can theoretically exist in a nearly infinite number of conformations along its energy landscape, in reality proteins fold (or "relax") into secondary and tertiary structures that possess the lowest possible free energy. The key concept in the energy landscape approach to protein folding is the folding funnel hypothesis.
In catalysis, when designing new catalysts or refining existing ones, energy landscapes are considered to avoid low-energy or high-energy intermediates that could halt the reaction or demand excessive energy to reach the final products.
In models of glasses, the local minima of an energy landscape correspond to metastable low-temperature states of a thermodynamic system.
In machine learning, artificial neural networks may be analyzed using analogous approaches. For example, a neural network may be able to perfectly fit the training set, corresponding to a global minimum of zero loss, while overfitting the model ("learning the noise" or "memorizing the training set"). Understanding when this happens can be studied using the geometry of the corresponding energy landscape.
Attractive and repulsive surfaces
Potential energy surfaces for chemical reactions can be classified as attractive or repulsive by comparing the extensions of the bond lengths in the activated complex relative to those of the reactants and products. For a reaction of type A + B—C → A—B + C, the bond length extension for the newly formed A—B bond is defined as R*AB = RAB − R0AB, where RAB is the A—B bond length in the transition state and R0AB in the product molecule. Similarly for the bond which is broken in the reaction, R*BC = RBC − R0BC, where R0BC refers to the reactant molecule.
For exothermic reactions, a PES is classified as attractive (or early-downhill) if R*AB > R*BC, so that the transition state is reached while the reactants are approaching each other. After the transition state, the A—B bond length continues to decrease, so that much of the liberated reaction energy is converted into vibrational energy of the A—B bond. An example is the harpoon reaction K + Br2 → K—Br + Br, in which the initial long-range attraction of the reactants leads to an activated complex resembling K+•••Br−•••Br. The vibrationally excited populations of product molecules can be detected by infrared chemiluminescence.
In contrast the PES for the reaction H + Cl2 → HCl + Cl is repulsive (or late-downhill) because R*HCl < R*ClCl and the transition state is reached when the products are separating. For this reaction in which the atom A (here H) is lighter than B and C, the reaction energy is released primarily as translational kinetic energy of the products. For a reaction such as F + H2 → HF + H in which atom A is heavier than B and C, there is mixed energy release, both vibrational and translational, even though the PES is repulsive.
For endothermic reactions, the type of surface determines the type of energy which is most effective in bringing about reaction. Translational energy of the reactants is most effective at inducing reactions with an attractive surface, while vibrational excitation (to higher vibrational quantum number v) is more effective for reactions with a repulsive surface. As an example of the latter case, the reaction F + HCl(v=1) → Cl + HF is about five times faster than F + HCl(v=0) → Cl + HF for the same total energy of HCl.
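The attractive/repulsive rule for exothermic surfaces described above translates directly into a small helper. This is my own transcription of the rule; all bond lengths below are hypothetical numbers chosen only to exercise both branches:

```python
# Classify an exothermic A + B-C -> A-B + C surface from bond extensions
# at the transition state (R* = R_ts - R_eq, as defined in the text).
def classify_pes(r_ab_ts, r_ab_eq, r_bc_ts, r_bc_eq):
    ext_ab = r_ab_ts - r_ab_eq   # R*AB: forming bond vs product equilibrium
    ext_bc = r_bc_ts - r_bc_eq   # R*BC: breaking bond vs reactant equilibrium
    if ext_ab > ext_bc:
        return "attractive (early-downhill) surface"
    return "repulsive (late-downhill) surface"

print(classify_pes(r_ab_ts=2.4, r_ab_eq=1.4, r_bc_ts=1.5, r_bc_eq=1.4))
print(classify_pes(r_ab_ts=1.5, r_ab_eq=1.4, r_bc_ts=2.0, r_bc_eq=1.4))
```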
History
The concept of a potential energy surface for chemical reactions was first suggested by the French physicist René Marcelin in 1913. The first semi-empirical calculation of a potential energy surface was proposed for the H + H2 reaction by Henry Eyring and Michael Polanyi in 1931. Eyring used potential energy surfaces to calculate reaction rate constants in the transition state theory in 1935.
H + H2 two-dimensional PES
Potential energy surfaces are commonly shown as three-dimensional graphs, but they can also be represented by two-dimensional graphs, in which the advancement of the reaction is plotted by the use of isoenergetic lines.
The collinear system H + H2 is a simple reaction that allows a two-dimensional PES to be plotted in an easy and understandable way.
In this reaction, a hydrogen atom (H) reacts with a dihydrogen molecule (H2) by forming a new bond with one atom from the molecule, which in turn breaks the bond of the original molecule. This is symbolized as Ha + Hb–Hc → Ha–Hb + Hc. The progression of the reaction from reactants (Ha + Hb–Hc) to products (Ha–Hb + Hc), as well as the energy of the species that take part in the reaction, are well defined in the corresponding potential energy surface.
Energy profiles describe potential energy as a function of geometrical variables (PES in any dimension are independent of time and temperature).
We have different relevant elements in the 2-D PES:
The 2-D plot shows the minima, where we find the reactants and the products, and the saddle point or transition state.
The transition state is a maximum in the reaction coordinate and a minimum in the coordinate perpendicular to the reaction path.
The advance of time describes a trajectory in every reaction. Depending on the conditions of the reaction, the process will follow different paths to product formation in the plot between the two axes.
See also
Computational chemistry
Energy minimization (or geometry optimization)
Energy profile (chemistry)
Potential well
Reaction coordinate
References
Bibliographie
Quantum mechanics
Potential theory
Quantum chemistry | Potential energy surface | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,935 | [
"Functions and mappings",
"Quantum chemistry",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Potential theory",
"Theoretical chemistry",
"Mathematical relations",
" molecular",
"Atomic",
" and optical physics"
] |
1,762,873 | https://en.wikipedia.org/wiki/Indium%20antimonide | Indium antimonide (InSb) is a crystalline compound made from the elements indium (In) and antimony (Sb). It is a narrow-gap semiconductor material from the III-V group used in infrared detectors, including thermal imaging cameras, FLIR systems, infrared homing missile guidance systems, and in infrared astronomy. Indium antimonide detectors are sensitive to infrared wavelengths between 1 and 5 μm.
Indium antimonide was a very common detector in the old, single-detector mechanically scanned thermal imaging systems. Another application is as a terahertz radiation source as it is a strong photo-Dember emitter.
History
The intermetallic compound was first reported by Liu and Peretti in 1951, who gave its homogeneity range, structure type, and lattice constant. Polycrystalline ingots of InSb were prepared by Heinrich Welker in 1952, although they were not very pure by today's semiconductor standards. Welker was interested in systematically studying the semiconducting properties of the III-V compounds. He noted how InSb appeared to have a small direct band gap and a very high electron mobility. InSb crystals have been grown by slow cooling from liquid melt at least since 1954.
In 2018, a research team at Delft University of Technology claimed that indium antimonide nanowires showed potential for creating Majorana zero mode quasiparticles for use in quantum computing, and Microsoft opened a laboratory at the university to further this research; however, Delft later retracted the paper.
Physical properties
InSb has the appearance of dark-grey silvery metal pieces or powder with vitreous lustre. When subjected to temperatures over 500 °C, it melts and decomposes, liberating antimony and antimony oxide vapors.
The crystal structure is zincblende with a 0.648 nm lattice constant.
Electronic properties
InSb is a narrow direct band gap semiconductor with an energy band gap of 0.17 eV at 300 K and 0.23 eV at 80 K.
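The band gap values fix the long-wavelength detection cutoff through the photon-energy relation lambda = hc/E_g. The back-of-the-envelope check below is my own, not from the article; it shows that the 80 K gap quoted above reproduces the ~5 μm upper edge of InSb detector sensitivity:

```python
# Cutoff wavelength from the band gap: lambda_c = h*c / E_g.
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

for label, gap_eV in [("300 K", 0.17), ("80 K", 0.23)]:
    lam = h * c / (gap_eV * eV)
    print(f"E_g = {gap_eV} eV at {label} -> cutoff ~ {lam * 1e6:.1f} um")
```

At 80 K this gives roughly 5.4 μm, consistent with the 1–5 μm sensitivity range stated in the introduction.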
Undoped InSb possesses the largest ambient-temperature electron mobility (78,000 cm2/(V⋅s)), electron drift velocity, and ballistic length (up to 0.7 μm at 300 K) of any known semiconductor, except for carbon nanotubes.
Indium antimonide photodiode detectors are photovoltaic, generating electric current when subjected to infrared radiation. InSb's internal quantum efficiency is effectively 100%, but it depends on the detector thickness, particularly for photons near the band edge. Like all narrow-bandgap detector materials, InSb detectors require periodic recalibration, which increases the complexity of the imaging system. This added complexity is worthwhile where extreme sensitivity is required, e.g. in long-range military thermal imaging systems. InSb detectors also require cooling, as they have to operate at cryogenic temperatures (typically 80 K). Large arrays (up to 2048×2048 pixels) are available. HgCdTe and PtSi are materials with similar uses.
A layer of indium antimonide sandwiched between layers of aluminium indium antimonide can act as a quantum well. In such a heterostructure InSb/AlInSb has recently been shown to exhibit a robust quantum Hall effect. This approach is studied in order to construct very fast transistors. Bipolar transistors operating at frequencies up to 85 GHz were constructed from indium antimonide in the late 1990s; field-effect transistors operating at over 200 GHz have been reported more recently (Intel/QinetiQ). Some models suggest that terahertz frequencies are achievable with this material. Indium antimonide semiconductor devices are also capable of operating with voltages under 0.5 V, reducing their power requirements.
Growth methods
InSb can be grown by solidifying a melt from the liquid state (Czochralski process), or epitaxially by liquid phase epitaxy, hot wall epitaxy or molecular beam epitaxy. It can also be grown from organometallic compounds by MOVPE.
Device applications
Thermal image detectors using photodiodes or photoelectromagnetic detectors
Magnetic field sensors using magnetoresistance or the Hall effect
Fast transistors (in terms of dynamic switching). This is due to the high carrier mobility of InSb
In some of the detectors of the Infrared Array Camera on the Spitzer Space Telescope
References
Cited sources
External links
National Compound Semiconductor Roadmap at the Office of Naval Research
Material safety data sheet at University of Texas at Dallas
Antimonides
Indium compounds
III-V semiconductors
Infrared sensor materials
III-V compounds
Zincblende crystal structure | Indium antimonide | [
"Chemistry"
] | 967 | [
"Semiconductor materials",
"III-V compounds",
"Inorganic compounds",
"III-V semiconductors"
] |
1,762,966 | https://en.wikipedia.org/wiki/RNA%20polymerase%20I | RNA polymerase 1 (also known as Pol I) is, in higher eukaryotes, the polymerase that only transcribes ribosomal RNA (but not 5S rRNA, which is synthesized by RNA polymerase III), a type of RNA that accounts for over 50% of the total RNA synthesized in a cell.
Structure and function
Pol I is a 590 kDa enzyme that consists of 14 protein subunits (polypeptides), and its crystal structure in the yeast Saccharomyces cerevisiae was solved at 2.8Å resolution in 2013. Twelve of its subunits have identical or related counterparts in RNA polymerase II (Pol II) and RNA polymerase III (Pol III). The other two subunits are related to Pol II initiation factors and have structural homologues in Pol III.
Ribosomal DNA transcription is confined to the nucleolus, where about 400 copies of the 42.9-kb rDNA gene are present, arranged as tandem repeats in nucleolus organizer regions. Each copy contains a ~13.3 kb sequence encoding the 18S, the 5.8S, and the 28S RNA molecules, interlaced with two internal transcribed spacers, ITS1 and ITS2, and flanked upstream by a 5' external transcribed spacer and a downstream 3' external transcribed spacer. These components are transcribed together to form the 45S pre-rRNA. The 45S pre-rRNA is then post-transcriptionally cleaved by C/D box and H/ACA box snoRNAs, removing the two spacers and resulting in the three rRNAs by a complex series of steps. The 5S ribosomal RNA is transcribed by Pol III. Because of the simplicity of Pol I transcription, it is the fastest-acting polymerase and contributes up to 60% of cellular transcription levels in exponentially growing cells.
In Saccharomyces cerevisiae, the 5S rDNA has the unusual feature of lying inside the rDNA repeat. It is flanked by non-transcribed spacers NTS1 and NTS2, and is transcribed backwards by Pol III, separately from the rest of the rDNA.
Regulation of rRNA transcription
The rate of cell growth is directly dependent on the rate of protein synthesis, which is itself intricately linked to ribosome synthesis and rRNA transcription. Thus, intracellular signals must coordinate the synthesis of rRNA with that of other components of protein translation. Myc is known to bind to human ribosomal DNA in order to stimulate rRNA transcription by RNA polymerase I. Two specific mechanisms have been identified, ensuring proper control of rRNA synthesis and Pol I-mediated transcription.
Given the large numbers of rDNA genes (several hundreds) available for transcription, the first mechanism involves adjustments in the number of genes being transcribed at a specific time. In mammalian cells, the number of active rDNA genes varies between cell types and level of differentiation. In general, as a cell becomes more differentiated, it requires less growth and, therefore, will have a decrease in rRNA synthesis and a decrease in rDNA genes being transcribed. When rRNA synthesis is stimulated, SL1 (selectivity factor 1) will bind to the promoters of rDNA genes that were previously silent, and recruit a pre-initiation complex to which Pol I will bind and start transcription of rRNA.
Changes in rRNA transcription can also occur via changes in the rate of transcription. While the exact mechanism through which Pol I increases its rate of transcription is as yet unknown, evidence has shown that rRNA synthesis can increase or decrease without changes in the number of actively transcribed rDNA.
Transcription cycle
In the process of transcription (by any polymerase), there are three main stages:
Initiation: the construction of the RNA polymerase complex on the gene's promoter with the help of transcription factors
Elongation: the actual transcription of the majority of the gene into a corresponding RNA sequence
Termination: the cessation of RNA transcription and the disassembly of the RNA polymerase complex.
Initiation
Pol I requires no TATA box in the promoter, instead relying on an upstream control element (UCE) located between −200 and −107, and a core element located between −45 and +20.
The dimeric eukaryotic upstream binding factor (UBF) binds the UCE and the core element.
UBF recruits and binds a protein complex called SL1 in humans (or TIF-IB in mouse), composed of the TATA-binding protein (TBP) and three TBP-associated factors (TAFs).
The UBF dimer contains several high-mobility-group boxes (HMG-boxes) that introduce loops into the upstream region, allowing the UCE and the core elements to come into contact.
RRN3/TIF-IA is phosphorylated and binds Pol I.
Pol I binds to the UBF/SL1 complex via RRN3/TIF-IA, and transcription starts.
Note that this process is variable in different organisms.
Elongation
As Pol I escapes and clears the promoter, UBF and SL1 remain promoter-bound, ready to recruit another Pol I. Indeed, each active rDNA gene can be transcribed multiple times simultaneously, as opposed to Pol II-transcribed genes, which associate with only one complex at a time. While elongation proceeds unimpeded in vitro, it is unclear at this point whether this process happens in a cell, given the presence of nucleosomes. Pol I does seem to transcribe through nucleosomes, either bypassing or disrupting them, perhaps assisted by chromatin-remodeling activities. In addition, UBF might also act as positive feedback, enhancing Pol I elongation through an anti-repressor function. An additional factor, TIF-IC, can also stimulate the overall rate of transcription and suppress pausing of Pol I. As Pol I proceeds along the rDNA, supercoils form both ahead of and behind the complex. These are unwound by topoisomerase I or II at regular intervals, similar to what is seen in Pol II-mediated transcription.
Elongation is likely to be interrupted at sites of DNA damage. Transcription-coupled repair occurs similarly to Pol II-transcribed genes and requires the presence of several DNA repair proteins, such as TFIIH, CSB, and XPG.
Termination
In higher eukaryotes, TTF-I binds and bends the termination site at the 3' end of the transcribed region. This will force Pol I to pause. TTF-I, with the help of transcript-release factor PTRF and a T-rich region, will induce Pol I into terminating transcription and dissociating from the DNA and the new transcript. Evidence suggests that termination might be rate-limiting in cases of high rRNA production. TTF-I and PTRF will then indirectly stimulate the reinitiation of transcription by Pol I at the same rDNA gene.
In organisms such as budding yeast the process seems to be much more complicated and is still not completely elucidated.
Recombination hotspot
Recombination hotspots are DNA sequences that increase local recombination. The HOT1 sequence in yeast is one of the most well studied mitotic recombination hotspots. The HOT1 sequence includes an RNA polymerase I transcription promoter. In a yeast mutant strain defective in RNA polymerase I the HOT1 activity in promoting recombination is abolished. The level of RNA polymerase I transcription activity that is dependent on the promoter in the HOT1 sequence appears to determine the level of nearby mitotic recombination.
See also
RNA polymerase
RNA polymerase II
RNA polymerase III
Selective factor 1
References
EC 2.7.7
Gene expression
Proteins | RNA polymerase I | [
"Chemistry",
"Biology"
] | 1,612 | [
"Biomolecules by chemical classification",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
1,763,396 | https://en.wikipedia.org/wiki/Tensor%20density | In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field), except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.
Motivation
In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients such as
$$\mathbf{v} = v^1 \mathbf{e}_1 + v^2 \mathbf{e}_2 + v^3 \mathbf{e}_3,$$
where $\mathbf{v}$ is a vector in 3-dimensional Euclidean space and the $\mathbf{e}_i$ are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector remains fixed in physical space. More generally, if an algebraic object represents a geometric object, but is expressed in terms of a particular basis, then it is necessary to, when the basis is changed, also change the representation. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlying geometric object which hasn't changed under the coordinate transformation a "tensor", a convention this article strictly avoids). In general there are representations which transform in arbitrary ways depending on how the geometric invariant is reconstructed from the representation. In certain special cases it is convenient to use representations which transform almost like tensors, but with an additional, nonlinear factor in the transformation. A prototypical example is a matrix representing the cross product (area of spanned parallelogram) on $\mathbb{R}^2$. The representation is given in the standard basis by
$$\operatorname{area}(\mathbf{v}, \mathbf{w}) = \mathbf{v}^\mathsf{T} M \mathbf{w} = \begin{pmatrix} v^1 & v^2 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} w^1 \\ w^2 \end{pmatrix}.$$
If we now try to express this same expression in a basis other than the standard basis, then the components of the vectors will change, say according to $\mathbf{v} = A \mathbf{v}'$, where $A$ is some 2 by 2 matrix of real numbers. Given that the area of the spanned parallelogram is a geometric invariant, it cannot have changed under the change of basis, and so the new representation of this matrix must be:
$$M' = A^\mathsf{T} M A,$$
which, when expanded, is just the original expression but multiplied by the determinant of $A$: $M' = \det(A)\, M$. In fact this representation could be thought of as a two index tensor transformation, but instead, it is computationally easier to think of the tensor transformation rule as multiplication by $\det(A)$ rather than as 2 matrix multiplications (in fact in higher dimensions, the natural extension of this is $n$ matrix multiplications, which for large $n$ is completely infeasible). Objects which transform in this way are called tensor densities because they arise naturally when considering problems regarding areas and volumes, and so are frequently used in integration.
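The identity just used, A^T M A = det(A) M for the 2x2 area form, can be checked numerically. The sketch below is my own illustration of that single fact:

```python
# For M = [[0, 1], [-1, 0]], any 2x2 change of basis A satisfies
# A^T M A == det(A) * M, so the representation transforms as a
# weight-one density rather than needing two matrix multiplications.
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[0.0, 1.0], [-1.0, 0.0]])

for _ in range(5):
    A = rng.normal(size=(2, 2))
    assert np.allclose(A.T @ M @ A, np.linalg.det(A) * M)
print("A^T M A == det(A) * M holds for random 2x2 changes of basis")
```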
Definition
Some authors classify tensor densities into the two types called (authentic) tensor densities and pseudotensor densities in this article. Other authors classify them differently, into the types called even tensor densities and odd tensor densities. When a tensor density weight is an integer there is an equivalence between these approaches that depends upon whether the integer is even or odd.
Note that these classifications elucidate the different ways that tensor densities may transform somewhat pathologically under orientation-reversing coordinate transformations. Regardless of their classifications into these types, there is only one way that tensor densities transform under orientation-preserving coordinate transformations.
In this article we have chosen the convention that assigns a weight of +2 to $g$, the determinant of the metric tensor expressed with covariant indices. With this choice, classical densities, like charge density, will be represented by tensor densities of weight +1. Some authors use a sign convention for weights that is the negation of that presented here.
In contrast to the meaning used in this article, in general relativity "pseudotensor" sometimes means an object that does not transform like a tensor or relative tensor of any weight.
Tensor and pseudotensor densities
For example, a mixed rank-two (authentic) tensor density of weight $W$ transforms as:
$$\bar{\mathfrak{T}}^\alpha_\beta = \left[\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right]^W \frac{\partial \bar{x}^\alpha}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar{x}^\beta}\, \mathfrak{T}^\mu_\nu \qquad \text{((authentic) tensor density of (integer) weight } W\text{)}$$
where $\mathfrak{T}^\mu_\nu$ is the rank-two tensor density in the $x$ coordinate system, $\bar{\mathfrak{T}}^\alpha_\beta$ is the transformed tensor density in the $\bar{x}$ coordinate system, and we use the Jacobian determinant $\det(\partial x^\gamma / \partial \bar{x}^\delta)$. Because the determinant can be negative, which it is for an orientation-reversing coordinate transformation, this formula is applicable only when $W$ is an integer. (However, see even and odd tensor densities below.)
We say that a tensor density is a pseudotensor density when there is an additional sign flip under an orientation-reversing coordinate transformation. A mixed rank-two pseudotensor density of weight $W$ transforms as
$$\bar{\mathfrak{T}}^\alpha_\beta = \operatorname{sgn}\!\left(\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right) \left[\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right]^W \frac{\partial \bar{x}^\alpha}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar{x}^\beta}\, \mathfrak{T}^\mu_\nu \qquad \text{(pseudotensor density of (integer) weight } W\text{)}$$
where $\operatorname{sgn}(\cdot)$ is a function that returns +1 when its argument is positive or −1 when its argument is negative.
Even and odd tensor densities
The transformations for even and odd tensor densities have the benefit of being well defined even when $W$ is not an integer. Thus one can speak of, say, an odd tensor density of weight +2 or an even tensor density of weight −1/2.
When $W$ is an even integer the above formula for an (authentic) tensor density can be rewritten as
$$\bar{\mathfrak{T}}^\alpha_\beta = \left|\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right|^W \frac{\partial \bar{x}^\alpha}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar{x}^\beta}\, \mathfrak{T}^\mu_\nu \qquad \text{(even tensor density of weight } W\text{)}$$
Similarly, when $W$ is an odd integer the formula for an (authentic) tensor density can be rewritten as
$$\bar{\mathfrak{T}}^\alpha_\beta = \operatorname{sgn}\!\left(\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right) \left|\det\!\left(\frac{\partial x^\gamma}{\partial \bar{x}^\delta}\right)\right|^W \frac{\partial \bar{x}^\alpha}{\partial x^\mu} \frac{\partial x^\nu}{\partial \bar{x}^\beta}\, \mathfrak{T}^\mu_\nu \qquad \text{(odd tensor density of weight } W\text{)}$$
Weights of zero and one
A tensor density of any type that has weight zero is also called an absolute tensor. An (even) authentic tensor density of weight zero is also called an ordinary tensor.
If a weight is not specified but the word "relative" or "density" is used in a context where a specific weight is needed, it is usually assumed that the weight is +1.
Algebraic properties
A linear combination (also known as a weighted sum) of tensor densities of the same type and weight is again a tensor density of that type and weight.
A product of two tensor densities of any types, with weights $W_1$ and $W_2$, is a tensor density of weight $W_1 + W_2$.
A product of authentic tensor densities and pseudotensor densities will be an authentic tensor density when an even number of the factors are pseudotensor densities; it will be a pseudotensor density when an odd number of the factors are pseudotensor densities. Similarly, a product of even tensor densities and odd tensor densities will be an even tensor density when an even number of the factors are odd tensor densities; it will be an odd tensor density when an odd number of the factors are odd tensor densities.
The contraction of indices on a tensor density with weight $W$ again yields a tensor density of weight $W$.
Using (2) and (3) one sees that raising and lowering indices using the metric tensor (weight 0) leaves the weight unchanged.
Matrix inversion and matrix determinant of tensor densities
If $\mathfrak{T}_{\alpha\beta}$ is a non-singular matrix and a rank-two tensor density of weight $W$ with covariant indices then its matrix inverse will be a rank-two tensor density of weight $-W$ with contravariant indices. Similar statements apply when the two indices are contravariant or are mixed covariant and contravariant.
If $\mathfrak{T}_{\alpha\beta}$ is a rank-two tensor density of weight $W$ with covariant indices then the matrix determinant $\det \mathfrak{T}_{\alpha\beta}$ will have weight $NW + 2$, where $N$ is the number of space-time dimensions. If $\mathfrak{T}^{\alpha\beta}$ is a rank-two tensor density of weight $W$ with contravariant indices then the matrix determinant $\det \mathfrak{T}^{\alpha\beta}$ will have weight $NW - 2$. The matrix determinant $\det \mathfrak{T}^\alpha_\beta$ will have weight $NW$.
General relativity
Relation of Jacobian determinant and metric tensor
Any non-singular ordinary tensor $T_{\mu\nu}$ transforms as
$$T'_{\mu\nu} = \frac{\partial x^\alpha}{\partial x'^\mu} \frac{\partial x^\beta}{\partial x'^\nu}\, T_{\alpha\beta},$$
where the right-hand side can be viewed as the product of three matrices. Taking the determinant of both sides of the equation (using that the determinant of a matrix product is the product of the determinants), dividing both sides by $\det(T_{\alpha\beta})$, and taking their square root gives
$$\det\!\left(\frac{\partial x^\alpha}{\partial x'^\mu}\right) = \sqrt{\frac{\det(T'_{\mu\nu})}{\det(T_{\alpha\beta})}}.$$
When the tensor $T$ is the metric tensor $g_{\mu\nu}$, and $x'$ is a locally inertial coordinate system where $g'_{\mu\nu} = \eta_{\mu\nu} = $ diag(−1,+1,+1,+1), the Minkowski metric, then $\det(g'_{\mu\nu}) = -1$ and so
$$\det\!\left(\frac{\partial x^\alpha}{\partial x'^\mu}\right) = \frac{1}{\sqrt{-g}},$$
where $g$ is the determinant of the metric tensor $g_{\mu\nu}$.
Use of metric tensor to manipulate tensor densities
Consequently, an even tensor density, $\mathfrak{T}$, of weight $W$, can be written in the form
$$\mathfrak{T} = (-g)^{W/2}\, T,$$
where $T$ is an ordinary tensor. In a locally inertial coordinate system, where $g_{\mu\nu} = \eta_{\mu\nu}$, it will be the case that $\mathfrak{T}$ and $T$ will be represented with the same numbers.
When using the metric connection (Levi-Civita connection), the covariant derivative of an even tensor density is defined as
$$\nabla_\mu \mathfrak{T} = (-g)^{W/2}\, \nabla_\mu\!\left[(-g)^{-W/2}\, \mathfrak{T}\right].$$
For an arbitrary connection, the covariant derivative is defined by adding an extra term, namely
$$-W\, \Gamma^\delta{}_{\delta\mu}\, \mathfrak{T},$$
to the expression that would be appropriate for the covariant derivative of an ordinary tensor.
Equivalently, the product rule is obeyed,
$$\nabla_\mu (\mathfrak{A}\, \mathfrak{B}) = (\nabla_\mu \mathfrak{A})\, \mathfrak{B} + \mathfrak{A}\, (\nabla_\mu \mathfrak{B}),$$
where, for the metric connection, the covariant derivative of any function of $g_{\mu\nu}$ is always zero,
$$\nabla_\mu g_{\alpha\beta} = 0, \qquad \nabla_\mu\!\left[(-g)^{W/2}\right] = 0.$$
Examples
The expression $\sqrt{-g}$ is a scalar density. By the convention of this article it has a weight of +1.
The density of electric current $\mathfrak{J}^\mu$ (for example, the amount of electric charge crossing a 3-volume element divided by that element — do not use the metric in this calculation) is a contravariant vector density of weight +1. It is often written as $\mathfrak{J}^\mu = \sqrt{-g}\, J^\mu$, where $J^\mu$ is an absolute tensor; see also the Levi-Civita symbol below.
The density of Lorentz force $\mathfrak{f}_\mu$ (that is, the linear momentum transferred from the electromagnetic field to matter within a 4-volume element divided by that element — do not use the metric in this calculation) is a covariant vector density of weight +1.
In N-dimensional space-time, the Levi-Civita symbol may be regarded as either a rank-N covariant (odd) authentic tensor density of weight −1 ($\varepsilon_{\alpha_1 \cdots \alpha_N}$) or a rank-N contravariant (odd) authentic tensor density of weight +1 ($\varepsilon^{\alpha_1 \cdots \alpha_N}$). Notice that the Levi-Civita symbol (so regarded) does obey the usual convention for raising or lowering of indices with the metric tensor. That is, it is true that
$$g^{\mu_1 \nu_1} \cdots g^{\mu_N \nu_N}\, \varepsilon_{\nu_1 \cdots \nu_N} = \frac{\varepsilon^{\mu_1 \cdots \mu_N}}{g},$$
but in general relativity, where $g = \det(g_{\mu\nu})$ is always negative, this is never equal to $\varepsilon^{\mu_1 \cdots \mu_N}$.
The determinant of the metric tensor,
$$g = \det(g_{\mu\nu}) = \frac{1}{4!}\, \varepsilon^{\mu_1 \cdots \mu_4}\, \varepsilon^{\nu_1 \cdots \nu_4}\, g_{\mu_1\nu_1}\, g_{\mu_2\nu_2}\, g_{\mu_3\nu_3}\, g_{\mu_4\nu_4},$$
is an (even) authentic scalar density of weight +2, being the contraction of the product of 2 (odd) authentic tensor densities of weight +1 and four (even) authentic tensor densities of weight 0.
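The density behavior of the Levi-Civita symbol used in the examples above can be spot-checked numerically. This sketch is my own illustration; it verifies, in three dimensions, that contracting every index of the symbol with a transformation matrix simply multiplies it by the determinant, which is exactly why keeping its components fixed amounts to assigning it a density weight:

```python
# Check A_ai A_bj A_ck eps_ijk == det(A) * eps_abc in 3 dimensions.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0  # odd permutations

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
transformed = np.einsum("ai,bj,ck,ijk->abc", A, A, A, eps)
assert np.allclose(transformed, np.linalg.det(A) * eps)
print("A_ai A_bj A_ck eps_ijk == det(A) * eps_abc")
```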
See also
Notes
References
Differential geometry
Density | Tensor density | [
"Physics",
"Engineering"
] | 2,266 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
1,763,424 | https://en.wikipedia.org/wiki/Turbidity%20current | A turbidity current is most typically an underwater current of usually rapidly moving, sediment-laden water moving down a slope; although current research (2018) indicates that water-saturated sediment may be the primary actor in the process. Turbidity currents can also occur in other fluids besides water.
Researchers from the Monterey Bay Aquarium Research Institute found that a layer of water-saturated sediment moved rapidly over the seafloor and mobilized the upper few meters of the preexisting seafloor. Plumes of sediment-laden water were observed during turbidity current events but they believe that these were secondary to the pulse of the seafloor sediment moving during the events. The belief of the researchers is that the water flow is the tail-end of the process that starts at the seafloor.
In the most typical case of oceanic turbidity currents, sediment laden waters situated over sloping ground will flow down-hill because they have a higher density than the adjacent waters. The driving force behind a turbidity current is gravity acting on the high density of the sediments temporarily suspended within a fluid. These semi-suspended solids make the average density of the sediment bearing water greater than that of the surrounding, undisturbed water.
As such currents flow, they often have a "snow-balling-effect", as they stir up the ground over which they flow, and gather even more sedimentary particles in their current. Their passage leaves the ground over which they flow scoured and eroded. Once an oceanic turbidity current reaches the calmer waters of the flatter area of the abyssal plain (main oceanic floor), the particles borne by the current settle out of the water column. The sedimentary deposit of a turbidity current is called a turbidite.
Seafloor turbidity currents are often the result of sediment-laden river outflows, and can sometimes be initiated by earthquakes, slumping and other soil disturbances. They are characterized by a well-defined advance-front, also known as the current's head, and are followed by the current's main body. In terms of the more often observed and more familiar above sea-level phenomenon, they somewhat resemble flash floods.
Turbidity currents can sometimes result from submarine seismic instability, which is common with steep underwater slopes, and especially with submarine trench slopes of convergent plate margins, continental slopes and submarine canyons of passive margins. With an increasing continental shelf slope, the current accelerates; as the velocity of the flow increases, turbulence increases, and the current draws up more sediment. The increase in sediment also adds to the density of the current, and thus increases its velocity even further.
Definition
Turbidity currents are traditionally defined as those sediment gravity flows in which sediment is suspended by fluid turbulence.
However, the term "turbidity current" was adopted to describe a natural phenomenon whose exact nature is often unclear. The turbulence within a turbidity current is not always the support mechanism that keeps the sediment in suspension; however it is probable that turbulence is the primary or sole grain support mechanism in dilute currents (<3%). Definitions are further complicated by an incomplete understanding of the turbulence structure within turbidity currents, and the confusion between the terms turbulent (i.e. disturbed by eddies) and turbid (i.e. opaque with sediment). Kneller & Buckee, 2000 define a suspension current as 'flow induced by the action of gravity upon a turbid mixture of fluid and (suspended) sediment, by virtue of the density difference between the mixture and the ambient fluid'. A turbidity current is a suspension current in which the interstitial fluid is a liquid (generally water); a pyroclastic current is one in which the interstitial fluid is gas.
Triggers
Hyperpycnal plume
When the concentration of suspended sediment at the mouth of a river is so large that the density of river water is greater than the density of sea water, a particular kind of turbidity current can form, called a hyperpycnal plume. The average concentration of suspended sediment for most river water that enters the ocean is much lower than the sediment concentration needed for entry as a hyperpycnal plume. Some rivers, however, can have a continuously high sediment load that creates a continuous hyperpycnal plume, such as the Haile River (China), which has an average suspended concentration of 40.5 kg/m3. The sediment concentration needed to produce a hyperpycnal plume in marine water is 35 to 45 kg/m3, depending on the water properties within the coastal zone. Most rivers produce hyperpycnal flows only during exceptional events, such as storms, floods, glacier outbursts, dam breaks, and lahar flows. In fresh water environments, such as lakes, the suspended sediment concentration needed to produce a hyperpycnal plume is quite low (1 kg/m3).
Sedimentation in reservoirs
The transport and deposition of the sediments in narrow alpine reservoirs is often caused by turbidity currents. They follow the thalweg of the lake to the deepest area near the dam, where the sediments can affect the operation of the bottom outlet and the intake structures. Controlling this sedimentation within the reservoir can be achieved by using solid and permeable obstacles with the right design.
Earthquake triggering
Turbidity currents are often triggered by tectonic disturbances of the sea floor. The displacement of continental crust in the form of fluidization and physical shaking both contribute to their formation. Earthquakes have been linked to turbidity current deposition in many settings, particularly where physiography favors preservation of the deposits and limits the other sources of turbidity current deposition. Since the famous case of breakage of submarine cables by a turbidity current following the 1929 Grand Banks earthquake, earthquake triggered turbidites have been investigated and verified along the Cascadia subduction Zone, the Northern San Andreas Fault, a number of European, Chilean and North American lakes, Japanese lacustrine and offshore regions and a variety of other settings.
Canyon-flushing
When large turbidity currents flow into canyons they may become self-sustaining, and may entrain sediment that has previously been introduced into the canyon by littoral drift, storms or smaller turbidity currents. Canyon-flushing associated with surge-type currents initiated by slope failures may produce currents whose final volume may be several times that of the portion of the slope that has failed (e.g. Grand Banks).
Slumping
Sediment that has piled up at the top of the continental slope, particularly at the heads of submarine canyons, can create turbidity currents due to overloading and the consequent slumping and sliding.
Convective sedimentation beneath river plumes
A buoyant sediment-laden river plume can induce a secondary turbidity current on the ocean floor by the process of convective sedimentation. Sediment in the initially buoyant hypopycnal flow accumulates at the base of the surface flow, so that the dense lower boundary becomes unstable. The resulting convective sedimentation leads to a rapid vertical transfer of material to the sloping lake or ocean bed, potentially forming a secondary turbidity current. The vertical speed of the convective plumes can be much greater than the Stokes settling velocity of an individual particle of sediment. Most observations of this process have been made in the laboratory, but possible field evidence of a secondary turbidity current comes from Howe Sound, British Columbia, where a turbidity current was periodically observed on the delta of the Squamish River. As the vast majority of sediment-laden rivers are less dense than the ocean, rivers cannot readily form plunging hyperpycnal flows. Hence convective sedimentation is an important possible initiation mechanism for turbidity currents.
Effect on ocean floor
Large and fast-moving turbidity currents can carve gulleys and ravines into the ocean floor of continental margins and cause damage to artificial structures such as telecommunication cables on the seafloor. Understanding where turbidity currents flow on the ocean floor can help to decrease the amount of damage to telecommunication cables by avoiding these areas or reinforcing the cables in vulnerable areas.
When turbidity currents interact with regular ocean currents, such as contour currents, they can change their direction. This ultimately shifts submarine canyons and sediment deposition locations. One example of this is located in the western part of the Gulf of Cadiz, where the ocean current leaving the Mediterranean Sea (also known as the Mediterranean outflow water) pushes turbidity currents westward. This has changed the shape of submarine valleys and canyons in the region to also curve in that direction.
Deposits
When the energy of a turbidity current lowers, its ability to keep suspended sediment decreases, thus sediment deposition occurs. When the material comes to rest, it is the sand and other coarse material which settles first followed by mud and eventually the very fine particulate matter. It is this sequence of deposition that creates the so called Bouma sequences that characterize turbidite deposits.
Because turbidity currents occur underwater and happen suddenly, they are rarely seen as they happen in nature, thus turbidites can be used to determine turbidity current characteristics. Some examples: grain size can give indication of current velocity, grain lithology and the use of foraminifera for determining origins, grain distribution shows flow dynamics over time and sediment thickness indicates sediment load and longevity.
Turbidites are commonly used in the understanding of past turbidity currents, for example, the Peru-Chile Trench off Southern Central Chile (36°S–39°S) contains numerous turbidite layers that were cored and analysed. From these turbidites the predicted history of turbidity currents in this area was determined, increasing the overall understanding of these currents.
Antidune deposits
Some of the largest antidunes on Earth are formed by turbidity currents. One observed sediment-wave field is located on the lower continental slope off Guyana, South America. This sediment-wave field covers an area of at least 29 000 km2 at a water depth of 4400–4825 meters. These antidunes have wavelengths of 110–2600 m and wave heights of 1–15 m. Turbidity currents responsible for wave generation are interpreted as originating from slope failures on the adjacent Venezuela, Guyana and Suriname continental margins. Simple numerical modelling has enabled flow characteristics of the turbidity currents across the sediment waves to be estimated: internal Froude number = 0.7–1.1, flow thickness = 24–645 m, and flow velocity = 31–82 cm·s−1. Generally, on lower gradients beyond minor breaks of slope, flow thickness increases and flow velocity decreases, leading to an increase in wavelength and a decrease in height.
Reversing buoyancy
The behaviour of turbidity currents with buoyant fluid (such as currents with warm, fresh or brackish interstitial water entering the sea) has been investigated, and the front speed is found to decrease more rapidly than that of currents with the same density as the ambient fluid. These turbidity currents ultimately come to a halt as sedimentation results in a reversal of buoyancy, and the current lifts off, the point of lift-off remaining constant for a constant discharge. The lofted fluid carries fine sediment with it, forming a plume that rises to a level of neutral buoyancy (if in a stratified environment) or to the water surface, and spreads out. Sediment falling from the plume produces a widespread fall-out deposit, termed hemiturbidite. Experimental turbidity currents and field observations suggest that the shape of the lobe deposit formed by a lofting plume is narrower than for a similar non-lofting plume.
Prediction
Prediction of erosion by turbidity currents, and of the distribution of turbidite deposits, such as their extent, thickness and grain size distribution, requires an understanding of the mechanisms of sediment transport and deposition, which in turn depends on the fluid dynamics of the currents.
The extreme complexity of most turbidite systems and beds has promoted the development of quantitative models of turbidity current behaviour inferred solely from their deposits. Small-scale laboratory experiments therefore offer one of the best means of studying their dynamics. Mathematical models can also provide significant insights into current dynamics. In the long term, numerical techniques are most likely the best hope of understanding and predicting three-dimensional turbidity current processes and deposits. In most cases, there are more variables than governing equations, and the models rely upon simplifying assumptions in order to achieve a result. The accuracy of the individual models thus depends upon the validity and choice of the assumptions made. Experimental results provide a means of constraining some of these variables as well as providing a test for such models. Physical data from field observations, or, more practically, from experiments, are still required in order to test the simplifying assumptions necessary in mathematical models. Most of what is known about large natural turbidity currents (i.e. those significant in terms of sediment transfer to the deep sea) is inferred from indirect sources, such as submarine cable breaks and heights of deposits above submarine valley floors. During the 2003 Tokachi-oki earthquake, however, a large turbidity current was observed by a cabled observatory, providing the kind of direct observation that is rarely achieved.
Oil exploration
Oil and gas companies are also interested in turbidity currents because the currents deposit organic matter that over geologic time gets buried, compressed and transformed into hydrocarbons. The use of numerical modelling and flumes are commonly used to help understand these questions. Much of the modelling is used to reproduce the physical processes which govern turbidity current behaviour and deposits.
Modeling approaches
Shallow-water models
The so-called depth-averaged or shallow-water models are initially introduced for compositional gravity currents
and then later extended to turbidity currents. The typical assumptions used along with the shallow-water models are: hydrostatic pressure field, clear fluid is not entrained (or detrained), and particle concentration does not depend on the vertical location. Considering the ease of implementation, these models can typically predict flow characteristic such as front location or front speed in simplified geometries, e.g. rectangular channels, fairly accurately.
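A minimal sketch in the depth-averaged spirit is the classic "box model" of a gravity current. This is my own illustration, not one of the cited models, and every parameter value is made up: the current is idealized as a rectangle of fixed cross-sectional area whose front advances at a Froude-number-limited speed, which predicts a front position growing like t^(2/3):

```python
# Box model of a constant-volume gravity/turbidity current:
#   volume conservation: h * x = V  (per unit width)
#   front condition:     dx/dt = Fr * sqrt(g' * h)
import numpy as np

Fr, g_prime, V = 1.19, 0.1, 100.0  # front Froude number, reduced gravity (m/s^2), area (m^2)
x, t, dt = 10.0, 0.0, 0.01
ts, xs = [], []
while t < 500.0:
    h = V / x                       # current depth from volume conservation
    x += Fr * np.sqrt(g_prime * h) * dt
    t += dt
    ts.append(t)
    xs.append(x)

# Late-time power-law exponent of x(t); theory predicts 2/3.
ts, xs = np.array(ts), np.array(xs)
slope = np.polyfit(np.log(ts[len(ts)//2:]), np.log(xs[len(xs)//2:]), 1)[0]
print(f"fitted exponent ~ {slope:.3f} (theory: 2/3)")
```

Despite its crudeness, this kind of model captures the front-speed behavior that full shallow-water solvers predict in simple rectangular geometries.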
Depth-resolved models
With the increase in computational power, depth-resolved models have become a powerful tool to study gravity and turbidity currents. These models, in general, are mainly focused on the solution of the Navier-Stokes equations for the fluid phase.
With dilute suspension of particles, a Eulerian approach proved to be accurate to describe the evolution of particles in terms of a continuum particle concentration field. Under these models, no such assumptions as shallow-water models are needed and, therefore, accurate calculations and measurements are performed to study these currents. Measurements such as, pressure field, energy budgets, vertical particle concentration and accurate deposit heights are a few to mention. Both Direct numerical simulation (DNS) and Turbulence modeling are used to model these currents.
Notable examples of turbidity currents
Within minutes after the 1929 Grand Banks earthquake occurred off the coast of Newfoundland, transatlantic telephone cables began breaking sequentially, farther and farther downslope, away from the epicenter. Twelve cables were snapped in a total of 28 places. Exact times and locations were recorded for each break. Investigators suggested that an estimated 60 mile per hour (100 km/h) submarine landslide or turbidity current of water-saturated sediments swept 400 miles (600 km) down the continental slope from the earthquake's epicenter, snapping the cables as it passed. Subsequent research on this event has shown that continental slope sediment failures mostly occurred below 650 meter water depth. The slumping that occurred in shallow waters (5–25 meters) passed down slope into turbidity currents that evolved ignitively. The turbidity currents had sustained flow for many hours due to the delayed retrogressive failure and transformation of debris flows into turbidity currents through hydraulic jumps.
The Cascadia subduction zone, off the northwestern coast of North America, has a record of earthquake triggered turbidites that is well-correlated to other evidence of earthquakes recorded in coastal bays and lakes during the Holocene. Forty–one Holocene turbidity currents have been correlated along all or part of the approximately 1000 km long plate boundary stretching from northern California to mid-Vancouver island. The correlations are based on radiocarbon ages and subsurface stratigraphic methods. The inferred recurrence interval of Cascadia great earthquakes is approximately 500 years along the northern margin, and approximately 240 years along the southern margin.
Taiwan is a hot spot for submarine turbidity currents as there are large amounts of sediment suspended in its rivers, and it is seismically active, leading to large accumulations of seafloor sediment and frequent earthquake triggering. During the 2006 Pingtung earthquake off SW Taiwan, eleven submarine cables across the Kaoping canyon and Manila Trench were broken in sequence from 1500 to 4000 m deep, as a consequence of the associated turbidity currents. From the timing of each cable break, the velocity of the current was determined to have a positive relationship with bathymetric slope: current velocities were highest on the steepest slopes and lowest on the shallowest slopes.
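The cable-break method used in both the Grand Banks and Pingtung studies reduces to simple arithmetic: divide the along-path distance between successive breaks by the interval between their recorded times. The sketch below is my own illustration with entirely hypothetical distances and times, not the actual Pingtung data:

```python
# Infer segment-averaged front speeds from sequential cable breaks.
break_positions_km = [0.0, 45.0, 110.0, 190.0]  # hypothetical along-canyon distances
break_times_hr = [0.0, 0.9, 2.4, 4.6]           # hypothetical recorded break times

for i in range(1, len(break_positions_km)):
    dx = break_positions_km[i] - break_positions_km[i - 1]
    dt = break_times_hr[i] - break_times_hr[i - 1]
    speed_kmh = dx / dt
    print(f"segment {i}: {speed_kmh:.0f} km/h ({speed_kmh / 3.6:.1f} m/s)")
```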
One of the earliest observations of a turbidity current was by François-Alphonse Forel. In the late 1800s he made detailed observations of the plunging of the Rhône river into Lake Geneva at Port Valais. These papers were possibly the earliest identification of a turbidity current, and he discussed how the submarine channel formed from the delta. In this freshwater lake, it is primarily the cold water that leads to plunging of the inflow; the sediment load by itself is generally not high enough to overcome the summer thermal stratification in Lake Geneva.
The longest turbidity current ever recorded occurred in January 2020 and flowed through the Congo Canyon over the course of two days, damaging two submarine communications cables. The current was a result of sediment deposited by the 2019–2020 Congo River floods.
See also
Bouma sequence
Gravity current
High-density turbidity currents (Lowe sequence)
Submarine landslide
Sediment gravity flows
References
External links
Turbidity current in motion
Start of a turbidity current.
Depth-resolved simulation of turbidity currents.
Sedimentology
Fluid dynamics
Ocean currents | Turbidity current | [
"Chemistry",
"Engineering"
] | 3,844 | [
"Piping",
"Ocean currents",
"Chemical engineering",
"Fluid dynamics"
] |
1,763,516 | https://en.wikipedia.org/wiki/Embedded%20Java | Embedded Java refers to versions of the Java program language that are designed for embedded systems. Since 2010 embedded Java implementations have come closer to standard Java, and are now virtually identical to the Java Standard Edition. Since Java 9 customization of the Java Runtime through modularization removes the need for specialized Java profiles targeting embedded devices.
History
Although in the past some differences existed between embedded Java and traditional PC-based Java, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (such as consumer, industrial, white-goods, healthcare, metering, and smart-market devices).
CORE embedded Java API for a unified Embedded Java ecosystem
In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. These providers share the same eight packages of pre-written classes. The packages (java.lang, java.io, java.util, ...) form the CORE Embedded Java API, meaning that embedded programmers can rely on them in any implementation in order to make worthwhile use of the Java language.
Old distinctions between SE embedded API and ME embedded API from ORACLE
Java SE embedded is based on the desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux on ARM, x86, or Power ISA architectures, and on Windows XP and Windows XP Embedded.
Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It is designed to be used on systems with at least 8 MB of RAM, and can work on Linux ARM, PowerPC, or MIPS architecture.
See also
Excelsior JET Embedded
Sun SPOT Project
Real-Time Specification for Java
Azul Systems
JamaicaVM
STM32: STM32J part numbers, a microcontroller line that embeds a Java engine
References
External links
Java SE for Embedded : Java SE for Embedded technology from Oracle Corporation
Java SE for Embedded Development Made Easy : Webcast covering development, troubleshooting, deployment
Java platform
Embedded systems
Java virtual machine | Embedded Java | [
"Technology",
"Engineering"
] | 481 | [
"Computing platforms",
"Computer engineering",
"Embedded systems",
"Computer systems",
"Computer science",
"Java platform"
] |
1,763,624 | https://en.wikipedia.org/wiki/Pneumococcal%20polysaccharide%20vaccine | Pneumococcal polysaccharide vaccine, sold under the brand name Pneumovax 23, is a pneumococcal vaccine that is used for the prevention of pneumococcal disease caused by the 23 serotypes of Streptococcus pneumoniae contained in the vaccine as capsular polysaccharides. It is given by intramuscular or subcutaneous injection.
The polysaccharide antigens induce type-specific antibodies that enhance opsonization, phagocytosis, and killing of Streptococcus pneumoniae (pneumococcal) bacteria by phagocytic immune cells. The pneumococcal polysaccharide vaccine is widely used in high-risk adults.
First used in 1945, the tetravalent vaccine was not widely distributed, since its deployment coincided with the discovery of penicillin. In the 1970s, Robert Austrian championed the manufacture and distribution of a 14-valent pneumococcal polysaccharide vaccine. This evolved in 1983 to a 23-valent formulation (PPSV23). A significant breakthrough affecting the burden of pneumococcal disease was the licensing of a protein conjugate heptavalent vaccine (PCV7) beginning in February 2000.
Medical uses
In the United States, pneumococcal vaccine, polyvalent is indicated for active immunization for the prevention of pneumococcal disease caused by the 23 serotypes contained in the vaccine (1, 2, 3, 4, 5, 6B, 7F, 8, 9N, 9V, 10A, 11A, 12F, 14, 15B, 17F, 18C, 19F, 19A, 20, 22F, 23F, and 33F). It is approved for use in people 50 years of age or older and in people two years of age or older who are at increased risk for pneumococcal disease. The World Health Organization (WHO) recommendations are similar. The WHO does not recommend use of pneumococcal polysaccharide vaccine in routine childhood immunization programs. The recommendations in the UK are similar, but include people with occupational hazards.
Pneumococcal vaccine may be beneficial to control exacerbations of chronic obstructive pulmonary disease (COPD).
The pneumococcal polysaccharide vaccine is important for those with HIV/AIDS. In Canadian patients infected with HIV, the vaccine has been reported to decrease the incidence of invasive pneumococcal disease from 768 per 100,000 person-years to 244 per 100,000 person-years. Because of the low level of evidence for benefit, 2008 WHO guidelines do not recommend routine immunization with PPV-23 for HIV patients, and suggest preventing pneumococcal disease indirectly with trimethoprim–sulfamethoxazole chemoprophylaxis and antiretrovirals. The U.S. Centers for Disease Control and Prevention (CDC), by contrast, recommends immunization for all patients infected with HIV.
Adverse events
The most common adverse reactions (reported in more than 10% of subjects vaccinated with pneumococcal polysaccharide vaccine in clinical trials) were: pain, soreness or tenderness at the site of injection (60.0%), injection-site swelling or temporary thickening or hardening of the skin (20.3%), headache (17.6%), injection-site redness (16.4%), weakness and fatigue (13.2%), and muscle pain (11.9%).
Vaccination schedule
Adults and children over two years of age
The 23-valent vaccine (for example, Pneumovax 23) is effective against 23 different pneumococcal capsular types (serotypes 1, 2, 3, 4, 5, 6B, 7F, 8, 9N, 9V, 10A, 11A, 12F, 14, 15B, 17F, 18C, 19A, 19F, 20, 22F, 23F, and 33F), and so covers 90 percent of the types found in pneumococcal bloodstream infections.
Young children
Children under the age of two years fail to mount an adequate response to the 23-valent adult vaccine; instead, a 13-valent pneumococcal conjugate vaccine (PCV13; for example, Prevnar 13) is used. PCV13 replaced PCV7, adding six new serotypes to the vaccine. While this covers only thirteen strains out of more than ninety strains, these thirteen strains caused 80–90 percent of cases of severe pneumococcal disease in the U.S. before introduction of the vaccine, and it is considered to be nearly 100 percent effective against these strains.
Special risk groups
Children at special risk (e.g., sickle cell disease and those without a functioning spleen) require additional protection using PCV13, with the more extensive PPSV23 given after the second year of life or two months after the PCV13 dose.
References
Further reading
External links
Pneumococcal Disease World Health Organization (WHO)
Vaccines | Pneumococcal polysaccharide vaccine | [
"Biology"
] | 1,129 | [
"Vaccination",
"Vaccines"
] |
1,764,022 | https://en.wikipedia.org/wiki/Polar%20set | In functional and convex analysis, and related disciplines of mathematics, the polar set $A^{\circ}$ is a special convex set associated to any subset $A$ of a vector space $X$, lying in the dual space $X^{\prime}$.
The bipolar of a subset $A$ is the polar of $A^{\circ}$, but lies in $X$ (not in $X^{\prime\prime}$).
Definitions
There are at least three competing definitions of the polar of a set, originating in projective geometry and convex analysis.
In each case, the definition describes a duality between certain subsets of a pairing $(X, Y)$ of vector spaces over the real or complex numbers ($X$ and $Y$ are often topological vector spaces (TVSs)).
If $X$ is a vector space over the field $\mathbb{K}$ then, unless indicated otherwise, $Y$ will usually, but not always, be some vector space of linear functionals on $X$ and the dual pairing $\langle \cdot, \cdot \rangle : X \times Y \to \mathbb{K}$ will be the bilinear evaluation map defined by $\langle x, f \rangle := f(x)$.
If $X$ is a topological vector space then the space $Y$ will usually, but not always, be the continuous dual space of $X$, in which case the dual pairing will again be the evaluation map.
Denote the closed ball of radius $r \geq 0$ centered at the origin in the underlying scalar field $\mathbb{K}$ by $B_r := \{ s \in \mathbb{K} : |s| \leq r \}$.
Functional analytic definition
Absolute polar
Suppose that $(X, Y)$ is a pairing.
The polar or absolute polar of a subset $A$ of $X$ is the set $A^{\circ} := \left\{ y \in Y : \sup |\langle A, y \rangle| \leq 1 \right\}$,
where $\langle A, y \rangle := \{ \langle x, y \rangle : x \in A \}$ denotes the image of the set $A$ under the map $\langle \cdot, y \rangle : X \to \mathbb{K}$ defined by $x \mapsto \langle x, y \rangle$, so that $\sup |\langle A, y \rangle| = \sup_{x \in A} |\langle x, y \rangle|$.
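As a concrete illustration (an added example, not part of the original article), take $X = Y = \mathbb{R}^2$ paired by the dot product; the polar of the closed unit square is then the closed unit diamond:

```latex
% Polar of A = [-1,1]^2 in R^2 under the dot-product pairing.
A^{\circ} = \left\{ y \in \mathbb{R}^{2} :
            \sup_{x \in A} |x_{1} y_{1} + x_{2} y_{2}| \leq 1 \right\}
          = \left\{ y \in \mathbb{R}^{2} : |y_{1}| + |y_{2}| \leq 1 \right\}
```

since the supremum over the square is attained at a corner $x = (\pm 1, \pm 1)$, where $|x \cdot y| = |y_1| + |y_2|$. In other words, the polar of the $\ell^{\infty}$ unit ball is the $\ell^{1}$ unit ball.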
If $\operatorname{cobal}(A)$ denotes the convex balanced hull of $A$, which by definition is the smallest convex and balanced subset of $X$ that contains $A$, then $A^{\circ} = [\operatorname{cobal}(A)]^{\circ}$.
This is an affine shift of the geometric definition;
it has the useful characterization that the functional-analytic polar of the unit ball (in a normed space $X$) is precisely the unit ball (in $X^{\prime}$, under the dual norm).
The prepolar or absolute prepolar of a subset $B$ of $Y$ is the set ${}^{\circ}B := \left\{ x \in X : \sup_{y \in B} |\langle x, y \rangle| \leq 1 \right\}$.
Very often, the prepolar of a subset $B$ of $Y$ is also called the polar or absolute polar of $B$ and denoted by $B^{\circ}$;
in practice, this reuse of notation and of the word "polar" rarely causes any issues (such as ambiguity) and many authors do not even use the word "prepolar".
The bipolar of a subset $A$ of $X$, often denoted by $A^{\circ\circ}$, is the set ${}^{\circ}\left(A^{\circ}\right)$;
that is, $A^{\circ\circ} := {}^{\circ}\left(A^{\circ}\right) = \left\{ x \in X : \sup_{y \in A^{\circ}} |\langle x, y \rangle| \leq 1 \right\}$.
Real polar
The real polar of a subset $A$ of $X$ is the set $A^{r} := \left\{ y \in Y : \sup_{x \in A} \operatorname{Re} \langle x, y \rangle \leq 1 \right\}$,
and the real prepolar of a subset $B$ of $Y$ is the set ${}^{r}B := \left\{ x \in X : \sup_{y \in B} \operatorname{Re} \langle x, y \rangle \leq 1 \right\}$.
As with the absolute prepolar, the real prepolar is usually called the real polar and is also denoted by $B^{r}$.
It is important to note that some authors (e.g. [Schaefer 1999]) define "polar" to mean "real polar" (rather than "absolute polar", as is done in this article) and use the notation $A^{\circ}$ for it (rather than the notation $A^{r}$ that is used in this article and in [Narici 2011]).
The real bipolar of a subset $A$ of $X$, sometimes denoted by $A^{rr}$, is the set ${}^{r}\left(A^{r}\right)$;
it is equal to the $\sigma(X, Y)$-closure of the convex hull of $A \cup \{0\}$.
For a subset $A$ of $X$, $A^{r}$ is convex, $\sigma(Y, X)$-closed, and contains $A^{\circ}$.
In general, it is possible that $A^{\circ} \neq A^{r}$, but equality will hold if $A$ is balanced.
Furthermore, $A^{\circ} = \left(\operatorname{bal}(A)\right)^{r}$, where $\operatorname{bal}(A)$ denotes the balanced hull of $A$.
Competing definitions
The definition of the "polar" of a set is not universally agreed upon.
Although this article defined "polar" to mean "absolute polar", some authors define "polar" to mean "real polar" and other authors use still other definitions.
No matter how an author defines "polar", the notation $A^{\circ}$ almost always represents that author's choice of the definition (so the meaning of the notation may vary from source to source).
In particular, the polar of $A$ is sometimes defined as $A^{|r|} := \left\{ y \in Y : \sup_{x \in A} |\operatorname{Re} \langle x, y \rangle| \leq 1 \right\}$,
where the notation $A^{|r|}$ is not standard notation.
We now briefly discuss how these various definitions relate to one another and when they are equivalent.
It is always the case that $A^{\circ} \subseteq A^{|r|} \subseteq A^{r}$,
and if $\langle \cdot, \cdot \rangle$ is real-valued (or equivalently, if $X$ and $Y$ are vector spaces over $\mathbb{R}$) then $A^{\circ} = A^{|r|}$.
If $A$ is a symmetric set (that is, $-A = A$) then $A^{|r|} = A^{r}$, where if in addition $\langle \cdot, \cdot \rangle$ is real-valued then $A^{\circ} = A^{|r|} = A^{r}$.
If $X$ and $Y$ are vector spaces over $\mathbb{C}$ (so that $\langle \cdot, \cdot \rangle$ is complex-valued) and if $iA = A$ (where note that this implies $-A = A$ and $-iA = A$), then $A^{\circ} \subseteq A^{|r|} = A^{r}$,
where if in addition $e^{it} A = A$ for all real $t$ then $A^{\circ} = A^{r}$.
Thus for all of these definitions of the polar set of $A$ to agree, it suffices that $sA = A$ for all scalars $s$ of unit length (where this is equivalent to $sA \subseteq A$ for all unit length scalars $s$).
In particular, all definitions of the polar of $A$ agree when $A$ is a balanced set (which is often, but not always, the case) so that often, which of these competing definitions is used is immaterial.
However, these differences in the definitions of the "polar" of a set do sometimes introduce subtle or important technical differences when $A$ is not necessarily balanced.
Specialization for the canonical duality
Algebraic dual space
If $X$ is any vector space then let $X^{\#}$ denote the algebraic dual space of $X$, which is the set of all linear functionals on $X$. The vector space $X^{\#}$ is always a closed subset of the space $\mathbb{K}^{X}$ of all $\mathbb{K}$-valued functions on $X$ under the topology of pointwise convergence, so when $X^{\#}$ is endowed with the subspace topology, then $X^{\#}$ becomes a Hausdorff complete locally convex topological vector space (TVS).
For any subset $A \subseteq X$, let $A^{\#} := \left\{ f \in X^{\#} : \sup_{x \in A} |f(x)| \leq 1 \right\}$ denote the polar of $A$ taken with respect to the canonical pairing of $X$ with $X^{\#}$.
If $A \subseteq B$ are any subsets then $B^{\#} \subseteq A^{\#}$ and $A^{\#} = [\operatorname{cobal}(A)]^{\#}$, where $\operatorname{cobal}(A)$ denotes the convex balanced hull of $A$.
For any finite-dimensional vector subspace $Z$ of $X$, let $\tau_{Z}$ denote the Euclidean topology on $Z$, which is the unique topology that makes $Z$ into a Hausdorff topological vector space (TVS).
If $\operatorname{cl}_{\mathrm{fd}} A$ denotes the union of all closures $\operatorname{cl}_{\left(Z, \tau_{Z}\right)}(A \cap Z)$ as $Z$ varies over all finite-dimensional vector subspaces of $X$, then $A^{\#} = \left[\operatorname{cl}_{\mathrm{fd}} A\right]^{\#}$ (see this footnote for an explanation).
If $A$ is an absorbing subset of $X$ then, by the Banach–Alaoglu theorem, $A^{\#}$ is a weak-* compact subset of $X^{\#}$.
If $A$ is any non-empty subset of a vector space $X$ and if $Y$ is any vector space of linear functionals on $X$ (that is, a vector subspace of the algebraic dual space of $X$) then the real-valued map
$p_{A} : Y \to \mathbb{R}$ defined by $p_{A}(f) := \sup_{x \in A} |f(x)|$
is a seminorm on $Y$. If $A = \varnothing$ then, by definition of the supremum, $\sup_{x \in \varnothing} |f(x)| = -\infty$, so that the map defined above would not be real-valued and consequently, it would not be a seminorm.
Continuous dual space
Suppose that $X$ is a topological vector space (TVS) with continuous dual space $X^{\prime}$.
The important special case where $Y := X^{\prime}$ and the brackets represent the canonical map $\langle x, f \rangle := f(x)$
is now considered.
The triple $\left(X, X^{\prime}, \langle \cdot, \cdot \rangle\right)$ is called the canonical pairing associated with $X$.
The polar of a subset $A \subseteq X$ with respect to this canonical pairing is $A^{\circ} := \left\{ f \in X^{\prime} : \sup_{x \in A} |f(x)| \leq 1 \right\}$.
For any subset $A \subseteq X$, $A^{\circ} = \left[\operatorname{cl}_{X} A\right]^{\circ}$, where $\operatorname{cl}_{X} A$ denotes the closure of $A$ in $X$.
The Banach–Alaoglu theorem states that if $U$ is a neighborhood of the origin in $X$ then its polar $U^{\circ}$ is a compact subset of the continuous dual space $X^{\prime}$ when $X^{\prime}$ is endowed with the weak-* topology (also known as the topology of pointwise convergence).
If $A$ satisfies $sA = A$ for all scalars $s$ of unit length then one may replace the absolute value signs by $\operatorname{Re}$ (the real part operator) so that $A^{\circ} = \left\{ f \in X^{\prime} : \sup_{x \in A} \operatorname{Re} f(x) \leq 1 \right\}$.
The prepolar of a subset $B$ of $X^{\prime}$ is ${}^{\circ}B := \left\{ x \in X : \sup_{f \in B} |f(x)| \leq 1 \right\}$.
If $B$ satisfies $sB = B$ for all scalars $s$ of unit length then one may replace the absolute value signs with $\operatorname{Re}$ so that ${}^{\circ}B = \left\{ x \in X : \sup_{f \in B} \operatorname{Re} f(x) \leq 1 \right\}$,
where $\operatorname{Re} f(x)$ denotes the real part of $f(x)$.
The bipolar theorem characterizes the bipolar of a subset of a topological vector space.
If $X$ is a normed space and $A$ is the open or closed unit ball in $X$ (or even any subset of the closed unit ball that contains the open unit ball) then $A^{\circ}$ is the closed unit ball in the continuous dual space $X^{\prime}$ when $X^{\prime}$ is endowed with its canonical dual norm.
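A small numerical illustration of this unit-ball duality in $\mathbb{R}^2$ (an added sketch; the sampling approach and tolerance are arbitrary choices, not part of the theorem):

```python
import numpy as np

# In R^2 with the sup-norm, the polar of the unit ball is the unit ball of
# the dual (taxicab) norm. The supremum over the ball is approximated by
# sampling; the true supremum is attained at the corners (+-1, +-1).
rng = np.random.default_rng(0)
ball = rng.uniform(-1.0, 1.0, size=(5000, 2))   # points of the sup-norm ball

def in_polar(y):
    return np.max(np.abs(ball @ y)) <= 1.0 + 1e-9

for y in (np.array([0.5, 0.5]), np.array([0.8, 0.4])):
    print(y, "in polar:", in_polar(y), "| l1 norm <= 1:", abs(y).sum() <= 1.0)
```

Both membership tests agree: the first point has taxicab norm 1 and lies in the polar, while the second has taxicab norm 1.2 and does not.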
Geometric definition for cones
The polar cone of a convex cone $A \subseteq X$ is the set $A^{\circ} := \left\{ y \in Y : \langle x, y \rangle \leq 0 \text{ for all } x \in A \right\}$.
This definition gives a duality on points and hyperplanes, writing the latter as the intersection of two oppositely-oriented half-spaces.
The polar hyperplane of a point $x \in X$ is the locus $\{ y : \langle x, y \rangle = 1 \}$;
the dual relationship for a hyperplane yields that hyperplane's polar point.
Some authors (confusingly) call a dual cone the polar cone; we will not follow that convention in this article.
Properties
Unless stated otherwise, $(X, Y)$ will be a pairing.
The topology $\sigma(Y, X)$ is the weak-* topology on $Y$, while $\sigma(X, Y)$ is the weak topology on $X$.
For any set $A$, $A^{r}$ denotes the real polar of $A$ and $A^{\circ}$ denotes the absolute polar of $A$.
The term "polar" will refer to the absolute polar.
The (absolute) polar of a set is convex and balanced.
The real polar of a subset $A$ of $X$ is convex but not necessarily balanced; $A^{r}$ will be balanced if $A$ is balanced.
If $sA = A$ for all scalars $s$ of unit length then $A^{\circ} = A^{r}$.
$A^{\circ}$ is closed in $Y$ under the weak-* topology $\sigma(Y, X)$ on $Y$.
A subset $A$ of $X$ is weakly bounded (i.e. $\sigma(X, Y)$-bounded) if and only if $A^{\circ}$ is absorbing in $Y$.
For a dual pair $\left(X, X^{\prime}\right)$, where $X$ is a TVS and $X^{\prime}$ is its continuous dual space, if $B \subseteq X$ is bounded then $B^{\circ}$ is absorbing in $X^{\prime}$. If $X$ is locally convex and $B^{\circ}$ is absorbing in $X^{\prime}$ then $B$ is bounded in $X$. Moreover, a subset $B$ of $X$ is weakly bounded if and only if $B^{\circ}$ is absorbing in $X^{\prime}$.
The bipolar $A^{\circ\circ}$ of a set $A$ is the $\sigma(X, Y)$-closed convex hull of $A \cup \{0\}$, that is, the smallest $\sigma(X, Y)$-closed and convex set containing both $A$ and $0$.
Similarly, the bidual cone of a cone $C$ is the $\sigma(X, Y)$-closed conic hull of $C$.
If $\mathcal{B}$ is a base at the origin for a TVS $X$, then $X^{\prime} = \bigcup_{B \in \mathcal{B}} B^{\circ}$.
If $X$ is a locally convex TVS then the polars (taken with respect to $X^{\prime}$) of any 0-neighborhood base form a fundamental family of equicontinuous subsets of $X^{\prime}$ (i.e. given any equicontinuous subset $H$ of $X^{\prime}$, there exists a neighborhood $U$ of the origin in $X$ such that $H \subseteq U^{\circ}$).
Conversely, if $X$ is a locally convex TVS then the polars (taken with respect to $X$) of any fundamental family of equicontinuous subsets of $X^{\prime}$ form a neighborhood base of the origin in $X$.
Let $X$ be a TVS with a topology $\tau$. Then $\tau$ is a locally convex TVS topology if and only if $\tau$ is the topology of uniform convergence on the equicontinuous subsets of $X^{\prime}$.
The last two results explain why equicontinuous subsets of the continuous dual space play such a prominent role in the modern theory of functional analysis: because equicontinuous subsets encapsulate all information about the locally convex space $X$'s original topology.
Set relations
and
For all scalars $s \neq 0$ and for all real $t \neq 0$, $(sA)^{\circ} = \frac{1}{|s|} A^{\circ}$ and $(tA)^{\circ} = \frac{1}{|t|} A^{\circ}$.
However, for the real polar we have $(tA)^{r} = \frac{1}{t} A^{r}$ for all real $t \neq 0$.
For any finite collection of sets $A_{1}, \ldots, A_{n}$, $\left(A_{1} \cup \cdots \cup A_{n}\right)^{\circ} = A_{1}^{\circ} \cap \cdots \cap A_{n}^{\circ}$.
If $A \subseteq B$ then $B^{\circ} \subseteq A^{\circ}$ and $B^{r} \subseteq A^{r}$.
An immediate corollary is that ; equality necessarily holds when is finite and may fail to hold if is infinite.
and
If $C$ is a cone in $X$ then $C^{\circ} = \left\{ y \in Y : \langle x, y \rangle = 0 \text{ for all } x \in C \right\}$.
If $\mathcal{A}$ is a family of $\sigma(X, Y)$-closed subsets of $X$ containing $0 \in X$, then the real polar of $\bigcap_{A \in \mathcal{A}} A$ is the closed convex hull of $\bigcup_{A \in \mathcal{A}} A^{r}$.
If then
For a closed convex cone $C$ in a real vector space $X$, the polar cone is the polar of $C$; that is, $C^{\circ} = \left\{ y \in Y : \langle x, y \rangle \leq 0 \text{ for all } x \in C \right\}$.
See also
Notes
References
Bibliography
Functional analysis
Linear functionals
Topological vector spaces | Polar set | [
"Mathematics"
] | 2,048 | [
"Functions and mappings",
"Functional analysis",
"Vector spaces",
"Mathematical objects",
"Space (mathematics)",
"Topological vector spaces",
"Mathematical relations"
] |
1,765,158 | https://en.wikipedia.org/wiki/Elevation | The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum).
The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface.
Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance.
Aviation
In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with terms such as the altitude or height.
Maps and GIS
GIS or geographic information system is a computer system that allows for visualizing, manipulating, capturing, and storage of data with associated attributes. GIS offers better understanding of patterns and relationships of the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography.
A topographical map is the main type of map used to depict elevation, often through contour lines.
In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS.
The USGS (United States Geological Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high-quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high-quality LiDAR data over the conterminous United States, Hawaii, and the U.S. territories. There are three bare-earth DEM layers in 3DEP which are nationally seamless at the resolution of 1/3, 1, and 2 arcseconds.
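As an illustration of how a raster DEM is used, the sketch below computes a slope grid from a tiny made-up elevation array (the elevation values and 30 m cell size are placeholders, not 3DEP data):

```python
import numpy as np

# Slope, in degrees, from a digital elevation model stored as a grid of
# elevations (metres). np.gradient returns the partial derivatives of
# elevation along the row (y) and column (x) axes.
dem = np.array([[120.0, 121.0, 123.0],
                [119.0, 120.5, 122.0],
                [118.0, 119.5, 121.0]])
cell_size = 30.0   # metres between grid points

dz_dy, dz_dx = np.gradient(dem, cell_size)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(slope_deg.round(2))
```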
See also
Amsterdam Ordnance Datum, Normaal Amsterdams Peil (NAP), Dutch vertical datum
Elevation profile
Geodesy
GTOPO30 a digital elevation model for the world
Hypsometric tints
Lapse rate, or the adiabatic lapse rate
List of highest mountains on Earth
List of the highest major summits of North America
Normalhöhennull, German vertical datum, literally: standard elevation zero, (NHN)
North American Vertical Datum of 1988, (NAVD 88)
Sea Level Datum of 1929, a superseded United States vertical datum, (NGVD 29)
Orthometric height
Topographic isolation
Topographic prominence
Vertical pressure variation
References
External links
U.S. National Geodetic Survey website
Geodetic Glossary @ NGS
NGVD 29 to NAVD 88 online elevation converter @ NGS
United States Geological Survey website
Geographical Survey Institute
Downloadable ETOPO2 Raw Data Database (2 minute grid)
Downloadable ETOPO5 Raw Data Database (5 minute grid)
Find the elevation of any place
Path’s Elevation Profile using Google Earth
Geodesy
Topography
Physical geography
Surveying
Geographical terminology in mountaineering
Vertical position | Elevation | [
"Physics",
"Mathematics",
"Engineering"
] | 688 | [
"Vertical position",
"Physical quantities",
"Distance",
"Applied mathematics",
"Surveying",
"Civil engineering",
"Geodesy"
] |
1,765,418 | https://en.wikipedia.org/wiki/Ecosystem%20ecology | Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals.
Ecosystem ecology examines physical and biological structures and examines how these ecosystem characteristics interact with each other. Ultimately, this helps us understand how to maintain high quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions.
Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water.
The example of a forest and the stream that drains it demonstrates several important aspects of ecosystems:
Ecosystem boundaries are often nebulous and may fluctuate in time
Organisms within ecosystems are dependent on ecosystem level biological and physical processes
Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity
These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear; even though decisions in one ecosystem will affect the other. We need better understanding of the interactions and interdependencies of these ecosystems and the processes that maintain them before we can begin to address these questions.
Ecosystem ecology is an inherently interdisciplinary field of study. An individual ecosystem is composed of populations of organisms, interacting within communities, and contributing to the cycling of nutrients and the flow of energy. The ecosystem is the principal unit of study in ecosystem ecology.
Population, community, and physiological ecology provide many of the underlying biological mechanisms influencing ecosystems and the processes they maintain. Flowing of energy and cycling of matter at the ecosystem level are often examined in ecosystem ecology, but, as a whole, this science is defined more by subject matter than by scale. Ecosystem ecology approaches organisms and abiotic pools of energy and nutrients as an integrated system which distinguishes it from associated sciences such as biogeochemistry.
Biogeochemistry and hydrology focus on several fundamental ecosystem processes such as biologically mediated chemical cycling of nutrients and physical-biological cycling of water. Ecosystem ecology forms the mechanistic basis for regional or global processes encompassed by landscape-to-regional hydrology, global biogeochemistry, and earth system science.
History
Ecosystem ecology is philosophically and historically rooted in terrestrial ecology. The ecosystem concept has evolved rapidly during the last 100 years, with important ideas developed by Frederic Clements, a botanist who argued for specific definitions of ecosystems and that physiological processes were responsible for their development and persistence. Although most of Clements's ecosystem definitions have been greatly revised, initially by Henry Gleason and Arthur Tansley, and later by contemporary ecologists, the idea that physiological processes are fundamental to ecosystem structure and function remains central to ecology.
Later work by Eugene Odum and Howard T. Odum quantified flows of energy and matter at the ecosystem level, thus documenting the general ideas proposed by Clements and his contemporary Charles Elton.
In this model, energy flows through the whole system were dependent on biotic and abiotic interactions of each individual component (species, inorganic pools of nutrients, etc.). Later work demonstrated that these interactions and flows applied to nutrient cycles, changed over the course of succession, and held powerful controls over ecosystem productivity. Transfers of energy and nutrients are innate to ecological systems regardless of whether they are aquatic or terrestrial. Thus, ecosystem ecology has emerged from important biological studies of plants, animals, terrestrial, aquatic, and marine ecosystems.
Ecosystem services
Ecosystem services are ecologically mediated functional processes essential to sustaining healthy human societies. Water provision and filtration, production of biomass in forestry, agriculture, and fisheries, and removal of greenhouse gases such as carbon dioxide (CO2) from the atmosphere are examples of ecosystem services essential to public health and economic opportunity. Nutrient cycling is a process fundamental to agricultural and forest production.
However, like most ecosystem processes, nutrient cycling is not an ecosystem characteristic which can be “dialed” to the most desirable level. Maximizing production in degraded systems is an overly simplistic solution to the complex problems of hunger and economic security. For instance, intensive fertilizer use in the midwestern United States has resulted in degraded fisheries in the Gulf of Mexico. Regrettably, a “Green Revolution” of intensive chemical fertilization has been recommended for agriculture in developed and developing countries. These strategies risk alteration of ecosystem processes that may be difficult to restore, especially when applied at broad scales without adequate assessment of impacts. Ecosystem processes may take many years to recover from significant disturbance.
For instance, large-scale forest clearance in the northeastern United States during the 18th and 19th centuries has altered soil texture, dominant vegetation, and nutrient cycling in ways that impact forest productivity in the present day. An appreciation of the importance of ecosystem function in maintenance of productivity, whether in agriculture or forestry, is needed in conjunction with plans for restoration of essential processes. Improved knowledge of ecosystem function will help to achieve long-term sustainability and stability in the poorest parts of the world.
Operation
Biomass productivity is one of the most apparent and economically important ecosystem functions. Biomass accumulation begins at the cellular level via photosynthesis. Photosynthesis requires water and consequently global patterns of annual biomass production are correlated with annual precipitation. Amounts of productivity are also dependent on the overall capacity of plants to capture sunlight which is directly correlated with plant leaf area and N content.
Net primary productivity (NPP) is the primary measure of biomass accumulation within an ecosystem. Net primary productivity can be calculated by a simple formula where the total amount of productivity is adjusted for total productivity losses through maintenance of biological processes:
NPP = GPP − Rproducer
where GPP is gross primary productivity and Rproducer is photosynthate (carbon) lost via producer cellular respiration.
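A one-line numerical illustration of the formula (the productivity values are invented for the example):

```python
# Net primary productivity as the carbon left after producer respiration.
gpp = 2000.0          # gross primary productivity, g C per m^2 per year
r_producer = 1200.0   # carbon respired by the producers themselves
npp = gpp - r_producer
print(npp)            # 800.0 g C m^-2 yr^-1 available as new biomass
```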
NPP is difficult to measure, but a technique known as eddy covariance has shed light on how natural ecosystems influence the atmosphere. Measurements at Mauna Loa, Hawaii, from 1987 to 1990 show both seasonal and annual changes in CO2 concentration: the concentration steadily increased, but within-year variation has been greater than the annual increase since measurements began in 1957.
These variations were thought to be due to seasonal uptake of CO2 during summer months. A newly developed technique for assessing ecosystem NPP has confirmed that this seasonal variation is driven by seasonal changes in CO2 uptake by vegetation. This has led many scientists and policy makers to speculate that ecosystems can be managed to ameliorate problems with global warming. This type of management may include reforesting or altering forest harvest schedules for many parts of the world.
Decomposition and nutrient cycling
Decomposition and nutrient cycling are fundamental to ecosystem biomass production. Most natural ecosystems are nitrogen (N) limited and biomass production is closely correlated with N turnover.
Typically, external input of nutrients is very low and efficient recycling of nutrients maintains productivity. Decomposition of plant litter accounts for the majority of nutrients recycled through ecosystems. Rates of plant litter decomposition are highly dependent on litter quality; a high concentration of phenolic compounds, especially lignin, in plant litter has a retarding effect on litter decomposition. More complex C compounds are decomposed more slowly and may take many years to break down completely. Decomposition is typically described with exponential decay and has been related to the mineral concentrations, especially manganese, in the leaf litter.
Globally, rates of decomposition are mediated by litter quality and climate. Ecosystems dominated by plants with low lignin concentration often have rapid rates of decomposition and nutrient cycling (Chapin et al. 1982). Simple carbon (C) containing compounds are preferentially metabolized by decomposer microorganisms, which results in rapid initial rates of decomposition; these dynamics are conventionally summarized by models that assume a constant fractional rate of decay, the so-called "k" value. In addition to litter quality and climate, the activity of soil fauna is very important.
However, such models do not reflect the simultaneous linear and non-linear decay processes which likely occur during decomposition. For instance, proteins, sugars and lipids decompose exponentially, but lignin decays at a more linear rate. Thus, litter decay is inaccurately predicted by simplistic models.
A simple alternative model shows significantly more rapid early decomposition than the standard constant-k model. Better understanding of decomposition models is an important research area of ecosystem ecology because this process is closely tied to nutrient supply and the overall capacity of ecosystems to sequester CO2 from the atmosphere.
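The contrast between a constant-k model and a model with separately decaying pools can be sketched as follows (the rate constants and pool sizes are illustrative assumptions, not fitted values):

```python
import numpy as np

# Single-exponential litter decay versus a two-pool model in which labile
# compounds (sugars, proteins) decay quickly and lignin-like compounds
# decay slowly. Mass remaining is expressed as a percentage.
t = np.arange(0, 6)                      # years
single_k = 100.0 * np.exp(-0.5 * t)      # one pool, k = 0.5 per year

labile, resistant = 60.0, 40.0           # assumed initial mass split (%)
two_pool = labile * np.exp(-2.0 * t) + resistant * np.exp(-0.1 * t)

for yr, s, d in zip(t, single_k, two_pool):
    print(f"year {yr}: single-k {s:5.1f}%  two-pool {d:5.1f}%")
```

The two-pool curve drops faster at first and then flattens, matching the rapid initial decomposition and slow lignin breakdown described above.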
Trophic dynamics
Trophic dynamics refers to the process of energy and nutrient transfer between organisms. Trophic dynamics is an important part of the structure and function of ecosystems. A classic energy-flow study of an ecosystem at Silver Springs, Florida, illustrates the pattern: energy gained by primary producers (plants, P) is consumed by herbivores (H), which are consumed by carnivores (C), which are themselves consumed by “top carnivores” (TC).
One of the most obvious patterns is that as one moves up to higher trophic levels (i.e. from plants to top carnivores) the total amount of energy decreases. Plants exert a “bottom-up” control on the energy structure of ecosystems by determining the total amount of energy that enters the system.
However, predators can also influence the structure of lower trophic levels from the top down. These influences can dramatically shift dominant species in terrestrial and marine systems. The interplay and relative strength of top-down vs. bottom-up controls on ecosystem structure and function is an important area of research in the greater field of ecology.
Trophic dynamics can strongly influence rates of decomposition and nutrient cycling in time and in space. For example, herbivory can increase litter decomposition and nutrient cycling via direct changes in litter quality and altered dominant vegetation. Insect herbivory has been shown to increase rates of decomposition and nutrient turnover due to changes in litter quality and increased frass inputs.
However, insect outbreak does not always increase nutrient cycling. Stadler showed that C-rich honeydew produced during aphid outbreaks can result in increased N immobilization by soil microbes, thus slowing down nutrient cycling and potentially limiting biomass production. North Atlantic marine ecosystems have been greatly altered by overfishing of cod. Cod stocks crashed in the 1990s, which resulted in increases in their prey such as shrimp and snow crab. Human intervention in ecosystems has resulted in dramatic changes to ecosystem structure and function. These changes are occurring rapidly and have unknown consequences for economic security and human well-being.
Applications and importance
Lessons from two Central American cities
The biosphere has been greatly altered by the demands of human societies. Ecosystem ecology plays an important role in understanding and adapting to the most pressing current environmental problems. Restoration ecology and ecosystem management are closely associated with ecosystem ecology. Restoring highly degraded resources depends on integration of functional mechanisms of ecosystems.
Without these functions intact, economic value of ecosystems is greatly reduced and potentially dangerous conditions may develop in the field. For example, areas within the mountainous western highlands of Guatemala are more susceptible to catastrophic landslides and crippling seasonal water shortages due to loss of forest resources. In contrast, cities such as Totonicapán that have preserved forests through strong social institutions have greater local economic stability and overall greater human well-being.
This situation is striking considering that these areas are close to each other, the majority of inhabitants are of Mayan descent, and the topography and overall resources are similar. This is a case of two groups of people managing resources in fundamentally different ways. Ecosystem ecology provides the basic science needed to avoid degradation and to restore ecosystem processes that provide for basic human needs.
See also
Biogeochemistry
Community ecology
Earth system science
Holon (philosophy)
Landscape ecology
Systems ecology
MuSIASEM
References
Systems ecology
Global natural environment
Ecological processes
Ecosystems | Ecosystem ecology | [
"Physics",
"Biology",
"Environmental_science"
] | 2,503 | [
"Physical phenomena",
"Symbiosis",
"Earth phenomena",
"Systems ecology",
"Ecological processes",
"Ecosystems",
"Environmental social science"
] |
1,765,718 | https://en.wikipedia.org/wiki/Kirpan | The kirpan (; pronunciation: [kɪɾpaːn]) is a blade that Khalsa Sikhs are required to wear as part of their religious uniform, as prescribed by the Sikh Code of Conduct. Traditionally, the kirpan was a full-sized talwar sword around 76 cm (30 inches) in length; however, British colonial policies and laws introduced in the 19th century reduced the length of the blade, and in the modern day, the kirpan is typically manifested as a dagger or knife. According to the Sikh Code of Conduct, "The length of the sword to be worn is not prescribed". It is part of a religious commandment given by Guru Gobind Singh in 1699, founding the Khalsa order and introducing the five articles of faith (the five Ks) which must be worn at all times.
Etymology
The Punjabi word ਕਿਰਪਾਨ, kirpān, has a folk etymology with two roots: kirpa, meaning "mercy", "grace", "compassion" or "kindness"; and aanaa, meaning "honor", "grace" or "dignity". It is derived from or related to Sanskrit कृपाण (kṛpāṇa, “sword, dagger, sacrificial knife”), ultimately from the Proto-Indo-European stem *kerp-, from *(s)ker, meaning "to cut".
Purpose
Sikhs are expected to embody the qualities of a Sant Sipahi or "saint-soldier", showing no fear on the battlefield and treating defeated enemies humanely. The Bhagat further defines the qualities of a sant sipahi as one who is "truly brave...who fights for the deprived".
Kirpans are curved and have a single cutting edge that can be sharp or blunt, which is up to the religious convictions of the wearer. They vary in size, and a Sikh who has undergone the Amrit Sanskar ceremony of initiation may carry more than one; kirpans must be made of steel or iron.
Symbolism
The Kirpan represents bhagauti, meaning "primal divine power".
History
Sikhism was founded in the 15th century in the Punjab region of Early-Modern India. At the time of its founding, this culturally rich region was governed by the Mughal Empire. During the time of the founder of the Sikh faith and its first guru, Guru Nanak, Sikhism flourished as a counter to both the prevalent Hindu and Muslim teachings. The Mughal emperor Akbar focused on religious tolerance. His relationship with the Sikh Gurus was cordial.
The relationship between the Sikhs and Akbar's successor Jahangir was not friendly. Later Mughal rulers reinstated shari'a traditions of jizya, a poll tax on non-Muslims. The Guru Arjan Dev, the fifth guru, refused to remove references to Muslim and Hindu teachings in the Adi Granth and was summoned and executed.
This incident is seen as a turning point in Sikh history, leading to the first instance of militarization of Sikhs under Guru Arjan Dev's son Guru Hargobind. Guru Arjan Dev explained to the five Sikhs who accompanied him to Lahore, that Guru Hargobind has to build a defensive army to protect the people. Guru Hargobind trained in shashtar vidya, a form of martial arts that became prevalent among the Sikhs. He first conceptualized the idea of the kirpan through the notion of Sant Sipahi, or "saint soldiers".
The relationship between the Sikhs and the Mughals further deteriorated following the execution of the ninth Guru, Tegh Bahadur, by Aurangzeb, who was highly intolerant of Sikhs, partially driven by his desire to impose Islamic law. Following the executions of their leaders and facing increasing persecution, the Sikhs officially adopted militarization for self-protection by later creating the Khalsa; the executions also prompted formalization of various aspects of the Sikh faith. The tenth and final guru, Guru Gobind Singh, formally included the kirpan as a mandatory article of faith for all baptised Sikhs, making it a duty for Sikhs to defend the needy and the oppressed, and to defend righteousness and freedom of expression.
Legality
In modern times there has been debate about allowing Sikhs to carry a kirpan that falls under prohibitions on bladed weapons, with some countries allowing Sikhs a dispensation.
Other issues not strictly of legality arise, such as whether to allow carrying of kirpans on commercial aircraft or into areas where security is enforced.
Australia
In May 2021, the state of New South Wales imposed a ban on bringing any knives, including kirpans, onto school grounds after a 14-year-old boy allegedly stabbed a 16-year-old boy with his kirpan in a school in Sydney's north-west on 6 May. After members of Sydney's Sikh community spoke out and defended their children's rights to bring religious items to school, the state's Department of Education reversed this decision in August 2021 and implemented new guidelines around the bringing of kirpans with the following conditions:
Kirpans must be smaller than in length and must have no sharp points or edges
Kirpans must only be worn under clothing
Kirpans must be removed during sports
In August 2023, the state of Queensland repealed a previous ban on bringing knives to schools and other public places after Australian Sikh Kamaljit Kaur Athwal took the Queensland state government to court in 2022. The Supreme Court of Queensland found that the ban, which was stated in section 55 of the Weapons Act 1990 (Qld), contravened the Racial Discrimination Act 1975 (Cth).
Belgium
On 12 October 2009, the Antwerp court of appeal declared carrying a kirpan a religious symbol, overturning a €550 fine from a lower court for "carrying a freely accessible weapon without demonstrating a legitimate reason".
Canada
In most public places in Canada a kirpan is allowed, although there have been some court cases regarding carrying on school premises. In the 2006 Supreme Court of Canada decision of Multani v. Commission scolaire Marguerite‑Bourgeoys the court held that the banning of the kirpan in a school environment offended Canada's Charter of Rights and Freedoms, and that the restriction could not be upheld under s. 1 of the Charter, as per R. v. Oakes. The issue started when a 12-year-old schoolboy dropped a 20 cm (8-inch) long kirpan in school. School staff and parents were very concerned, and the student was required to attend school under police supervision until the court decision was reached. A student is allowed to have a kirpan on his person if it is sealed and secured.
In September 2008, Montreal police announced that a 13-year-old student was to be charged after he allegedly threatened another student with his kirpan. The court found the student not guilty of assault with the kirpan, but guilty of threatening his schoolmates, and he was granted an absolute discharge on 15 April 2009.
On 9 February 2011, the National Assembly of Quebec unanimously voted to ban kirpans from the provincial parliament buildings. However, despite opposition from the Bloc Québécois, it was voted that the kirpan be allowed in federal parliamentary buildings.
As of 27 November 2017, Transport Canada has updated its Prohibited Items list to allow Sikhs to wear kirpans smaller than in length on all domestic and international flights (except to the United States).
Today, many Khalsa Sikhs in Canada freely wear their kirpans in public. An example of this is Canadian politician Jagmeet Singh, who wears his kirpan.
Denmark
On 24 October 2006, the Eastern High Court of Denmark upheld the earlier ruling of the Copenhagen City Court that the wearing of a kirpan by a Sikh was illegal, becoming the first country in the world to pass such a ruling. Ripudaman Singh, who now works as a scientist, was earlier convicted by the City Court of breaking the law by publicly carrying a knife. He was sentenced to a 3,000 kroner fine or six days' imprisonment. Though the High Court quashed this sentence, it held that the carrying of a kirpan by a Sikh broke the law. The judge stated that "after all the information about the accused, the reason for the accused to possess a knife and the other circumstances of the case, such exceptional extenuating circumstances are found, that the punishment should be dropped, cf. Penal Code § 83, 2nd period."
Danish law allows carrying of knives (longer than 6 centimeters and non-foldable) in public places if it is for any purpose recognized as valid, including work-related, recreation, etc. The High Court did not find religion to be a valid reason for carrying a knife. It stated that "for these reasons, as stated by the City Court, it is agreed that the circumstance of the accused carrying the knife as a Sikh, cannot be regarded as a similarly recognisable purpose, included in the decision for the exceptions in weapon law § 4, par. 1, 1st period, second part."
India
Sikhism originated in the Indian subcontinent during the Mughal era and a majority of the Sikh population lives in present-day India, where they form around 2% of its population.
Article 25 of the Indian Constitution deems the carrying of a kirpan by Sikhs to be included in the profession of the Sikh religion and not illegal. Sikhs are allowed to carry the kirpan on board domestic flights in India.
Italy
In 2015 an amritdhari Sikh was fined in the Lombard town of Goito, in Mantua province for carrying a kirpan. In 2017 Italy's higher appeal court, the Corte di Cassazione upheld the fine. Media reports have interpreted the sentence as instituting a generalized ban on the kirpan. Amritsar Lok Sabha MP Gurjeet Singh Aulja met with Italian diplomats and was assured no generalized ban on kirpans is operative, and that the case had only specific relevance to a singular instance and carried no general applicability.
Sweden
Swedish law has a ban on "street weapons" in public places that includes knives unless used for recreation (for instance fishing) or profession (for instance a carpenter). Carrying some smaller knives, typically folding pocket knives, is allowed, so that smaller kirpans may be within the law.
United Kingdom
England and Wales
As a bladed article, possession of a kirpan without valid reason in a public place would be illegal under section 139 of the Criminal Justice Act 1988. However, there is a specific defence for a person charged to prove that he carries it for "religious reasons". There is an identical defence to the similar offence (section 139A) which relates to carrying bladed articles on school grounds. The official list of prohibited items at the London 2012 Summer Olympics venues prohibited all kinds of weapons, but explicitly allowed the kirpan.
Scotland
Similar provisions exist in Scots law with section 49 of the Criminal Law (Consolidation) (Scotland) Act 1995 making it an offence to possess a bladed or pointed article in a public place. A defence exists under s.49(5)(b) of the act for pointed or bladed articles carried for religious reasons. Section 49A of the same act creates the offence of possessing a bladed or pointed article in a school, with s.49A(4)(c) again creating a defence when the article is carried for religious reasons.
United States
In 1994, the Ninth Circuit held that Sikh students in public school have a right to wear the kirpan. State courts in New York and Ohio have ruled in favor of Sikhs who faced the rare situation of prosecution under anti-weapons statutes for wearing kirpans, "because of the kirpan's religious nature and Sikhs' benign intent in wearing them." In New York City, a compromise was reached with the Board of Education whereby the wearing of the knives was allowed so long as they were secured within the sheaths with adhesives and made impossible to draw. The tightening of air travel security in the 21st century has caused problems for Sikhs carrying kirpans at airports and other checkpoints. As of 2016, the TSA explicitly prohibits the carrying of "religious knives and swords" on one's person or in cabin baggage and requires that they be packed in checked baggage.
In 2008, American Sikh leaders chose not to attend an interfaith meeting with Pope Benedict XVI at the Pope John Paul II Cultural Center in Washington, D.C., because the United States Secret Service would have required them to leave behind the kirpan. The secretary general of the Sikh Council stated: "We have to respect the sanctity of the kirpan, especially in such interreligious gatherings. We cannot undermine the rights and freedoms of religion in the name of security." A spokesman for the Secret Service stated: "We understand the kirpan is a sanctified religious object. But by definition, it's still a weapon. We apply our security policy consistently and fairly."
See also
Gatka
Sant Sipahi
References
External links
Explaining what the Kirpan is to a Non-Sikh.
Press release VDPA Human Rights Conference, Vienna, Austria
Sword in Sikhism
About the 5 K´s
Indian martial arts
Sikh religious clothing
Ceremonial knives
Punjabi words and phrases
Daggers
Edged and bladed weapons
Religious objects
Weapons of India | Kirpan | [
"Physics"
] | 2,759 | [
"Religious objects",
"Physical objects",
"Matter"
] |
1,765,791 | https://en.wikipedia.org/wiki/Orgueil%20%28meteorite%29 | Orgueil is a scientifically important carbonaceous chondrite meteorite that fell in southwestern France in 1864.
History
The Orgueil meteorite fell on May 14, 1864, a few minutes after 20:00 local time, near Orgueil in southern France. About 20 stones fell over an area of 5-10 square kilometres. A specimen of the meteorite was analyzed that same year by François Stanislaus Clöez, professor of chemistry at the Musée d'Histoire Naturelle, who focused on the organic matter found in this meteorite. He wrote that it contained carbon, hydrogen, and oxygen, and its composition was very similar to peat from the Somme valley or to the lignite of Ringkohl near Kassel. An intense scientific discussion ensued, continuing into the 1870s, as to whether the organic matter might have a biological origin.
Curation and distribution
Orgueil specimens are curated by institutions around the world. Given the large total mass, samples are in circulation for nondestructive (and, with sufficient justification, destructive) study and testing.
Source: Grady, M. M. Catalogue of Meteorites, 5th Edition, Cambridge University Press
Composition and classification
Orgueil is one of five known meteorites belonging to the CI chondrite group (see meteorites classification), and is the largest. This group has a composition that is essentially identical to that of the Sun, excluding gaseous elements like hydrogen and helium. Notably, though, the Orgueil meteorite is highly enriched in (volatile) mercury, which is undetectable in the solar photosphere; this is a major driver of the "mercury paradox", namely that mercury abundances in meteorites do not follow the behaviour expected in the solar nebula from mercury's volatility and isotopic ratios.
Because of its extraordinarily primitive composition and relatively large mass, Orgueil is one of the most-studied meteorites. One notable discovery in Orgueil was a high concentration of isotopically anomalous xenon called "xenon-HL". The carrier of this gas is extremely fine-grained diamond dust that is older than the Solar System itself, known as presolar grains.
In 1962, Nagy et al. announced the discovery of 'organised elements' embedded in the Orgueil meteorite that were purportedly biological structures of extraterrestrial origin. These elements were subsequently shown to be either pollen (including that of ragwort) and fungal spores (Fitch & Anders, 1963) that had contaminated the sample, or crystals of the mineral olivine.
Seed capsule hoax
In 1965, a fragment of the Orgueil meteorite, kept in a sealed glass jar in Montauban since its discovery, was found to have a seed capsule embedded in it, whilst the original glassy layer on the outside remained apparently undisturbed. Despite great initial excitement, the seed capsule was shown to be that of a European rush, glued into the fragment and camouflaged using coal dust. The outer "fusion layer" was in fact glue. Whilst the perpetrator is unknown, it is thought that the hoax was aimed at influencing 19th century debate on spontaneous generation by demonstrating the transformation of inorganic to biological matter.
Claim of fossils
Richard B. Hoover of NASA has claimed that the Orgueil meteorite contains fossils, some of which are similar to known terrestrial species. Hoover has previously claimed the existence of fossils in the Murchison meteorite. NASA has formally distanced itself from Hoover's claims and his lack of expert peer-reviews.
See also
Glossary of meteoritics
References
Further reading
Nagy B, Claus G, Hennessy DJ (1962) Organic Particles Embedded in Minerals in Orgueil and Ivuna Carbonaceous Chondrites. Nature 193 (4821) p. 1129
Fitch FW, Anders E (1963) Organized Element - Possible Identification in Orgueil Meteorite. Science 140 (357) p. 1097
Gilmour I, Wright I, Wright J 'Origins of Earth and Life', The Open University, 1997,
External links
The Orgueil meteorite from The Encyclopedia of Astrobiology, Astronomy, and Spaceflight
The Orgueil meteorite hoax
Meteorites found in France
Astrobiology
Hoaxes in science
Hoaxes in France
1864 in France
1860s in science | Orgueil (meteorite) | [
"Astronomy",
"Biology"
] | 877 | [
"Origin of life",
"Speculative evolution",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
1,765,852 | https://en.wikipedia.org/wiki/Matrix%20calculus | In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
Scope
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way.
As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, $f(x_1, x_2, x_3)$, the gradient is given by the vector equation
$$\nabla f = \frac{\partial f}{\partial x_1} \hat{x}_1 + \frac{\partial f}{\partial x_2} \hat{x}_2 + \frac{\partial f}{\partial x_3} \hat{x}_3,$$
where $\hat{x}_i$ represents a unit vector in the $x_i$ direction for $1 \leq i \leq 3$. This type of generalized derivative can be seen as the derivative of a scalar, $f$, with respect to a vector, $\mathbf{x} = \left[x_1\; x_2\; x_3\right]^{\mathrm{T}}$, and its result can be easily collected in vector form.
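A short numerical sketch of collecting partial derivatives into the gradient vector (the test function and step size are arbitrary choices for illustration):

```python
import numpy as np

# Central-difference approximation of the gradient of a scalar function
# f of three variables, collected componentwise into a vector.
def f(x):
    return x[0] * x[1] ** 2 + np.sin(x[2])

def gradient(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        g[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return g

x = np.array([1.0, 2.0, 0.0])
# Analytic gradient is [x[1]**2, 2*x[0]*x[1], cos(x[2])] = [4.0, 4.0, 1.0].
print(gradient(f, x))
```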
More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an $m$-vector of dependent variables, or functions, of $n$ independent variables, we might consider the derivative of the dependent vector with respect to the independent vector. The result could be collected in an $m \times n$ matrix consisting of all of the possible derivative combinations.
There are a total of nine possibilities using scalars, vectors, and matrices. Notice that as we consider higher numbers of components in each of the independent and dependent variables we can be left with a very large number of possibilities. The six kinds of derivatives that can be most neatly organized in matrix form are collected in the following table.
Here, we have used the term "matrix" in its most general sense, recognizing that vectors are simply matrices with one column (and scalars are simply vectors with one row). Moreover, we have used bold letters to indicate vectors and bold capital letters for matrices. This notation is used throughout.
Notice that we could also talk about the derivative of a vector with respect to a matrix, or any of the other unfilled cells in our table. However, these derivatives are most naturally organized in a tensor of rank higher than 2, so that they do not fit neatly into a matrix. In the following three sections we will define each one of these derivatives and relate them to other branches of mathematics. See the layout conventions section for a more detailed table.
Relation to other derivatives
The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as approximating linear mapping.
Usages
Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:
Kalman filter
Wiener filter
Expectation-maximization algorithm for Gaussian mixture
Gradient descent
Notation
The vector and matrix derivatives presented in the sections to follow take full advantage of matrix notation, using a single variable to represent a large number of variables. In what follows we will distinguish scalars, vectors and matrices by their typeface. We will let $\mathcal{M}(n, m)$ denote the space of real $n \times m$ matrices with $n$ rows and $m$ columns. Such matrices will be denoted using bold capital letters: $\mathbf{A}$, $\mathbf{X}$, $\mathbf{Y}$, etc. An element of $\mathcal{M}(n, 1)$, that is, a column vector, is denoted with a boldface lowercase letter: $\mathbf{a}$, $\mathbf{x}$, $\mathbf{y}$, etc. An element of $\mathcal{M}(1, 1)$ is a scalar, denoted with lowercase italic typeface: $a$, $t$, $x$, etc. $\mathbf{X}^{\mathrm{T}}$ denotes matrix transpose, $\operatorname{tr}(\mathbf{X})$ is the trace, and $\det(\mathbf{X})$ or $|\mathbf{X}|$ is the determinant. All functions are assumed to be of differentiability class $C^{1}$ unless otherwise noted. Generally letters from the first half of the alphabet (a, b, c, ...) will be used to denote constants, and from the second half (t, x, y, ...) to denote variables.
NOTE: As mentioned above, there are competing notations for laying out systems of partial derivatives in vectors and matrices, and no standard appears to be emerging yet. The next two introductory sections use the numerator layout convention simply for the purposes of convenience, to avoid overly complicating the discussion. The section after them discusses layout conventions in more detail. It is important to realize the following:
Despite the use of the terms "numerator layout" and "denominator layout", there are actually more than two possible notational choices involved. The reason is that the choice of numerator vs. denominator (or in some situations, numerator vs. mixed) can be made independently for scalar-by-vector, vector-by-scalar, vector-by-vector, and scalar-by-matrix derivatives, and a number of authors mix and match their layout choices in various ways.
The choice of numerator layout in the introductory sections below does not imply that this is the "correct" or "superior" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors. As a result, when working with existing formulas the best policy is probably to identify whichever layout is used and maintain consistency with it, rather than attempting to use the same layout in all situations.
Alternatives
The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. All of the work here can be done in this notation without use of the single-variable matrix notation. However, many problems in estimation theory and other areas of applied mathematics would result in too many indices to properly keep track of, pointing in favor of matrix calculus in those areas. Also, Einstein notation can be very useful in proving the identities presented here (see section on differentiation) as an alternative to typical element notation, which can become cumbersome when the explicit sums are carried around. Note that a matrix can be considered a tensor of rank two.
Derivatives with vectors
Because vectors are matrices with only one column, the simplest matrix derivatives are vector derivatives.
The notations developed here can accommodate the usual operations of vector calculus by identifying the space $M(n,1)$ of $n$-vectors with the Euclidean space $\mathbb{R}^n$, and the scalar $M(1,1)$ is identified with $\mathbb{R}$. The corresponding concept from vector calculus is indicated at the end of each subsection.
NOTE: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
Vector-by-scalar
The derivative of a vector $\mathbf{y} = [y_1\ y_2\ \cdots\ y_m]^{\mathsf{T}}$ by a scalar $x$ is written (in numerator layout notation) as
$$\frac{\partial \mathbf{y}}{\partial x} = \begin{bmatrix} \dfrac{\partial y_1}{\partial x} \\ \dfrac{\partial y_2}{\partial x} \\ \vdots \\ \dfrac{\partial y_m}{\partial x} \end{bmatrix}.$$
In vector calculus the derivative of a vector $\mathbf{y}$ with respect to a scalar $x$ is known as the tangent vector of the vector $\mathbf{y}$, $\frac{\partial \mathbf{y}}{\partial x}$. Notice here that $\mathbf{y} \colon \mathbb{R} \to \mathbb{R}^m$.
Example Simple examples of this include the velocity vector in Euclidean space, which is the tangent vector of the position vector (considered as a function of time). Also, the acceleration is the tangent vector of the velocity.
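The velocity example can be checked numerically. The sketch below is illustrative and not from the original article; the trajectory is an arbitrary choice.

```python
# The velocity vector as the vector-by-scalar derivative of a position
# vector y(t): in numerator layout, a column of component-wise derivatives.
import numpy as np

def y(t):                       # position on a unit circle, traversed at unit rate
    return np.array([np.cos(t), np.sin(t)])

t, h = 0.7, 1e-6
velocity = (y(t + h) - y(t - h)) / (2 * h)        # central difference for dy/dt
print(np.allclose(velocity, [-np.sin(t), np.cos(t)], atol=1e-8))
```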
Scalar-by-vector
The derivative of a scalar $y$ by a vector $\mathbf{x} = [x_1\ x_2\ \cdots\ x_n]^{\mathsf{T}}$ is written (in numerator layout notation) as
$$\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial y}{\partial x_1} & \dfrac{\partial y}{\partial x_2} & \cdots & \dfrac{\partial y}{\partial x_n} \end{bmatrix}.$$
In vector calculus, the gradient of a scalar field $f$ (whose independent coordinates are the components of $\mathbf{x}$) is the transpose of the derivative of a scalar by a vector:
$$\nabla f = \left(\frac{\partial f}{\partial \mathbf{x}}\right)^{\mathsf{T}}.$$
By example, in physics, the electric field is the negative vector gradient of the electric potential.
The directional derivative of a scalar function $f(\mathbf{x})$ of the space vector $\mathbf{x}$ in the direction of the unit vector $\mathbf{u}$ (represented in this case as a column vector) is defined using the gradient as follows:
$$\nabla_{\mathbf{u}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{u}.$$
Using the notation just defined for the derivative of a scalar with respect to a vector we can re-write the directional derivative as
$$\nabla_{\mathbf{u}} f = \frac{\partial f}{\partial \mathbf{x}}\,\mathbf{u}.$$
This kind of notation is convenient when proving product rules and chain rules, since they come out looking similar to the familiar rules for the scalar derivative.
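The identity can be verified numerically. A small sketch follows; the scalar field, evaluation point and direction are arbitrary choices, not from the original article.

```python
# Check grad_u f(x) = (df/dx) u against the limit definition of the
# directional derivative for a sample scalar field.
import numpy as np

f = lambda x: x[0]**2 * x[1] + np.sin(x[2])
x0 = np.array([1.0, 2.0, 0.5])
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)                  # unit direction

grad = np.array([2*x0[0]*x0[1], x0[0]**2, np.cos(x0[2])])   # df/dx as a row
h = 1e-6
numeric = (f(x0 + h*u) - f(x0 - h*u)) / (2*h)               # limit definition
print(np.allclose(grad @ u, numeric, atol=1e-8))
```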
Vector-by-vector
Each of the previous two cases can be considered as an application of the derivative of a vector with respect to a vector, using a vector of size one appropriately. Similarly we will find that the derivatives involving matrices will reduce to derivatives involving vectors in a corresponding way.
The derivative of a vector function (a vector whose components are functions) $\mathbf{y} = [y_1\ y_2\ \cdots\ y_m]^{\mathsf{T}}$, with respect to an input vector, $\mathbf{x} = [x_1\ x_2\ \cdots\ x_n]^{\mathsf{T}}$, is written (in numerator layout notation) as
$$\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_m}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_n} \end{bmatrix}.$$
In vector calculus, the derivative of a vector function $\mathbf{y}$ with respect to a vector $\mathbf{x}$ whose components represent a space is known as the pushforward (or differential), or the Jacobian matrix.
The pushforward along a vector function $\mathbf{f}$ with respect to vector $\mathbf{v}$ in $\mathbb{R}^n$ is given by $d\,\mathbf{f}(\mathbf{v}) = \dfrac{\partial \mathbf{f}}{\partial \mathbf{v}}\, d\mathbf{v}$.
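A finite-difference sketch of a numerator-layout Jacobian follows. It is illustrative only; the test function is an arbitrary choice.

```python
# Jacobian in numerator layout: row i, column j holds the partial of f_i
# with respect to x_j, estimated by central differences.
import numpy as np

def f(x):
    return np.array([x[0]*x[1], np.exp(x[1]), x[0] + x[2]**2])

def jacobian(f, x, h=1e-6):
    n = x.size
    J = np.empty((f(x).size, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2*h)   # column j approximates df/dx_j
    return J

x0 = np.array([1.0, 2.0, 3.0])
expected = np.array([[2.0, 1.0, 0.0],
                     [0.0, np.exp(2.0), 0.0],
                     [1.0, 0.0, 6.0]])
print(np.allclose(jacobian(f, x0), expected, atol=1e-5))
```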
Derivatives with matrices
There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors.
Note: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
Matrix-by-scalar
The derivative of a matrix function $\mathbf{Y}$ by a scalar $x$ is known as the tangent matrix and is given (in numerator layout notation) by
$$\frac{\partial \mathbf{Y}}{\partial x} = \begin{bmatrix} \dfrac{\partial y_{11}}{\partial x} & \cdots & \dfrac{\partial y_{1n}}{\partial x} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y_{m1}}{\partial x} & \cdots & \dfrac{\partial y_{mn}}{\partial x} \end{bmatrix}.$$
Scalar-by-matrix
The derivative of a scalar function $y$, with respect to a $p \times q$ matrix $\mathbf{X}$ of independent variables, is given (in numerator layout notation) by
$$\frac{\partial y}{\partial \mathbf{X}} = \begin{bmatrix} \dfrac{\partial y}{\partial x_{11}} & \cdots & \dfrac{\partial y}{\partial x_{p1}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial y}{\partial x_{1q}} & \cdots & \dfrac{\partial y}{\partial x_{pq}} \end{bmatrix},$$
a $q \times p$ matrix whose $(i,j)$ element is $\partial y / \partial x_{ji}$.
Important examples of scalar functions of matrices include the trace of a matrix and the determinant.
In analog with vector calculus this derivative is often written as the following:
$$\nabla_{\mathbf{X}}\, y(\mathbf{X}) = \frac{\partial y(\mathbf{X})}{\partial \mathbf{X}}.$$
Also in analog with vector calculus, the directional derivative of a scalar $f(\mathbf{X})$ of a matrix $\mathbf{X}$ in the direction of matrix $\mathbf{Y}$ is given by
$$\nabla_{\mathbf{Y}} f = \operatorname{tr}\left(\frac{\partial f}{\partial \mathbf{X}}\,\mathbf{Y}\right).$$
It is the gradient matrix, in particular, that finds many uses in minimization problems in estimation theory, particularly in the derivation of the Kalman filter algorithm, which is of great importance in the field.
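The gradient matrix can be checked entry by entry. The following is an illustrative sketch, not from the original article; for the linear function $f(\mathbf{X}) = \operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{X})$ the denominator-layout gradient is $\mathbf{A}$, and all names and shapes below are arbitrary.

```python
# Entry-wise gradient matrix of f(X) = tr(A^T X): in denominator layout,
# entry (i, j) is df/dX_ij, which here equals A_ij.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))
X = rng.normal(size=(3, 4))
f = lambda X: np.trace(A.T @ X)

h = 1e-6
G = np.empty_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X); E[i, j] = h
        G[i, j] = (f(X + E) - f(X - E)) / (2*h)   # central difference for df/dX_ij
print(np.allclose(G, A, atol=1e-8))
```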
Other matrix derivatives
The three types of derivatives that have not been considered are those involving vectors-by-matrices, matrices-by-vectors, and matrices-by-matrices. These are not as widely considered and a notation is not widely agreed upon.
Layout conventions
This section discusses the similarities and differences between notational conventions that are used in the various fields that take advantage of matrix calculus. Although there are largely two consistent conventions, some authors find it convenient to mix the two conventions in forms that are discussed below. After this section, equations will be listed in both competing forms separately.
The fundamental issue is that the derivative of a vector with respect to a vector, i.e. $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$, is often written in two competing ways. If the numerator $\mathbf{y}$ is of size $m$ and the denominator $\mathbf{x}$ of size $n$, then the result can be laid out as either an $m \times n$ matrix or an $n \times m$ matrix, i.e. the $m$ elements of $\mathbf{y}$ laid out in rows and the $n$ elements of $\mathbf{x}$ laid out in columns, or vice versa. This leads to the following possibilities:
Numerator layout, i.e. lay out according to $\mathbf{y}$ and $\mathbf{x}^{\mathsf{T}}$ (i.e. contrarily to $\mathbf{x}$). This is sometimes known as the Jacobian formulation. This corresponds to the $m \times n$ layout in the previous example, which means that the row number of $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ equals the size of the numerator $\mathbf{y}$ and the column number equals the size of $\mathbf{x}^{\mathsf{T}}$.
Denominator layout, i.e. lay out according to $\mathbf{y}^{\mathsf{T}}$ and $\mathbf{x}$ (i.e. contrarily to $\mathbf{y}$). This is sometimes known as the Hessian formulation. Some authors term this layout the gradient, in distinction to the Jacobian (numerator layout), which is its transpose. (However, "gradient" more commonly means the derivative $\frac{\partial y}{\partial \mathbf{x}}$, regardless of layout.) This corresponds to the $n \times m$ layout in the previous example, which means that the row number of $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ equals the size of $\mathbf{x}$ (the denominator).
A third possibility sometimes seen is to insist on writing the derivative as $\frac{\partial \mathbf{y}}{\partial \mathbf{x}^{\mathsf{T}}}$ (i.e. the derivative is taken with respect to the transpose of $\mathbf{x}$) and follow the numerator layout. This makes it possible to claim that the matrix is laid out according to both numerator and denominator. In practice this produces results the same as the numerator layout.
When handling the gradient $\frac{\partial y}{\partial \mathbf{x}}$ and the opposite case $\frac{\partial \mathbf{y}}{\partial x}$ we have the same issues. To be consistent, we should do one of the following:
If we choose numerator layout for $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ we should lay out the gradient $\frac{\partial y}{\partial \mathbf{x}}$ as a row vector, and $\frac{\partial \mathbf{y}}{\partial x}$ as a column vector.
If we choose denominator layout for $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ we should lay out the gradient $\frac{\partial y}{\partial \mathbf{x}}$ as a column vector, and $\frac{\partial \mathbf{y}}{\partial x}$ as a row vector.
In the third possibility above, we write $\frac{\partial y}{\partial \mathbf{x}^{\mathsf{T}}}$ and $\frac{\partial \mathbf{y}}{\partial x}$ and use numerator layout.
Not all math textbooks and papers are consistent in this respect throughout. That is, sometimes different conventions are used in different contexts within the same book or paper. For example, some choose denominator layout for gradients (laying them out as column vectors), but numerator layout for the vector-by-vector derivative $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$.
Similarly, when it comes to scalar-by-matrix derivatives $\frac{\partial y}{\partial \mathbf{X}}$ and matrix-by-scalar derivatives $\frac{\partial \mathbf{Y}}{\partial x}$, consistent numerator layout lays out according to $\mathbf{X}^{\mathsf{T}}$ and $\mathbf{Y}$, while consistent denominator layout lays out according to $\mathbf{X}$ and $\mathbf{Y}^{\mathsf{T}}$. In practice, however, following a denominator layout for $\frac{\partial \mathbf{Y}}{\partial x}$, and laying the result out according to $\mathbf{Y}^{\mathsf{T}}$, is rarely seen because it makes for ugly formulas that do not correspond to the scalar formulas. As a result, the following layouts can often be found:
Consistent numerator layout, which lays out $\frac{\partial \mathbf{Y}}{\partial x}$ according to $\mathbf{Y}$ and $\frac{\partial y}{\partial \mathbf{X}}$ according to $\mathbf{X}^{\mathsf{T}}$.
Mixed layout, which lays out $\frac{\partial \mathbf{Y}}{\partial x}$ according to $\mathbf{Y}$ and $\frac{\partial y}{\partial \mathbf{X}}$ according to $\mathbf{X}$.
Use the notation $\frac{\partial y}{\partial \mathbf{X}^{\mathsf{T}}}$, with results the same as consistent numerator layout.
In the following formulas, we handle the five possible combinations $\frac{\partial y}{\partial x}$, $\frac{\partial y}{\partial \mathbf{x}}$, $\frac{\partial \mathbf{y}}{\partial x}$, $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ and $\frac{\partial y}{\partial \mathbf{X}}$ separately. We also handle cases of scalar-by-scalar derivatives that involve an intermediate vector or matrix. (This can arise, for example, if a multi-dimensional parametric curve is defined in terms of a scalar variable, and then a derivative of a scalar function of the curve is taken with respect to the scalar that parameterizes the curve.) For each of the various combinations, we give numerator-layout and denominator-layout results, except in the cases above where denominator layout rarely occurs. In cases involving matrices where it makes sense, we give numerator-layout and mixed-layout results. As noted above, cases where vector and matrix denominators are written in transpose notation are equivalent to numerator layout with the denominators written without the transpose.
Keep in mind that various authors use different combinations of numerator and denominator layouts for different types of derivatives, and there is no guarantee that an author will consistently use either numerator or denominator layout for all types. Match up the formulas below with those quoted in the source to determine the layout used for that particular type of derivative, but be careful not to assume that derivatives of other types necessarily follow the same kind of layout.
When taking derivatives with an aggregate (vector or matrix) denominator in order to find a maximum or minimum of the aggregate, it should be kept in mind that using numerator layout will produce results that are transposed with respect to the aggregate. For example, in attempting to find the maximum likelihood estimate of a multivariate normal distribution using matrix calculus, if the domain is a k×1 column vector, then the result using the numerator layout will be in the form of a 1×k row vector. Thus, either the results should be transposed at the end or the denominator layout (or mixed layout) should be used.
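A tiny sketch of the shape difference follows; it is illustrative only, using the gradient of $\mathbf{a}\cdot\mathbf{x}$ with an arbitrary constant vector.

```python
# The same scalar-by-vector derivative in the two layouts differs only by
# a transpose: numerator layout is a 1 x n row, denominator layout an n x 1
# column. Here d(a.x)/dx is a^T (numerator) or a (denominator).
import numpy as np

a = np.array([1.0, 2.0, 3.0])
grad_denominator = a.reshape(-1, 1)     # column vector, shape (3, 1)
grad_numerator = grad_denominator.T     # row vector, shape (1, 3)
print(grad_numerator.shape, grad_denominator.shape)
```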
{|class="wikitable"
|+ Result of differentiating various kinds of aggregates with other kinds of aggregates
! colspan=2 rowspan=2 |
! colspan=2 | Scalar
! colspan=2 | Column vector (size )
! colspan=2 | Matrix (size )
|-
! Notation !! Type
! Notation !! Type
! Notation !! Type
|-
! rowspan=2 | Scalar
! Numerator
| rowspan=2 style="text-align:center;" |
| rowspan=2 | Scalar
| rowspan=2 style="text-align:center;" |
| Size- column vector
| rowspan=2 style="text-align:center;" |
| matrix
|-
! Denominator
| Size-m row vector
|
|-
! rowspan=2 | Column vector (size )
! Numerator
| rowspan=2 style="text-align:center;" |
| Size- row vector
| rowspan=2 style="text-align:center;" |
| matrix
| rowspan=2 style="text-align:center;" |
| rowspan=2 |
|-
! Denominator
| Size- column vector
| matrix
|-
! rowspan=2 | Matrix (size )
! Numerator
| rowspan=2 style="text-align:center;" |
| matrix
| rowspan=2 style="text-align:center;" |
| rowspan=2 |
| rowspan=2 style="text-align:center;" |
| rowspan=2 |
|-
! Denominator
| matrix
|}
The results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
Numerator-layout notation
Using numerator-layout notation, we have:
$$\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial y}{\partial x_1} & \cdots & \dfrac{\partial y}{\partial x_n} \end{bmatrix}, \qquad \frac{\partial \mathbf{y}}{\partial x} = \begin{bmatrix} \dfrac{\partial y_1}{\partial x} \\ \vdots \\ \dfrac{\partial y_m}{\partial x} \end{bmatrix},$$
$$\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right)_{ij} = \frac{\partial y_i}{\partial x_j}, \qquad \left(\frac{\partial y}{\partial \mathbf{X}}\right)_{ij} = \frac{\partial y}{\partial x_{ji}}.$$
The following definitions are only provided in numerator-layout notation:
$$\left(\frac{\partial \mathbf{Y}}{\partial x}\right)_{ij} = \frac{\partial y_{ij}}{\partial x}, \qquad (d\mathbf{X})_{ij} = dx_{ij}.$$
Denominator-layout notation
Using denominator-layout notation, we have:
$$\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial y}{\partial x_1} \\ \vdots \\ \dfrac{\partial y}{\partial x_n} \end{bmatrix}, \qquad \frac{\partial \mathbf{y}}{\partial x} = \begin{bmatrix} \dfrac{\partial y_1}{\partial x} & \cdots & \dfrac{\partial y_m}{\partial x} \end{bmatrix},$$
$$\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right)_{ij} = \frac{\partial y_j}{\partial x_i}, \qquad \left(\frac{\partial y}{\partial \mathbf{X}}\right)_{ij} = \frac{\partial y}{\partial x_{ij}}.$$
Identities
As noted above, in general, the results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
To help make sense of all the identities below, keep in mind the most important rules: the chain rule, product rule and sum rule. The sum rule applies universally, and the product rule applies in most of the cases below, provided that the order of matrix products is maintained, since matrix products are not commutative. The chain rule applies in some of the cases, but unfortunately does not apply in matrix-by-scalar derivatives or scalar-by-matrix derivatives (in the latter case, mostly involving the trace operator applied to matrices). In the latter case, the product rule can't quite be applied directly, either, but the equivalent can be done with a bit more work using the differential identities.
The following identities adopt the following conventions:
the scalars, $a$, $b$, $c$, $d$, and $e$ are constant with respect to, and the scalars, $u$ and $v$ are functions of one of $x$, $\mathbf{x}$, or $\mathbf{X}$;
the vectors, $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, $\mathbf{d}$, and $\mathbf{e}$ are constant with respect to, and the vectors, $\mathbf{u}$ and $\mathbf{v}$ are functions of one of $x$, $\mathbf{x}$, or $\mathbf{X}$;
the matrices, $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, $\mathbf{D}$, and $\mathbf{E}$ are constant with respect to, and the matrices, $\mathbf{U}$ and $\mathbf{V}$ are functions of one of $x$, $\mathbf{x}$, or $\mathbf{X}$.
Vector-by-vector identities
This is presented first because all of the operations that apply to vector-by-vector differentiation apply directly to vector-by-scalar or scalar-by-vector differentiation simply by reducing the appropriate vector in the numerator or denominator to a scalar.
{|class="wikitable" style="text-align: center;"
|+ Identities: vector-by-vector
! scope="col" width="150" | Condition
! scope="col" width="10" | Expression
! scope="col" width="100" | Numerator layout, i.e. by and
! scope="col" width="100" | Denominator layout, i.e. by and
|-
| is not a function of || ||colspan=2|
|-
| || || colspan=2|
|-
| is not a function of || || ||
|-
| is not a function of || || ||
|-
| is not a function of , ||
| colspan=2|
|-
| , is not a function of || || ||
|-
|, || || ||
|-
| is not a function of , || || ||
|-
| , ||
| colspan=2|
|-
| || || ||
|-
| || || ||
|}
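The first non-trivial entry above can be spot-checked numerically. A minimal sketch, with an arbitrary matrix and point:

```python
# Spot-check of d(Ax)/dx = A in numerator layout: the Jacobian of a linear
# map equals the matrix itself.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3))
x = rng.normal(size=3)

h = 1e-6
J = np.column_stack([
    (A @ (x + h*e) - A @ (x - h*e)) / (2*h)   # column j: d(Ax)/dx_j
    for e in np.eye(3)
])
print(np.allclose(J, A))
```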
Scalar-by-vector identities
The fundamental identities are placed above the thick black line.
{|class="wikitable" style="text-align: center;"
|+ Identities: scalar-by-vector
! scope="col" width="150" | Condition
! scope="col" width="200" | Expression
! scope="col" width="200" | Numerator layout,i.e. by ; result is row vector
! scope="col" width="200" | Denominator layout,i.e. by ; result is column vector
|-
| is not a function of ||
| ||
|-
| is not a function of , ||
| colspan=2|
|-
| , ||
| colspan=2|
|-
| , ||
| colspan=2|
|-
| ||
| colspan=2|
|-
| ||
| colspan=2|
|-
| ,
|
|
in numerator layout
|
in denominator layout
|-
| , , is not a function of
|
|
in numerator layout
|
in denominator layout
|-
|
|
|
| , the Hessian matrix
|- style="border-top: 3px solid;"
| is not a function of || || ||
|-
| is not a function of is not a function of || || ||
|-
| is not a function of || || ||
|-
| is not a function of is symmetric || || ||
|-
| is not a function of || || colspan=2|
|-
| is not a function of is symmetric || || colspan=2|
|-
| || || ||
|-
| is not a function of ,
|
|
in numerator layout
|
in denominator layout
|-
| , are not functions of || || ||
|-
| , , , , are not functions of || || ||
|-
| is not a function of || ||||
|}
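The widely used quadratic-form entry can be verified directly. A short sketch with arbitrary data:

```python
# Spot-check of d(x^T A x)/dx: denominator layout gives (A + A^T) x
# (numerator layout is its transpose, x^T (A + A^T)).
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
x = rng.normal(size=4)
f = lambda x: x @ A @ x

h = 1e-6
numeric = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(4)])
print(np.allclose(numeric, (A + A.T) @ x, atol=1e-6))   # denominator layout
```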
Vector-by-scalar identities
{|class="wikitable" style="text-align: center;"
|+ Identities: vector-by-scalar
! scope="col" width="150" | Condition
! scope="col" width="100" | Expression
! scope="col" width="100" | Numerator layout, i.e. by ,result is column vector
! scope="col" width="100" | Denominator layout, i.e. by ,result is row vector
|-
| is not a function of || ||colspan=2|
|-
| is not a function of , ||
| colspan=2|
|-
| is not a function of x, || || ||
|-
| ||
| colspan=2|
|-
| , ||
| colspan=2|
|-
| , ||
||
||
|-
| rowspan=2| || rowspan=2| || ||
|-
|colspan=2|Assumes consistent matrix layout; see below.
|-
| rowspan=2| || rowspan=2| || ||
|-
|colspan=2|Assumes consistent matrix layout; see below.
|-
| , ||
||
||
|}
NOTE: The formulas involving the vector-by-vector derivatives and (whose outputs are matrices) assume the matrices are laid out consistent with the vector layout, i.e. numerator-layout matrix when numerator-layout vector and vice versa; otherwise, transpose the vector-by-vector derivatives.
Scalar-by-matrix identities
Note that exact equivalents of the scalar product rule and chain rule do not exist when applied to matrix-valued functions of matrices. However, the product rule of this sort does apply to the differential form (see below), and this is the way to derive many of the identities below involving the trace function, combined with the fact that the trace function allows transposing and cyclic permutation, i.e.:
$$\operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{A}^{\mathsf{T}}), \qquad \operatorname{tr}(\mathbf{ABC}) = \operatorname{tr}(\mathbf{CAB}) = \operatorname{tr}(\mathbf{BCA}).$$
For example, to compute $\frac{\partial \operatorname{tr}(\mathbf{AXB})}{\partial \mathbf{X}}$:
$$d\operatorname{tr}(\mathbf{AXB}) = \operatorname{tr}(\mathbf{A}\,d\mathbf{X}\,\mathbf{B}) = \operatorname{tr}(\mathbf{BA}\,d\mathbf{X}).$$
Therefore,
$$\frac{\partial \operatorname{tr}(\mathbf{AXB})}{\partial \mathbf{X}} = \mathbf{BA} \quad \text{(numerator layout)}$$
$$\frac{\partial \operatorname{tr}(\mathbf{AXB})}{\partial \mathbf{X}} = \mathbf{A}^{\mathsf{T}}\mathbf{B}^{\mathsf{T}} \quad \text{(denominator layout)}$$
(For the last step, see the Conversion from differential to derivative form section.)
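The worked example above can be confirmed numerically. A minimal sketch with arbitrary shapes:

```python
# Numeric check of d tr(AXB)/dX = BA (numerator layout), equivalently
# (BA)^T = A^T B^T in denominator layout.
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.normal(size=(2, 3)), rng.normal(size=(4, 2))
X = rng.normal(size=(3, 4))
f = lambda X: np.trace(A @ X @ B)

h = 1e-6
G = np.empty_like(X)            # denominator layout: G_ij = df/dX_ij
for i in range(3):
    for j in range(4):
        E = np.zeros_like(X); E[i, j] = h
        G[i, j] = (f(X + E) - f(X - E)) / (2*h)
print(np.allclose(G, (B @ A).T, atol=1e-8))
```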
{|class="wikitable" style="text-align: center;"
|+ Identities: scalar-by-matrix
! scope="col" width="175" | Condition
! scope="col" width="10" | Expression
! scope="col" width="100" | Numerator layout, i.e. by
! scope="col" width="100" | Denominator layout, i.e. by
|-
| is not a function of ||
| ||
|-
| is not a function of , ||
| colspan=2|
|-
| , ||
| colspan=2|
|-
| , ||
| colspan=2|
|-
| ||
| colspan=2|
|-
| ||
| colspan=2|
|-
| rowspan=2| || rowspan=2|
||
||
|-
| colspan=2|Both forms assume numerator layout for
i.e. mixed layout if denominator layout for is being used.
|- style="border-top: 3px solid;"
| and are not functions of ||
|||
|-
| and are not functions of ||
|||
|-
| and are not functions of , is a real-valued differentiable function
|
|
|
|-
| , and are not functions of ||
|||
|-
| , and are not functions of ||
|||
|- style="border-top: 3px solid;"
| || || colspan="2" |
|-
| , || || colspan="2" |
|-
| is not a function of ,|| || colspan="2" |
|-
| is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. , , , , etc. using a Taylor series); is the equivalent scalar function, is its derivative, and is the corresponding matrix function || || ||
|-
| is not a function of || || ||
|-
| is not a function of || || ||
|-
| is not a function of || || ||
|-
| is not a function of || || ||
|-
| , are not functions of || || ||
|-
| , , are not functions of || || ||
|-
| is a positive integer || || ||
|-
| is not a function of , is a positive integer || || ||
|-
| || || ||
|-
| || || ||
|- style="border-top: 3px solid;"
| || || ||
|-
| is not a function of ||
|| ||
|-
| , are not functions of || || ||
|-
| is a positive integer || || ||
|-
| (see pseudo-inverse) || || ||
|-
| (see pseudo-inverse) || || ||
|-
| is not a function of , is square and invertible || || ||
|-
| is not a function of , is non-square, is symmetric || || ||
|-
| is not a function of , is non-square, is non-symmetric ||
|
|
|}
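The log-determinant entry, which appears constantly in multivariate statistics, can be spot-checked. A minimal sketch with an arbitrary well-conditioned matrix:

```python
# Spot-check of d ln|X| / dX = (X^-1)^T in denominator layout.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 3)) + 4*np.eye(3)      # keep the determinant positive
f = lambda X: np.log(np.linalg.det(X))

h = 1e-6
G = np.empty_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X); E[i, j] = h
        G[i, j] = (f(X + E) - f(X - E)) / (2*h)
print(np.allclose(G, np.linalg.inv(X).T, atol=1e-6))
```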
Matrix-by-scalar identities
{|class="wikitable" style="text-align: center;"
|+ Identities: matrix-by-scalar
! scope="col" width="175" | Condition
! scope="col" width="100" | Expression
! scope="col" width="100" | Numerator layout, i.e. by
|-
| || ||
|-
| , are not functions of x, || ||
|-
| , || ||
|-
| , || ||
|-
| , || ||
|-
| , || ||
|-
| || ||
|-
| || ||
|-
| is not a function of , is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. , , , , etc.); is the equivalent scalar function, is its derivative, and is the corresponding matrix function || ||
|-
| is not a function of || ||
|}
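The matrix-inverse entry can be verified on a smooth curve of invertible matrices. A small illustrative sketch; the curve is an arbitrary choice:

```python
# Numeric check of d(U^-1)/dx = -U^-1 (dU/dx) U^-1 along a matrix curve U(x).
import numpy as np

U = lambda x: np.array([[2 + x, x**2], [np.sin(x), 3.0]])
dU = lambda x: np.array([[1.0, 2*x], [np.cos(x), 0.0]])   # dU/dx by hand

x0, h = 0.3, 1e-6
inv = np.linalg.inv
numeric = (inv(U(x0 + h)) - inv(U(x0 - h))) / (2*h)
analytic = -inv(U(x0)) @ dU(x0) @ inv(U(x0))
print(np.allclose(numeric, analytic, atol=1e-6))
```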
Scalar-by-scalar identities
With vectors involved
{|class="wikitable" style="text-align: center;"
|+ Identities: scalar-by-scalar, with vectors involved
! scope="col" width="150" | Condition
! scope="col" width="10" | Expression
! scope="col" width="150" | Any layout (assumes dot product ignores row vs. column layout)
|-
| || ||
|-
| , ||
|
|}
With matrices involved
{|class="wikitable" style="text-align: center;"
|+Identities: scalar-by-scalar, with matrices involved
! scope="col" width="175" | Condition
! scope="col" width="100" | Expression
! scope="col" width="100" | Consistent numerator layout,i.e. by and
! scope="col" width="100" | Mixed layout,i.e. by and
|-
| || || colspan=2|
|-
| || || colspan=2|
|-
| ||
| colspan=2 |
|-
|
|
|
|
|-
| is not a function of , is any polynomial with scalar coefficients, or any matrix function defined by an infinite polynomial series (e.g. , , , , etc.); is the equivalent scalar function, is its derivative, and is the corresponding matrix function. || || colspan=2|
|-
| is not a function of || || colspan=2|
|}
Identities in differential form
It is often easier to work in differential form and then convert back to normal derivatives. This only works well using the numerator layout. In these rules, $a$ is a scalar.
{|class="wikitable" style="text-align: center;"
|+ Differential identities: scalar involving matrix
! Expression !! Result (numerator layout)
|-
| ||
|-
| ||
|-
| ||
|}
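The determinant differential can be checked to first order. A short illustrative sketch, with arbitrary data and tolerances:

```python
# First-order check of d|X| = |X| tr(X^-1 dX): perturb X by a small dX and
# compare the actual change in the determinant with the prediction.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(3, 3)) + 4*np.eye(3)
dX = 1e-6 * rng.normal(size=(3, 3))

det = np.linalg.det
lhs = det(X + dX) - det(X)                       # actual change
rhs = det(X) * np.trace(np.linalg.inv(X) @ dX)   # predicted by the identity
print(np.allclose(lhs, rhs, rtol=1e-4, atol=1e-9))
```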
{|class="wikitable" style="text-align: center;"
|+ Differential identities: matrix
! Condition !! Expression !! Result (numerator layout)
|-
|A is not a function of || ||
|-
|a is not a function of || ||
|-
| || ||
|-
| || ||
|-
| (Kronecker product) || ||
|-
| (Hadamard product) || ||
|-
| || ||
|-
|
|
|
|-
| (conjugate transpose) || ||
|-
| is a positive integer || ||
|-
|
|
|
|-
|
|
|
|-
| is diagonalizable
is differentiable at every eigenvalue
|
|
|}
In the last row, $\delta_{ij}$ is the Kronecker delta and $\{\mathbf{P}_i\}$ is the set of orthogonal projection operators that project onto the $i$-th eigenvector of $\mathbf{X}$.
$\mathbf{Q}$ is the matrix of eigenvectors of $\mathbf{X} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{-1}$, and $\lambda_i$ are the eigenvalues.
The matrix function $f(\mathbf{X})$ is defined in terms of the scalar function $f$ for diagonalizable matrices by $f(\mathbf{X}) = \mathbf{Q}f(\boldsymbol{\Lambda})\mathbf{Q}^{-1}$, where $f(\boldsymbol{\Lambda}) = \operatorname{diag}(f(\lambda_1), \ldots, f(\lambda_n))$.
To convert to normal derivative form, first convert it to one of the following canonical forms, and then use these identities:
{|class="wikitable" style="text-align: center;"
|+ Conversion from differential to derivative form
! Canonical differential form !! Equivalent derivative form (numerator layout)
|-
| ||
|-
| ||
|-
| ||
|-
| ||
|-
| ||
|-
| ||
|}
Applications
Matrix differential calculus is used in statistics and econometrics, particularly for the statistical analysis of multivariate distributions, especially the multivariate normal distribution and other elliptical distributions.
It is used in regression analysis to compute, for example, the ordinary least squares regression formula for the case of multiple explanatory variables.
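The ordinary least squares case can be sketched directly: setting the matrix-calculus gradient of the cost to zero, $\frac{\partial}{\partial \boldsymbol{\beta}}\lVert \mathbf{y} - \mathbf{X}\boldsymbol{\beta}\rVert^2 = -2\mathbf{X}^{\mathsf{T}}(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{0}$, yields the normal equations $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$. The code below is an illustrative sketch on synthetic data; all names and values are arbitrary.

```python
# OLS via the normal equations derived with matrix calculus, compared with
# NumPy's least-squares solver on synthetic regression data.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # normal equations
beta_ref = np.linalg.lstsq(X, y, rcond=None)[0]  # reference solver
print(np.allclose(beta_hat, beta_ref))
```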
It is also used in random matrices, statistical moments, local sensitivity and statistical diagnostics.
See also
Derivative (generalizations)
Product integral
Ricci calculus
Tensor derivative
Notes
References
External links
Software
MatrixCalculus.org, a website for evaluating matrix calculus expressions symbolically
NCAlgebra, an open-source Mathematica package that has some matrix calculus functionality
SymPy supports symbolic matrix derivatives in its matrix expression module, as well as symbolic tensor derivatives in its array expression module.
Information
Matrix Reference Manual, Mike Brookes, Imperial College London.
Matrix Differentiation (and some other stuff), Randal J. Barnes, Department of Civil Engineering, University of Minnesota.
Notes on Matrix Calculus, Paul L. Fackler, North Carolina State University.
Matrix Differential Calculus (slide presentation), Zhang Le, University of Edinburgh.
Introduction to Vector and Matrix Differentiation (notes on matrix differentiation, in the context of Econometrics), Heino Bohn Nielsen.
A note on differentiating matrices (notes on matrix differentiation), Pawel Koval, from Munich Personal RePEc Archive.
Vector/Matrix Calculus More notes on matrix differentiation.
Matrix Identities (notes on matrix differentiation), Sam Roweis.
Matrix theory
Linear algebra
Multivariable calculus | Matrix calculus | [
"Mathematics"
] | 7,703 | [
"Linear algebra",
"Multivariable calculus",
"Algebra",
"Calculus"
] |
1,766,035 | https://en.wikipedia.org/wiki/Journal%20of%20Fluid%20Mechanics | The Journal of Fluid Mechanics is a peer-reviewed scientific journal in the field of fluid mechanics. It publishes original work on theoretical, computational, and experimental aspects of the subject.
The journal is published by Cambridge University Press and retains a strong association with the University of Cambridge, in particular the Department of Applied Mathematics and Theoretical Physics (DAMTP). Until January 2020, volumes were published twice a month in a single-column B5 format, but the publication is now online-only with the same frequency.
The journal was established in 1956 by George Batchelor, who remained the editor-in-chief for some forty years. He started out as the sole editor, but later a team of associate editors provided assistance in arranging the review of articles. Detlef Lohse is the author with the most papers in the journal, having appeared 169 times.
Editors
The following people have been editor (later, editor in chief) of the Journal of Fluid Mechanics:
1956–1996: George Batchelor (DAMTP)
1966–1983: Keith Moffatt (DAMTP)
1996–2000: David Crighton (DAMTP)
2000–2006: Tim Pedley (DAMTP)
2000–2010: Stephen H. Davis (Northwestern University)
2007–2022: Grae Worster (DAMTP)
2022–present: Colm-cille P. Caulfield (DAMTP)
See also
List of fluid mechanics journals
References
Further reading
Huppert, H. E. (2006). 50 Years of Impact of JFM.
External links
Engineering education in the United Kingdom
Fluid dynamics journals
Academic journals established in 1956
Cambridge University Press academic journals
Biweekly journals
English-language journals
Cambridge University academic journals | Journal of Fluid Mechanics | [
"Chemistry"
] | 348 | [
"Fluid dynamics journals",
"Fluid dynamics"
] |
1,766,518 | https://en.wikipedia.org/wiki/Linkage%20isomerism | In chemistry, linkage isomerism or ambidentate isomerism is a form of isomerism in which certain coordination compounds have the same composition but differ in their metal atom's connectivity to a ligand.
Typical ligands that give rise to linkage isomers are:
cyanide, CN− – isocyanide, NC−
cyanate, OCN− – isocyanate, NCO−
thiocyanate, SCN− – isothiocyanate, NCS−
selenocyanate, SeCN− – isoselenocyanate, NCSe−
nitrite, NO2− (N-bonded nitro or O-bonded nitrito)
sulfite, SO32− (S-bonded or O-bonded)
Examples of linkage isomers are violet-colored [(NH3)5Co(SCN)]2+ and orange-colored [(NH3)5Co(NCS)]2+. The isomerization of the S-bonded isomer to the N-bonded isomer occurs intramolecularly.
The complex cis-dichlorotetrakis(dimethylsulfoxide)ruthenium(II) (cis-RuCl2(dmso)4) exhibits linkage isomerism of the dimethyl sulfoxide ligands due to S- vs. O-bonding. Trans-dichlorotetrakis(dimethylsulfoxide)ruthenium(II) does not exhibit linkage isomers.
History
Linkage isomerism was first noted for nitropentaamminecobalt(III) chloride, [Co(NH3)5(NO2)]Cl2. This cationic cobalt complex can be isolated as either of two linkage isomers. In the yellow-coloured isomer, the nitro ligand is bound through nitrogen. In the red linkage isomer, the nitrite ligand is bound through one oxygen atom. The O-bonded isomer is often written as [Co(NH3)5(ONO)]Cl2. Although the existence of the isomers had been known since the late 1800s, only in 1907 was the difference explained. It was later shown that the red isomer converted to the yellow isomer upon UV-irradiation. In this particular example, the formation of the nitro isomer ([Co(NH3)5(NO2)]2+) from the nitrito isomer ([Co(NH3)5(ONO)]2+) occurs by an intramolecular rearrangement.
References
Coordination chemistry
Chemical bonding
Isomerism | Linkage isomerism | [
"Physics",
"Chemistry",
"Materials_science"
] | 412 | [
"Coordination chemistry",
"Stereochemistry",
"Condensed matter physics",
"nan",
"Isomerism",
"Chemical bonding"
] |
1,766,681 | https://en.wikipedia.org/wiki/Q-ball | In theoretical physics, Q-ball is a type of non-topological soliton. A soliton is a localized field configuration that is stable—it cannot spread out and dissipate. In the case of a non-topological soliton, the stability is guaranteed by a conserved charge: the soliton has lower energy per unit charge than any other configuration. (In physics, charge is often represented by the letter "Q", and the soliton is spherically symmetric, hence the name.)
Intuitive explanation
A Q-ball arises in a theory of bosonic particles when there is an attraction between the particles. Loosely speaking, the Q-ball is a finite-sized "blob" containing a large number of particles. The blob is stable against fission into smaller blobs and against "evaporation" via emission of individual particles, because, due to the attractive interaction, the blob is the lowest-energy configuration of that number of particles. (This is analogous to the fact that nickel-62 is the most stable nucleus because it is the most stable configuration of neutrons and protons. However, nickel-62 is not a Q-ball, in part because neutrons and protons are fermions, not bosons.)
For there to be a Q-ball, the number of particles must be conserved (i.e. the particle number is a conserved "charge", so the particles are described by a complex-valued field $\varphi$), and the interaction potential of the particles must have a negative (attractive) term. For non-interacting particles, the potential would be just a mass term $U = \tfrac{1}{2}m^2|\varphi|^2$, and there would be no Q-ball. But if one adds an attractive term such as $-|\varphi|^4$ (and positive higher powers of $|\varphi|$ to ensure that the potential has a lower bound), then there are values of $\varphi$ where $U(\varphi) < \tfrac{1}{2}m^2|\varphi|^2$, i.e. the energy of these field values is less than the energy of a free field. This corresponds to saying that one can create blobs of non-zero field (i.e. clusters of many particles) whose energy is lower than the same number of individual particles far apart. Those blobs are therefore stable against evaporation into individual particles.
Construction
In its simplest form, a Q-ball is constructed in a field theory of a complex scalar field $\varphi$, in which the Lagrangian is invariant under a global $U(1)$ symmetry. The Q-ball solution is a state that minimizes energy while keeping the charge $Q$ associated with the global symmetry constant. A particularly transparent way of finding this solution is via the method of Lagrange multipliers. In particular, in three spatial dimensions we must minimize the functional
$$E_\omega = E + \omega\left[Q - \frac{1}{2i}\int \left(\varphi^{*}\,\partial_t\varphi - \varphi\,\partial_t\varphi^{*}\right)d^3x\right],$$
where the energy is defined as
$$E = \int \left(\frac{1}{2}\left|\partial_t\varphi\right|^2 + \frac{1}{2}\left|\nabla\varphi\right|^2 + U(\varphi)\right)d^3x$$
and $\omega$ is our Lagrange multiplier. The time dependence of the Q-ball solution can be obtained easily if one rewrites the functional as
$$E_\omega = \int \frac{1}{2}\left|\partial_t\varphi - i\omega\varphi\right|^2 d^3x + \int \left(\frac{1}{2}\left|\nabla\varphi\right|^2 + \hat{U}_\omega(\varphi)\right)d^3x + \omega Q,$$
where $\hat{U}_\omega(\varphi) = U(\varphi) - \tfrac{1}{2}\omega^2|\varphi|^2$. Since the first term in the functional is now positive, minimization of this term implies
$$\varphi(\vec{x}, t) = \varphi(\vec{x})\,e^{i\omega t}.$$
We therefore interpret the Lagrange multiplier $\omega$ as the frequency of oscillation of the field within the Q-ball.
The theory contains Q-ball solutions if there are any values of $\varphi$ at which the potential is less than $\tfrac{1}{2}m^2|\varphi|^2$. In this case, a volume of space with the field at that value can have an energy per unit charge that is less than $m$, meaning that it cannot decay into a gas of individual particles. Such a region is a Q-ball. If it is large enough, its interior is uniform and is called "Q-matter". (For a review see Lee et al. 1992.)
Thin-wall Q-balls
The thin-wall Q-ball was the first to be studied, and this pioneering work was carried out by Sidney Coleman in 1986. For this reason, Q-balls of the thin-wall variety are sometimes called "Coleman Q-balls".
We can think of this type of Q-ball as a spherical ball of nonzero vacuum expectation value. In the thin-wall approximation we take the spatial profile of the field to be simply a step function,
$$\varphi(\vec{x}) = \begin{cases} \varphi_0, & r \le R \\ 0, & r > R, \end{cases}$$
so that the field is constant inside a ball of radius $R$ (volume $V$) and vanishes outside.
In this regime the charge carried by the Q-ball is simply $Q = \omega\varphi_0^2 V$. Using this fact, we can eliminate $\omega$ from the energy, such that we have
$$E = \frac{Q^2}{2\varphi_0^2 V} + U(\varphi_0)\,V.$$
Minimization with respect to $V$ gives
$$V = \frac{Q}{\varphi_0\sqrt{2U(\varphi_0)}}.$$
Plugging this back into the energy yields
$$E = Q\sqrt{\frac{2U(\varphi_0)}{\varphi_0^2}}.$$
Now all that remains is to minimize the energy with respect to $\varphi_0$. We can therefore state that a Q-ball solution of the thin-wall type exists if and only if
$$\frac{2U(\varphi_0)}{\varphi_0^2} < m^2$$
for some $\varphi_0 > 0$.
When the above criterion is satisfied the Q-ball exists and by construction is stable against decays into scalar quanta. The mass of the thin-wall Q-ball is simply the energy
$$M = E = Q\sqrt{\frac{2U(\varphi_0)}{\varphi_0^2}},$$
evaluated at the minimizing value of $\varphi_0$.
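As a numerical sanity check of the minimization above (a sketch under the stated thin-wall assumptions, not part of the original presentation; the parameter values are arbitrary), one can minimize $E(V) = Q^2/(2\varphi_0^2 V) + U(\varphi_0)V$ directly and compare with the analytic result:

```python
# Thin-wall Q-ball energy E(V) = Q^2/(2 phi0^2 V) + U0 V is minimized at
# V* = Q / (phi0 sqrt(2 U0)), where the energy is E* = Q sqrt(2 U0) / phi0.
import numpy as np
from scipy.optimize import minimize_scalar

Q, phi0, U0 = 100.0, 2.0, 0.5        # charge, field value, potential U(phi0)
E = lambda V: Q**2 / (2 * phi0**2 * V) + U0 * V

res = minimize_scalar(E, bounds=(1e-3, 1e4), method="bounded")
V_star = Q / (phi0 * np.sqrt(2 * U0))
E_star = Q * np.sqrt(2 * U0) / phi0
print(np.isclose(res.x, V_star, rtol=1e-4), np.isclose(res.fun, E_star))
```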
Although this kind of Q-ball is stable against decay into scalars, it is not stable against decay into fermions if the scalar field has nonzero Yukawa couplings to some fermions. This decay rate was calculated in 1986 by Andrew Cohen, Sidney Coleman, Howard Georgi, and Aneesh Manohar.
History
Configurations of a charged scalar field that are classically stable (stable against small perturbations) were constructed by Rosen in 1968. Stable configurations of multiple scalar fields were studied by Friedberg, Lee, and Sirlin in 1976. The name "Q-ball" and the proof of quantum-mechanical stability (stability against tunnelling to lower-energy configurations) come from Sidney Coleman.
Occurrence in nature
It has been theorized that dark matter might consist of Q-balls (Frieman et al. 1988, Kusenko et al. 1997) and that Q-balls might play a role in baryogenesis, i.e. the origin of the matter that fills the universe (Dodelson et al. 1990, Enqvist et al. 1997). Interest in Q-balls was stimulated by the suggestion that they arise generically in supersymmetric field theories (Kusenko 1997), so if nature really is fundamentally supersymmetric, then Q-balls might have been created in the early universe and still exist in the cosmos today.
It has been hypothesised that the early universe had many energy lumps that consisted of Q-balls. When these eventually interacted with each other they "popped", i.e., dispersed, creating more matter particles than antimatter particles and explaining why matter predominates in the visible universe. It should be possible to verify this by detecting gravitational waves propagated by the "popping" of the Q-balls.
Fiction
In the movie Sunshine, the Sun is undergoing a premature death. The movie's science adviser, scientist Brian Cox, proposed "infection" with a Q-ball as the mechanism for this death, but this is mentioned only in the commentary tracks and not in the movie itself.
In the fictional universe of Orion's Arm, Q-balls are one of the speculated sources for the large amounts of antimatter used by certain groups.
In the TV series Sliders, Q-Ball is the nickname given by Rembrandt Brown (Crying Man) to Quinn Mallory.
References
External links
Cosmic anarchists , by Hazel Muir. A popular account of the proposal of Alexander Kusenko.
Hypothetical particles
Quantum field theory
Solitons | Q-ball | [
"Physics"
] | 1,471 | [
"Hypothetical particles",
"Matter",
"Quantum field theory",
"Unsolved problems in physics",
"Quantum mechanics",
"Physics beyond the Standard Model",
"Subatomic particles"
] |
1,766,982 | https://en.wikipedia.org/wiki/Novichok | Novichok () is a family of nerve agents, some of which are binary chemical weapons. The agents were developed at the GosNIIOKhT state chemical research institute by the Soviet Union and Russia between 1971 and 1993. Some Novichok agents are solids at standard temperature and pressure, while others are liquids. Dispersal of solid form agents is thought possible if in ultrafine powder state.
Russian scientists who developed the nerve agents claim they are the deadliest ever made, with some variants possibly five to eight times more potent than VX, and others up to ten times more potent than soman. Iran has also been associated with the production of such chemical agents.
In the twenty-first century, Novichok agents came to public attention after they were used to poison opponents of the Russian government, including the Skripals and two others in Amesbury, UK (2018), as well as Alexei Navalny (2020), but Russian civil poisonings with this substance have been known since at least 1995.
In November 2019, the Organisation for the Prohibition of Chemical Weapons (OPCW), which is the executive body for the Chemical Weapons Convention (CWC), added the Novichok agents to "list of controlled substances" of the CWC "in one of the first major changes to the treaty since it was agreed in the 1990s" in response to the 2018 poisonings in the UK.
Design objectives
Novichok agents were designed to achieve four objectives:
to be undetectable using standard 1970s and 1980s NATO chemical detection equipment;
to defeat NATO chemical protective gear;
to be safer to handle; and
to circumvent the Chemical Weapons Convention list of controlled precursors, classes of chemical and physical form.
Some of these agents are binary weapons, in which precursors for the nerve agents are mixed in a munition to produce the agent just prior to its use. The precursors are generally significantly less hazardous than the agents themselves, so this technique makes handling and transporting the munitions a great deal simpler. Additionally, precursors to the agents are usually much easier to stabilise than the agents themselves, so this technique also makes it possible to increase the shelf life of the agents. This has the disadvantage that careless preparation may produce a non-optimal agent. During the 1980s and 1990s, binary versions of several Soviet agents were developed and are designated as "Novichok" agents.
History and disclosure
Novichok agents were designed as part of a Soviet program codenamed Foliant. Five Novichok variants are believed to have been adapted for military use. The most versatile is A-232 (Novichok-5). Novichok agents have never been used on the battlefield. The UK government determined that a Novichok agent was used in the poisoning of Sergei and Yulia Skripal in Salisbury, Wiltshire, England in March 2018. It was unanimously confirmed by four laboratories around the world, according to the OPCW.
Novichok was also involved in the poisoning of a British couple in Amesbury, Wiltshire, four months later, believed to have been caused by residual nerve agent discarded after the Salisbury attack. The attacks led to the death of one person, left three others in a critical condition from which they recovered, and briefly hospitalised a police officer. The Russian government denies producing or researching agents "under the title Novichok". In September 2020, the German government said that opposition figure and anti-corruption activist Alexei Navalny, who was evacuated from Omsk to Berlin for treatment in late August after becoming ill during his flight, was poisoned by a Novichok agent.
Novichok has been known to most Western intelligence services since the 1990s, and in 2016 Iranian chemists working at a university in Tehran synthesised five of the seven Novichok agents for analysis and produced detailed mass spectrometry data which was added to the OPCW's Central Analytical Database. Previously, there had been no detailed descriptions of their spectral properties in peer-reviewed general scientific literature. A small amount of agent A-230 was also claimed to have been synthesised in the Czech Republic in 2017 for the purpose of obtaining analytical data to help defend against these novel toxic compounds.
The Soviet Union and Russia reportedly developed extremely potent fourth-generation chemical weapons from the 1970s until the early 1990s, according to a publication by two chemists, Lev Fyodorov and Vil Mirzayanov, in Moskovskiye Novosti weekly in 1992. The publication appeared just on the eve of Russia's signing of the Chemical Weapons Convention. According to Mirzayanov, the Russian Military Chemical Complex (MCC) was using defence conversion money received from the West for development of a chemical warfare facility. Mirzayanov made his disclosure out of environmental concerns. He was the head of a counter-intelligence department and performed measurements outside the chemical weapons facilities to make sure that foreign spies could not detect any traces of production. To his horror, the levels of deadly substances were eighty times greater than the maximum safe concentration.
The Prosecutor-General of Russia effectively admitted the existence of Novichok agents when he brought a treason case against Mirzayanov. According to expert witness testimonies that three scientists prepared for the KGB, Novichok and other related chemical agents had indeed been produced and therefore Mirzayanov's disclosure represented high treason.
Mirzayanov was arrested on 22 October 1992 and sent to Lefortovo prison for divulging state secrets. He was released later because "not one of the formulas or names of poisonous substances in the Moscow News article was new to the Soviet press, nor were locations ... of testing sites revealed." According to Yevgenia Albats, "the real state secret revealed by Fyodorov and Mirzayanov was that generals had lied—and were still lying—to both the international community and their fellow citizens." Mirzayanov now lives in the U.S.
Further disclosures followed when Vladimir Uglev, one of Russia's leading binary weapons scientists, revealed the existence of A-232/Novichok-5 in an interview with the magazine Novoye Vremya in early 1994. In his 1998 interview with David E. Hoffman for The Washington Post the chemist claimed that he helped invent the A-232 agent, that it was more frostproof, and confirmed that a binary version has been developed from it. Uglev revealed more details in 2018, following the poisoning of the Skripals, stating that "several hundred" compounds were synthesised during the Foliant research but only four agents were weaponised (presumably the Novichok-5, −7, −8 and −9 mentioned by other sources): the first three were liquids and only the last, which was not developed until 1980, could be made into a powder. Unlike the interview twenty years earlier, he denied any binary agents were developed successfully, at least up until his involvement in the research ceased in 1994.
In the 1990s, the German Federal Intelligence Service (BND) obtained a sample of one Novichok agent from a Russian scientist, and the sample was analysed in Sweden, according to a 2018 Reuters report. The chemical formula was given to Western NATO countries, who synthesized it, then used small amounts to test protective equipment, detection of it, and antidotes to it.
Novichok was referred to in a patent filed in 2008 for an organophosphorus poisoning treatment. The University of Maryland, Baltimore research was funded in part by the U.S. Army.
Professor Leonid Rink, who said he had participated in the creation of Novichok agents, confirmed that the structures leaked by Mirzayanov were the correct ones. Rink was himself convicted in Russia for illegally selling a Novichok agent used in 1995 to assassinate a banker, Ivan Kivelidi, and his secretary.
David Wise, in his book Cassidy's Run, implies that the Soviet program may have been the unintended result of misleading information, involving a discontinued American program to develop a nerve agent code named "GJ", that was fed by a double agent to the Soviets as part of Operation Shocker.
Development and test sites
Stephanie Fitzpatrick, an American geopolitical consultant, has claimed that the Chemical Research Institute in Nukus, Soviet Uzbekistan, produced Novichok agents, and The New York Times has reported that U.S. officials said the site was the major research and testing site for Novichok agents. Small, experimental batches of the weapons may have been tested on the nearby Ustyurt Plateau. Fitzpatrick also writes that the agents may have been tested in a research centre in Krasnoarmeysk near Moscow. Precursor chemicals were made at the Pavlodar Chemical Plant in Soviet Kazakhstan, which was also thought to be the intended Novichok weapons production site, until its still-under-construction chemical warfare agent production building was demolished in 1987 in view of the forthcoming 1990 Chemical Weapons Accord and the Chemical Weapons Convention.
Since its independence in 1991, Uzbekistan has been working with the government of the United States to dismantle and decontaminate the sites where the Novichok agents and other chemical weapons were tested and developed. Between 1999 and 2002 the United States Department of Defense dismantled the major research and testing site for Novichok at the Chemical Research Institute in Nukus, under a $6 million Cooperative Threat Reduction programe.
Hamish de Bretton-Gordon, a British chemical weapons expert and former commanding officer of the UK's Joint Chemical, Biological, Radiation and Nuclear Regiment and its NATO equivalent, "dismissed" suggestions that Novichok agents could be found in other places in the former Soviet Union such as Uzbekistan and has asserted that Novichok agents were produced only at Shikhany in Saratov Oblast, Russia. Mirzayanov also says that it was at Shikhany, in 1973, that scientist Pyotr Petrovich Kirpichev first produced Novichok agents; Vladimir Uglev joined him on the project in 1975. According to Mirzayanov, while production took place in Shikhany, the weapon was tested at Nukus between 1986 and 1989.
Following the poisoning of the Skripals, former head of the GosNIIOKhT security department Nikolay Volodin confirmed in an interview to Novaya Gazeta that there have been tests at Nukus, and said that dogs were used.
In May 2018, the Irish Independent reported that "Germany's foreign intelligence service secured a sample of the Soviet-developed nerve agent Novichok in the 1990s and passed on its knowledge to partners including Britain and the US, according to German media reports." The sample was analysed in Sweden. Small amounts of the Novichok nerve agent were subsequently produced in some NATO countries for test purposes.
Description of Novichok agents
Mirzayanov provided the first description of these agents. Dispersed in an ultra-fine powder instead of a gas or a vapour, they have unique qualities. A binary agent was then created that would mimic the same properties but would either be manufactured using materials which are not controlled substances under the CWC, or be undetectable by treaty regime inspections. The most potent compounds from this family, Novichok-5 and Novichok-7, are supposedly around five to eight times more potent than VX. The "Novichok" designation refers to the binary form of the agent, with the final compound being referred to by its code number (e.g. A-232). The first Novichok series compound was in fact the binary form of a known V-series nerve agent, VR, while the later Novichok agents are the binary forms of compounds such as A-232 and A-234.
According to a classified (secret) report by the US Army National Ground Intelligence Center in Military Intelligence Digest dated 24 January 1997, agent designated A-232 and its ethyl analogue A-234 developed under the Foliant programme "are as toxic as VX, as resistant to treatment as soman, and more difficult to detect and easier to manufacture than VX". The binary versions of the agents reportedly use acetonitrile and an organic phosphate "that can be disguised as a pesticide precursor."
The agent A-234 is also supposedly around five to eight times more potent than VX.
The median lethal dose for inhaled A-234 has been estimated as 7 mg/m3 for a two-minute exposure (minute volume of 15 L, slight activity). The median lethal dose for inhaled A-230, likely the most toxic liquid Novichok, has been estimated as between 1.9 and 3 mg/m3 for a two-minute exposure. Thus the median lethal dose for inhaled A-234 is 0.2 mg (5,000 lethal doses in a gram) and is below 0.1 mg for A-230 (10,000 lethal doses in a gram).
The agents are reportedly capable of being delivered as a liquid, aerosol or gas via a variety of systems, including artillery shells, bombs, missiles and spraying devices.
Controversy over formulation
Mirzayanov gives somewhat different structures for Novichok agents in his autobiography than those which have been identified by Western experts. The Western formulations suffered from imperfect information, as can be seen in Fig. 1 of Chai et al in which Mirzayanov describes a family of compounds whereas Western scientists instantiate a particular salt.
Mirzyanov makes clear that a large number of compounds were made, and many of the less potent derivatives were reported in the open literature as new organophosphate insecticides, so that the secret chemical weapons program could be disguised as legitimate pesticide research.
Chemistry
According to chemical weapons expert Jonathan Tucker, the first binary formulation developed under the Foliant programme was used to make Substance 33 (VR), very similar to the more widely known VX, differing only in the alkyl substituents on its nitrogen and oxygen atoms. "This weapon was given the code name Novichok."
A wide range of potential structures have been reported. These all feature the classical organophosphorus core (sometimes with the P=O replaced with P=S or P=Se), which is most commonly depicted as being a phosphoramidate or phosphonate, usually fluorinated (cf. monofluorophosphate). The organic groups are subject to more variety; however, a common substituent is phosgene oxime or analogues thereof. This is a potent chemical weapon in its own right, specifically as a nettle agent, and would be expected to increase the harm done by the Novichok agent. Many claimed structures from this group also contain cross-linking agent motifs which may covalently bind to the acetylcholinesterase enzyme's active site in several places, perhaps explaining the rapid denaturing of the enzyme that is claimed to be characteristic of the Novichok agents.
Zoran Radić, a chemist at the University of California, San Diego, performed an in silico docking study with Mirzayanov's version of the A-232 structure against the active site of the acetylcholinesterase enzyme. The model predicted a tight fit with high binding affinity and formation of a covalent bond to a serine residue in the active site, with a similar binding mode to established nerve agents such as sarin and soman.
Detection
A procedure for retrospective detection of Novichok-type poisons in a victim's tissues was proposed in 2021–2022. This method is a modification of the procedure that was developed earlier for identification of sarin poisoning. The method capitalizes on the fact that poisoning by organic phosphonates occurs via phosphonylation of the hydroxy group of serine in the active site of cholinesterases, and that severe poisoning occurs when a major part of these enzymes is inactivated. The concentration of butyrylcholinesterase (HuBuChE) in human plasma is normally about 80 nM. That makes it a good source of adducts that can be subjected to analysis.
The procedure consists of three steps (see Figure A). First, HuBuChE is obtained from the victim's plasma. Second, the enzyme is subjected to pepsin proteolysis. Third, the resulting peptide mixture is subjected to LC-MS/MS analysis. If no poisoning took place, the peptide mixture contains a non-modified nonapeptide FGESAGAAS. However, if cholinesterases have been inactivated by a chemical reaction with a Novichok-type nerve agent, the modified nonapeptide will be detected, and its exact (high-resolution) mass (along with the mass of the secondary ion produced during collision-induced dissociation) allows unambiguous identification of the fact of poisoning and of the exact structure of the poison. Thus, the example in Figure A shows the masses of the primary and secondary ions obtained from the plasma of a victim poisoned by A-230. If a victim is poisoned by other Novichok-type agents, the masses are different.
This method allows identification of poisons at a few parts per billion, but that may be insufficient for reliable detection of the isotopic signature of the adducts, and therefore for an unambiguous identification of the geographic origin of the poison.
Lifetime
According to Vladimir Uglev, who headed a group that worked on the development of the Novichok agents, at least one liquid form of Novichok is very stable with a slow evaporation rate and can remain potent for possibly up to 50 years. Insufficient research has been conducted to fully understand its persistence in various situations in the environment.
Effects and countermeasures
As nerve agents, the Novichok agents belong to the class of organophosphate acetylcholinesterase inhibitors. These chemical compounds inhibit the enzyme acetylcholinesterase, preventing the normal breakdown of the neurotransmitter acetylcholine. Acetylcholine concentrations then increase at neuromuscular junctions to cause involuntary contraction of all skeletal muscles (cholinergic crisis). This then leads to respiratory and cardiac arrest (as the victim's heart and diaphragm muscles no longer function normally) and finally death from heart failure or suffocation as copious fluid secretions fill the victim's lungs.
As can be seen with other organophosphate poisonings, Novichok agents may cause lasting nerve damage, resulting in permanent disablement of victims, according to Russian scientists. Their effect on humans was demonstrated by the accidental exposure of Andrei Zheleznyakov, one of the scientists involved in their development, to the residue of an unspecified Novichok agent while working in a Moscow laboratory in May 1987. He was critically injured and took ten days to recover consciousness after the incident. He lost the ability to walk and was treated at a secret clinic in Leningrad for three months afterwards. The agent caused permanent harm, with effects that included "chronic weakness in his arms, a toxic hepatitis that gave rise to cirrhosis of the liver, epilepsy, spells of severe depression, and an inability to read or concentrate that left him totally disabled and unable to work." He never recovered and, after five years of deteriorating health, died in July 1992.
The use of a fast-acting peripheral anticholinergic drug such as atropine can block the receptors where acetylcholine acts to prevent poisoning (as in the treatment for poisoning by other acetylcholinesterase inhibitors). Atropine, however, is difficult to administer safely, because its effective dose for nerve agent poisoning is close to the dose at which patients suffer severe side effects, such as changes in heart rate and thickening of the bronchial secretions, which fill the lungs of someone suffering nerve agent poisoning so that suctioning of these secretions, and other advanced life support techniques, may be necessary in addition to administration of atropine to treat nerve agent poisoning.
In the treatment of nerve agent poisoning, atropine is most often administered along with a Hagedorn oxime such as pralidoxime, obidoxime, TMB-4, or HI-6, which reactivates acetylcholinesterase which has been inactivated by phosphorylation by an organophosphorus nerve agent and relieves the respiratory muscle paralysis caused by some nerve agents. Pralidoxime is not effective in reactivating acetylcholinesterase inhibited by some older nerve agents such as soman or the Novichok nerve agents, described in the literature as being up to eight times more toxic than the nerve agent VX.
The US Army has funded studies of the use of galantamine along with atropine in the treatment of a number of nerve agents, including soman and the Novichok agents. An unexpected synergistic interaction was seen to occur between galantamine (given between five hours before to thirty minutes after exposure) and atropine in an amount of 6 mg/kg or higher. Increasing the dose of galantamine from 5 to 8 mg/kg decreased the dose of atropine needed to protect experimental animals from the toxicity of soman in dosages 1.5 times the LD50 (lethal dose in half the animals studied).
There have been differing claims about the persistence of Novichok and binary precursors in the environment. One view is that it is not affected by normal weather conditions, and may not decompose as quickly as other organophosphates. However, Mirzayanov states that Novichok decomposes within four months.
Instances of usage
Poisoning of Ivan Kivelidi and Zara Ismailova
A Novichok agent was used in 1995 to poison Russian banker Ivan Kivelidi, who died three days later in a hospital at the age of 46. The poison was believed to have been applied to Kivelidi's office phone in Moscow. His secretary, Zara Ismailova, developed symptoms one month later and died the following day in a hospital, at the age of 35.
Kivelidi was the head of the Russian Business Round Table, and had close ties to Viktor Chernomyrdin, who was at that time Prime Minister of Russia. Russian opposition–linked historians Yuri Felshtinsky and Vladimir Pribylovsky speculated that the murder became "one of the first in the series of poisonings organised by Russia's security services".
The Russian Ministry of Internal Affairs analysed the substance and announced that it was "a phosphorus-based military-grade nerve agent" whose formula "was strictly classified". Nesterov, the administrative head of Shikhany, said he did not know of "a single case of such poison being sold illegally" and noted that the poison "is used by professional spies".
Vladimir Khutsishvili, a former business partner of Kivelidi's, was subsequently convicted of the killings. According to The Independent, "A closed trial found that his business partner had obtained the substance via intermediaries from an employee of the State Research Institute of Organic Chemistry and Technology (ГосНИИОХТ / GosNIIOKhT), which was involved in the development of Novichok agents." However, Khutsishvili, who claimed that he was innocent, had not been detained at the time of the trial and freely left the country. He was only arrested in 2006 after he returned to Russia, believing that the ten-year-old case was closed. Felshtinsky and Pribylovsky claimed that Russia's security services, which had access to the chemical agent, had framed Khutsishvili for the murder, and that the security services had organised it on the orders of a senior Russian state official. Boris Kuznetsov, who represented Khutsishvili and believed in his innocence, blamed "rogue intelligence officers".
Leonid Rink, an employee of GosNIIOKhT, received a one-year suspended sentence for selling Novichok agents to unnamed buyers "of Chechen ethnicity" soon after the poisoning of Kivelidi and Ismailova.
Poisoning of Sergei and Yulia Skripal
On 12 March 2018, the UK government said that a Novichok agent had been used in an attack in the English city of Salisbury on 4 March 2018 in an attempt to kill former GRU officer Sergei Skripal and his daughter Yulia. British Prime Minister Theresa May said in Parliament: "Either this was a direct action by the Russian state against our country, or the Russian government lost control of its potentially catastrophically damaging nerve agent and allowed it to get into the hands of others." On 13 March the BBC asked Vladimir Putin if Russia was "behind the poisoning of" Skripal; he answered, "Get to the bottom of it first, then we can discuss it", while his spokesman dismissed the British reaction as "a circus show in the British parliament". Boris Johnson, the Foreign Secretary, refused to shake hands with Russian ambassador Alexander Yakovenko as he expressed "outrage" over the attack. The next day, the UK expelled 23 Russian diplomats after the Russian government refused to meet the UK's deadline of midnight on 13 March 2018 to give an explanation for the use of the substance. Addressing the United Nations Security Council on 15 March, Vassily Nebenzia, the Russian envoy to the UN, responded to the British allegations by denying that Russia had ever produced or researched the agents, stating: "No scientific research or development under the title Novichok were carried out."
After the attack, 21 members of the emergency services and public were checked for possible exposure, and three were hospitalised. As of 12 March, one police officer remained in hospital. Five hundred members of the public were advised to decontaminate their possessions to prevent possible long-term exposure, and 180 members of the military and 18 vehicles were deployed to assist with decontamination at locations in and around Salisbury. Up to 38 people in Salisbury have been affected by the agent to an undetermined extent.
Daniel Gerstein, a former senior official at the U.S. Department of Homeland Security, said it was possible that Novichok nerve agents had been used before in Britain to assassinate Kremlin targets, but had not been detected: "It's entirely likely that we have seen someone expire from this and not realised it. We realised in this case because they were found unresponsive on a park bench. Had it been a higher dose, maybe they would have died and we would have thought it was natural causes."
On 20 March 2018, Ahmet Üzümcü, Director-General of the OPCW, said that it would take "another two to three weeks to finalise the analysis" of samples taken from the poisoning of Skripal. On 3 April 2018, the Defence Science and Technology Laboratory announced that it was "completely confident" that the agent used was Novichok, although they still did not know the "precise source" of the agent. Experts said that their findings did not challenge the conclusions reached by the UK government: "We provided that information to the Government who have then used a number of other sources to come to the conclusions that they have." On 12 April 2018 the OPCW announced that their investigations agreed with the conclusions made by the UK about the identity of the chemical used.
By September 2018, two Russian "tourists", "Alexander Petrov" and "Ruslan Boshirov", had been identified as suspects. They told Margarita Simonyan, the chief editor of RT television, in an interview that they both worked in the sports nutrition business and that: "Those are our real names. We're afraid to go out, we fear for ourselves, our lives and lives of our loved ones." The Crown Prosecution Service announced that enough evidence had been obtained by that date "to convict the two men" of the attack, although it did not apply to Russia "for their extradition because Russia does not extradite its own nationals. [...] However, a European Arrest Warrant has been obtained in case they travel to the EU".
In February 2019, the Bellingcat website published precise allegations identifying GRU Major Denis Vyacheslavovich Sergeev as a man who travelled to London in March 2018 under the false identity of Sergei Fedotov. Bellingcat also claimed, citing detailed photographic evidence along with phone, travel, passport, and motoring database records, that GRU Colonels Alexander Mishkin and Anatoly Chepiga had assumed the identities of Petrov and Boshirov and placed the poison on Skripal's doorknob. On 28 June 2019, it was reported that Sergeev had received instructions from his GRU superior by cell phone on more than ten occasions during his UK visits.
Poisoning of Charlie Rowley and Dawn Sturgess
On 30 June 2018, Charlie Rowley and Dawn Sturgess were found unconscious at a house in Amesbury, Wiltshire, about eight miles from the Salisbury poisoning site. On 4 July 2018, police said that the pair had been poisoned with the same nerve agent as ex-Russian spy Sergei Skripal.
On 8 July 2018, Sturgess died as a result of the poisoning. Rowley regained consciousness and began recovering in hospital. He told his brother Matthew the nerve agent had been in a small perfume or aftershave bottle, which they had found in a park about nine days before spraying themselves with it. The police later closed and fingertip-searched Queen Elizabeth Gardens in Salisbury.
Poisoning of Emilian Gebrev
In the aftermath of the Skripal poisoning, investigative journalists were able to track some of the people involved to Bulgaria as well. In this way, another suspected poisoning, dating back to April 2015 during their stay in the country, was linked to a Novichok nerve agent. The victim was the Bulgarian arms dealer Emilian Gebrev, who offered two hypotheses as to why he might have been attacked: the first relates to the fact that his arms manufacturing company Dunarit exports defence equipment to Ukraine; the other to an attempt by an offshore company to take over Dunarit. The takeover attempt was ultimately linked to the influential Bulgarian politician and oligarch Delyan Peevski, who has historically been funded by Russia's state-owned VTB Bank. In November 2023 Bulgaria sought the extradition of three Russian GRU officers, Sergey Fedotov, Georgi Gorshkov and Sergey Pavlov, suspected of the poisoning. "Sergei Fedotov" was also the alias used by one of the assassins in the Salisbury poisonings.
Poisoning of Alexei Navalny
On 20 August 2020, Russian opposition leader Alexei Navalny fell ill during a flight from Tomsk to Moscow. The plane made an emergency landing in Omsk, where Navalny was hospitalized and put in a medically induced coma. His family suspected his illness was caused by a poison put into a cup of tea he drank before the flight. He was evacuated to the Charité hospital in Berlin, Germany, the following day. On 2 September, the German government said that it had "unequivocal evidence" that Navalny was poisoned by a Novichok agent after tests at a German military lab and had called on the Russian government for an explanation, with labs in France and Sweden corroborating the findings.
On 4 September, the North Atlantic Council was briefed by the German representative on the "appalling assassination attempt on" Navalny. In a post-meeting press conference, Secretary-General Jens Stoltenberg said that NATO allies "agree that Russia has serious questions it must answer", that the OPCW needed to conduct an impartial investigation, that "those responsible for this attack must be brought to justice" and called on Russia to "provide complete disclosure of the Novichok programme to the OPCW."
Navalny was brought out of his induced coma on 7 September.
On 6 October, the OPCW confirmed the presence of a cholinesterase inhibitor from the Novichok group in Navalny's blood and urine samples. At the same time, the OPCW report clarified that Navalny was poisoned with a new type of Novichok, which was not included in the list of controlled chemicals of the Chemical Weapons Convention.
See also
Poison laboratory of the Soviet secret services
Russia and weapons of mass destruction
List of Novichok agents
A-230
A-232
A-234
A-242
A-262 (Novichok-7)
C01-A035
C01-A039
C01-A042
References
Explanatory notes
Citations
General and cited references
Further reading
External links
Acetylcholinesterase inhibitors
Cold War weapons of the Soviet Union
Nerve agents
Organophosphates
Science and technology in the Soviet Union
Soviet chemical weapons program
Soviet inventions | Novichok | [
"Chemistry"
] | 6,730 | [
"Nerve agents",
"Chemical weapons"
] |
10,133,505 | https://en.wikipedia.org/wiki/Glucogenic%20amino%20acid | A glucogenic amino acid (or glucoplastic amino acid) is an amino acid that can be converted into glucose through gluconeogenesis. This is in contrast to the ketogenic amino acids, which are converted into ketone bodies.
The production of glucose from glucogenic amino acids involves these amino acids being converted to alpha keto acids and then to glucose, with both processes occurring in the liver. This mechanism predominates during catabolysis, rising as fasting and starvation increase in severity.
As an example, consider alanine, a glucogenic amino acid that the liver can use to produce glucose through gluconeogenesis.
Muscle cells break down protein when blood glucose levels fall, which happens during fasting or periods of intense exercise. The breakdown process releases alanine, which is then transferred to the liver. There, through a transamination process, alanine is changed into pyruvate. Following this, pyruvate is transformed into oxaloacetate, a crucial intermediate in the gluconeogenesis pathway. Glucose can then be synthesized from oxaloacetate, ensuring that the blood glucose levels required for the body to produce energy are maintained. This shuttle of carbon from muscle to liver is known as the glucose–alanine cycle.
In humans, the glucogenic amino acids are:
Alanine
Arginine
Asparagine
Aspartic acid
Cysteine
Glutamic acid
Glutamine
Glycine
Histidine
Methionine
Proline
Serine
Valine
Amino acids that are both glucogenic and ketogenic, known as amphibolic (mnemonic "PITTT"):
Phenylalanine
Isoleucine
Threonine
Tryptophan
Tyrosine
Only leucine and lysine are not glucogenic (they are only ketogenic).
Glucogenic and ketogenic amino acids are classified according to the metabolic pathways they enter after being broken down. Glucogenic amino acids can be converted into intermediates that feed the gluconeogenesis metabolic pathway, which produces glucose. When necessary, these amino acids can be used to generate glucose. As previously stated, because they can be transformed into glucose via a variety of metabolic pathways, the majority of amino acids (apart from leucine and lysine) are regarded as glucogenic. Alternatively, the breakdown of ketogenic amino acids results in the ketogenic precursors acetyl-CoA and acetoacetate. These substances undergo a process called ketogenesis that produces ketone bodies like acetoacetate, beta-hydroxybutyrate, and acetone.
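This classification can be captured in a small lookup table. The following Python sketch (set and function names are illustrative, not from any standard library) encodes the lists above:

```python
GLUCOGENIC_ONLY = {
    "alanine", "arginine", "asparagine", "aspartic acid", "cysteine",
    "glutamic acid", "glutamine", "glycine", "histidine", "methionine",
    "proline", "serine", "valine",
}
BOTH = {"phenylalanine", "isoleucine", "threonine", "tryptophan", "tyrosine"}
KETOGENIC_ONLY = {"leucine", "lysine"}

def classify(amino_acid: str) -> str:
    """Classify a standard amino acid by its catabolic fate in humans."""
    aa = amino_acid.lower()
    if aa in GLUCOGENIC_ONLY:
        return "glucogenic"
    if aa in BOTH:
        return "glucogenic and ketogenic"
    if aa in KETOGENIC_ONLY:
        return "ketogenic"
    raise ValueError(f"not a classified standard amino acid: {amino_acid}")

print(classify("Alanine"))  # glucogenic
print(classify("Leucine"))  # ketogenic
```

Note that the three sets cover all 20 standard amino acids (13 + 5 + 2).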
See also
Glycolysis
List of standard amino acids
Metabolism
References
External links
Amino acid metabolism
Chapter on Amino acid catabolism in Biochemistry by Jeremy Berg, John Tymoczko, Lubert Stryer. Fourth ed. by Lubert Stryer. Accessed 2007-03-17
Amino acid metabolism
Amino acids
Glucogenic amino acids
Nitrogen cycle
Medical mnemonics | Glucogenic amino acid | [
"Chemistry"
] | 632 | [
"Amino acids",
"Biomolecules by chemical classification",
"Nitrogen cycle",
"Metabolism"
] |
10,137,513 | https://en.wikipedia.org/wiki/Drag-divergence%20Mach%20number | The drag-divergence Mach number (not to be confused with critical Mach number) is the Mach number at which the aerodynamic drag on an airfoil or airframe begins to increase rapidly as the Mach number continues to increase. This increase can cause the drag coefficient to rise to more than ten times its low-speed value.
The value of the drag-divergence Mach number is typically greater than 0.6; therefore it is a transonic effect. The drag-divergence Mach number is usually close to, and always greater than, the critical Mach number. Generally, the drag coefficient peaks at Mach 1.0 and begins to decrease again after the transition into the supersonic regime above approximately Mach 1.2.
The large increase in drag is caused by the formation of a shock wave on the upper surface of the airfoil, which can induce flow separation and adverse pressure gradients on the aft portion of the wing. This effect requires that aircraft intended to fly at supersonic speeds have a large amount of thrust. In early development of transonic and supersonic aircraft, a steep dive was often used to provide extra acceleration through the high-drag region around Mach 1.0.
This steep increase in drag gave rise to the popular false notion of an unbreakable sound barrier, because it seemed that no aircraft technology in the foreseeable future would have enough propulsive force or control authority to overcome it. Indeed, one of the popular analytical methods for calculating drag at high speeds, the Prandtl–Glauert rule, predicts an infinite amount of drag at Mach 1.0.
Two of the important technological advancements that arose out of attempts to conquer the sound barrier were the Whitcomb area rule and the supercritical airfoil. A supercritical airfoil is shaped specifically to make the drag-divergence Mach number as high as possible, allowing aircraft to fly with relatively lower drag at high subsonic and low transonic speeds. These, along with other advancements including computational fluid dynamics, have been able to reduce the factor of increase in drag to two or three for modern aircraft designs.
Drag-divergence Mach numbers $M_{dd}$ for a given family of propeller airfoils can be approximated by Korn's relation:

$$M_{dd} + \frac{c_l}{10} + \frac{t}{c} = K$$

where
$M_{dd}$ is the drag-divergence Mach number,
$c_l$ is the coefficient of lift of a specific section of the airfoil,
t is the airfoil thickness at a given section,
c is the chord length at a given section,
$K$ is a factor established through CFD analysis:
K = 0.87 for conventional airfoils (6 series),
K = 0.95 for supercritical airfoils.
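The relation is straightforward to evaluate. The following Python sketch (function name and example values are illustrative) rearranges it to solve for the drag-divergence Mach number:

```python
def drag_divergence_mach(cl: float, t_over_c: float, k: float = 0.87) -> float:
    """Approximate the drag-divergence Mach number via Korn's relation,
    Mdd + cl/10 + t/c = K, solved for Mdd.

    cl       -- section lift coefficient
    t_over_c -- thickness-to-chord ratio of the section
    k        -- technology factor (0.87 conventional, 0.95 supercritical)
    """
    return k - cl / 10.0 - t_over_c

# Example: a conventional 12%-thick section at cl = 0.4 gives
# Mdd ≈ 0.87 - 0.04 - 0.12 = 0.71.
print(drag_divergence_mach(0.4, 0.12))
```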
See also
Coffin corner
Critical Mach number
Sound barrier
Speed of sound
Supercritical airfoil
Wave drag
Notes
Drag (physics)
Aircraft wing design | Drag-divergence Mach number | [
"Chemistry"
] | 565 | [
"Drag (physics)",
"Fluid dynamics"
] |
10,137,896 | https://en.wikipedia.org/wiki/Tensor%20product%20of%20quadratic%20forms | In mathematics, the tensor product of quadratic forms is most easily understood when one views the quadratic forms as quadratic spaces. If R is a commutative ring where 2 is invertible, and if and are two quadratic spaces over R, then their tensor product is the quadratic space whose underlying R-module is the tensor product of R-modules and whose quadratic form is the quadratic form associated to the tensor product of the bilinear forms associated to and .
In particular, the form satisfies
(which does uniquely characterize it however). It follows from this that if the quadratic forms are diagonalizable (which is always possible if 2 is invertible in R), i.e.,
then the tensor product has diagonalization
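As a worked example of the diagonalization rule above, over a field in which 2 is invertible:

$$\langle 1, 1 \rangle \otimes \langle 1, -1 \rangle \cong \langle 1\cdot 1,\; 1\cdot(-1),\; 1\cdot 1,\; 1\cdot(-1) \rangle = \langle 1, -1, 1, -1 \rangle,$$

so the tensor product of the sum-of-two-squares form with the form $x^2 - y^2$ is the orthogonal sum of two copies of $x^2 - y^2$.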
References
Quadratic forms
Tensors | Tensor product of quadratic forms | [
"Mathematics",
"Engineering"
] | 166 | [
"Algebra stubs",
"Tensors",
"Number theory",
"Quadratic forms",
"Algebra"
] |
10,144,966 | https://en.wikipedia.org/wiki/Solenoid%20%28DNA%29 | The solenoid structure of chromatin is a model for the structure of the 30 nm fibre. It is a secondary chromatin structure which helps to package eukaryotic DNA into the nucleus.
Background
Chromatin was first discovered by Walther Flemming by using aniline dyes to stain it. In 1974, it was first proposed by Roger Kornberg that chromatin was based on a repeating unit of a histone octamer and around 200 base pairs of DNA.
The solenoid model was first proposed by John Finch and Aaron Klug in 1976. They used electron microscopy images and X-ray diffraction patterns to determine their model of the structure. This was the first model to be proposed for the structure of the 30 nm fibre.
Structure
DNA in the nucleus is wrapped around nucleosomes, which are histone octamers formed of core histone proteins; two histone H2A-H2B dimers, two histone H3 proteins, and two histone H4 proteins. The primary chromatin structure, the least-packed form, is the 11 nm, or “beads on a string” form, where DNA is wrapped around nucleosomes at relatively regular intervals, as Roger Kornberg proposed.
Histone H1 protein binds to the site where DNA enters and exits the nucleosome, wrapping 147 base pairs around the histone core and stabilising the nucleosome; this structure is a chromatosome. In the solenoid structure, the nucleosomes fold up and are stacked, forming a helix. They are connected by bent linker DNA which positions sequential nucleosomes adjacent to one another in the helix. The nucleosomes are positioned with the histone H1 proteins facing toward the centre, where they form a polymer. Finch and Klug determined that the helical structure had only one start point because they mostly observed a small pitch of about 11 nm, which is approximately the same as the diameter of a nucleosome. There are approximately 6 nucleosomes in each turn of the helix. Finch and Klug actually observed a wide range of nucleosomes per turn but they put this down to flattening.
Finch and Klug's electron microscopy images lacked visible detail, so they were unable to determine helical parameters other than the pitch. More recent electron microscopy images have been able to define the dimensions of solenoid structures and have identified the solenoid as a left-handed helix. The structure of solenoids is insensitive to changes in the length of the linker DNA.
Function
The solenoid structure's most obvious function is to help package the DNA so that it is small enough to fit into the nucleus. This is a big task, as the nucleus of a mammalian cell has a diameter of approximately 6 μm, whilst the DNA in one human cell would stretch to just over 2 metres long if it were unwound. The "beads on a string" structure compacts DNA about 7-fold; the solenoid structure increases this to about 40-fold.
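As a rough worked calculation using the figures above (treating the compaction factors as linear and approximate):

$$\frac{2\ \mathrm{m}}{7} \approx 0.29\ \mathrm{m}, \qquad \frac{2\ \mathrm{m}}{40} = 0.05\ \mathrm{m},$$

so even at 40-fold compaction the 30 nm fibre would still be roughly 5 cm long, vastly larger than a ~6 μm nucleus; the tertiary packaging described below closes the remaining gap.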
When DNA is compacted into the solenoid structure, it can still be transcriptionally active in certain areas. It is the secondary chromatin structure that is important for transcriptional repression, as in vivo active genes are assembled in large tertiary chromatin structures.
Formation
There are many factors that affect whether the solenoid structure will form or not. Some factors alter the structure of the 30 nm fibre, and some prevent it from forming in that region altogether.
The concentration of ions, particularly divalent cations affects the structure of the 30 nm fibre, which is why Finch and Klug were not able to form solenoid structures in the presence of chelating agents.
There is an acidic patch on the surface of histone H2A and histone H2B proteins which interacts with the tails of histone H4 proteins in adjacent nucleosomes. These interactions are important for solenoid formation. Histone variants can affect solenoid formation, for example H2A.Z is a histone variant of H2A, and it has a more acidic patch than the one on H2A, so H2A.Z would have a stronger interaction with histone H4 tails and probably contribute to solenoid formation.
The histone H4 tail is essential for formation of 30 nm fibres. However, acetylation of core histone tails affects the folding of chromatin by destabilising interactions between the DNA and the nucleosomes, making histone modulation a key factor in solenoid structure. Acetylation of H4K16 (the lysine which is the 16th amino acid from the N-terminal of histone H4) inhibits 30 nm fibre formation.
To decompact the 30 nm fibre, for instance to transcriptionally activate it, both H4K16 acetylation and removal of the histone H1 proteins are required.
Further packaging
Chromatin can form a tertiary chromatin structure and be compacted even further than the solenoid structure by forming supercoils which have a diameter of around 700 nm. This supercoil is formed by regions of DNA called scaffold/matrix attachment regions (SMARs) attaching to a central scaffolding matrix in the nucleus creating loops of solenoid chromatin between 4.5 and 112 kilobase pairs long. The central scaffolding matrix itself forms a spiral shape for an additional layer of compaction.
Alternative models
Several other models have been proposed and there is still a lot of uncertainty about the structure of the 30 nm fibre.
Even the more recent research produces conflicting information. There is data from electron microscopy measurements of the 30 nm fibre dimensions that has physical constraints which mean it can only be modelled with a one-start helical structure like the solenoid structure. It also shows there is no linear relationship between the length of the linker DNA and the dimensions (instead there are two distinct classes). There is also data from experiments which cross-linked nucleosomes that shows a two-start structure.
There is evidence that suggests both the solenoid and zig-zag (two-start) structures are present in 30 nm fibres. It is possible that chromatin structure may not be as ordered as previously thought, or that the 30 nm fibre may not even be present in situ.
Two-start twisted-ribbon model
The two-start twisted-ribbon model was proposed in 1981 by Worcel, Strogatz and Riley. This structure involves alternating nucleosomes stacking to form two parallel helices, with the linker DNA zig-zagging up and down the helical axis.
Two-start cross-linker model
The two-start cross-linker model was proposed in 1986 by Williams et al. This structure, like the two-start twisted-ribbon model, involves alternating nucleosomes stacking to form two parallel helices, but the nucleosomes are on opposite sides of the helices with the linker DNA crossing across the centre of the helical axis.
Superbead model
The superbead model was proposed by Renz in 1977. This structure is not helical like the other models, it instead consists of discrete globular structures along the chromatin which vary in size.
Some alternative forms of DNA packaging
The chromatin in mammalian sperm is the most condensed form of eukaryotic DNA, it is packaged by protamines rather than nucleosomes, whilst prokaryotes package their DNA through supercoiling.
References
External links
Aaron Klug tells his life story at the Web of Stories: The Solenoid Model
Molecular genetics
DNA | Solenoid (DNA) | [
"Chemistry",
"Biology"
] | 1,609 | [
"Molecular genetics",
"Molecular biology"
] |
19,216,160 | https://en.wikipedia.org/wiki/Equivalent%20impedance%20transforms | An equivalent impedance is an equivalent circuit of an electrical network of impedance elements which presents the same impedance between all pairs of terminals as did the given network. This article describes mathematical transformations between some passive, linear impedance networks commonly found in electronic circuits.
There are a number of very well known and often used equivalent circuits in linear network analysis. These include resistors in series, resistors in parallel and the extension to series and parallel circuits for capacitors, inductors and general impedances. Also well known are the Norton and Thévenin equivalent current generator and voltage generator circuits respectively, as is the Y-Δ transform. None of these are discussed in detail here; the individual linked articles should be consulted.
The number of equivalent circuits that a linear network can be transformed into is unbounded. Even in the most trivial cases this can be seen to be true, for instance, by asking how many different combinations of resistors in parallel are equivalent to a given combined resistor. The number of series and parallel combinations that can be formed grows exponentially with the number of resistors, n. For large n the size of the set has been found by numerical techniques to be approximately 2.53^n and analytically strict bounds are given by a Farey sequence of Fibonacci numbers. This article could never hope to be comprehensive, but there are some generalisations possible. Wilhelm Cauer found a transformation that could generate all possible equivalents of a given rational, passive, linear one-port, or in other words, any given two-terminal impedance. Transformations of 4-terminal, especially 2-port, networks are also commonly found and transformations of yet more complex networks are possible.
The vast scale of the topic of equivalent circuits is underscored in a story told by Sidney Darlington. According to Darlington, a large number of equivalent circuits were found by Ronald M. Foster, following his and George Campbell's 1920 paper on non-dissipative four-ports. In the course of this work they looked at the ways four ports could be interconnected with ideal transformers and maximum power transfer. They found a number of combinations which might have practical applications and asked the AT&T patent department to have them patented. The patent department replied that it was pointless just patenting some of the circuits if a competitor could use an equivalent circuit to get around the patent; they should patent all of them or not bother. Foster therefore set to work calculating every last one of them. He arrived at an enormous total of 83,539 equivalents (577,722 if different output ratios are included). This was too many to patent, so instead the information was released into the public domain in order to prevent any of AT&T's competitors from patenting them in the future.
2-terminal, 2-element-kind networks
A single impedance has two terminals to connect to the outside world, hence can be described as a 2-terminal, or a one-port, network. Despite the simple description, there is no limit to the number of meshes, and hence complexity and number of elements, that the impedance network may have. 2-element-kind networks are common in circuit design; filters, for instance, are often LC-kind networks and printed circuit designers favour RC-kind networks because inductors are less easy to manufacture. Transformations are simpler and easier to find than for 3-element-kind networks. One-element-kind networks can be thought of as a special case of two-element-kind. It is possible to use the transformations in this section on a certain few 3-element-kind networks by substituting a network of elements for element Zn. However, this is limited to a maximum of two impedances being substituted; the remainder will not be a free choice. All the transformation equations given in this section are due to Otto Zobel.
3-element networks
One-element networks are trivial and two-element, two-terminal networks are either two elements in series or two elements in parallel, also trivial. The smallest number of elements that is non-trivial is three, and there are two 2-element-kind non-trivial transformations possible, one being both the reverse transformation and the topological dual of the other.
4-element networks
There are four non-trivial 4-element transformations for 2-element-kind networks. Two of these are the reverse transformations of the other two and two are the dual of a different two. Further transformations are possible in the special case of Z2 being made the same element kind as Z1, that is, when the network is reduced to one-element-kind. The number of possible networks continues to grow as the number of elements is increased.
2-terminal, n-element, 3-element-kind networks
Simple networks with just a few elements can be dealt with by formulating the network equations "by hand" with the application of simple network theorems such as Kirchhoff's laws. Equivalence is proved between two networks by directly comparing the two sets of equations and equating coefficients. For large networks more powerful techniques are required. A common approach is to start by expressing the network of impedances as a matrix. This approach is only good for rational networks. Any network that includes distributed elements, such as a transmission line, cannot be represented by a finite matrix. Generally, an n-mesh network requires an n×n matrix to represent it. For instance the matrix for a 3-mesh network might look like

$$\mathbf{Z} = \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & Z_{23} \\ Z_{31} & Z_{32} & Z_{33} \end{bmatrix}$$

The entries of the matrix are chosen so that the matrix forms a system of linear equations in the mesh voltages and currents (as defined for mesh analysis):

$$\begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & Z_{23} \\ Z_{31} & Z_{32} & Z_{33} \end{bmatrix} \begin{bmatrix} I_1 \\ I_2 \\ I_3 \end{bmatrix}$$
The example network of Figure 1, for instance, can be represented as a specific impedance matrix of this form, with the associated system of linear equations following in the same way.
In the most general case, each branch Zp of the network may be made up of three elements so that

$$Z_p = sL_p + R_p + \frac{1}{sC_p}$$

where L, R and C represent inductance, resistance, and capacitance respectively and s is the complex frequency operator $s = \sigma + i\omega$.
This is the conventional way of representing a general impedance but for the purposes of this article it is mathematically more convenient to deal with elastance, D, the inverse of capacitance, C. In those terms the general branch impedance can be represented by

$$Z_p = sL_p + R_p + \frac{D_p}{s}$$
Likewise, each entry of the impedance matrix can consist of the sum of three elements. Consequently, the matrix can be decomposed into three n×n matrices, one for each of the three element kinds:

$$[\mathbf{Z}] = s[\mathbf{L}] + [\mathbf{R}] + \frac{1}{s}[\mathbf{D}]$$
It is desired that the matrix [Z] represent an impedance, Z(s). For this purpose, the loop of one of the meshes is cut and Z(s) is the impedance measured between the points so cut. It is conventional to assume the external connection port is in mesh 1, and is therefore connected across matrix entry Z11, although it would be perfectly possible to formulate this with connections to any desired nodes. In the following discussion Z(s) taken across Z11 is assumed. Z(s) may be calculated from [Z] by

$$Z(s) = \frac{|\mathbf{Z}|}{z_{11}}$$

where z11 is the complement of Z11 and |Z| is the determinant of [Z].
For the example network above, Z(s) follows by evaluating this determinant and cofactor for the corresponding matrix.
This result is easily verified to be correct by the more direct method of resistors in series and parallel. However, such methods rapidly become tedious and cumbersome with the growth of the size and complexity of the network under analysis.
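The determinant-and-cofactor calculation is easy to mechanise. The following Python sketch is a minimal numeric illustration; the 2-mesh resistive network used here is hypothetical, not the network of Figure 1:

```python
import numpy as np

def driving_point_impedance(z: np.ndarray) -> float:
    """Evaluate Z = |Z| / z11 for a mesh impedance matrix at one fixed
    complex frequency, where z11 is the cofactor of the Z11 entry."""
    cofactor_11 = np.linalg.det(z[1:, 1:])
    return np.linalg.det(z) / cofactor_11

# Hypothetical 2-mesh resistive network: a 1 Ω resistor in series with
# the parallel combination of 2 Ω and 3 Ω; the 2 Ω branch is shared by
# both meshes, so it appears in the off-diagonal entries with a minus sign.
z = np.array([[1.0 + 2.0, -2.0],
              [-2.0, 2.0 + 3.0]])
print(driving_point_impedance(z))  # 1 + (2*3)/(2+3) = 2.2
```

The printed value agrees with the direct series–parallel calculation, which is exactly the kind of cross-check described above.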
The entries of [R], [L] and [D] cannot be set arbitrarily. For [Z] to be able to realise the impedance Z(s) then [R],[L] and [D] must all be positive-definite matrices. Even then, the realisation of Z(s) will, in general, contain ideal transformers within the network. Finding only those transforms that do not require mutual inductances or ideal transformers is a more difficult task. Similarly, if starting from the "other end" and specifying an expression for Z(s), this again cannot be done arbitrarily. To be realisable as a rational impedance, Z(s) must be positive-real. The positive-real (PR) condition is both necessary and sufficient but there may be practical reasons for rejecting some topologies.
A general impedance transform for finding equivalent rational one-ports from a given instance of [Z] is due to Wilhelm Cauer. The group of real affine transformations

$$[\mathbf{Z}'] = \mathbf{T}^{\mathrm{T}}\,[\mathbf{Z}]\,\mathbf{T}$$

where $\mathbf{T}$ is any real non-singular matrix whose first row is $(1, 0, \ldots, 0)$,

is invariant in Z(s). That is, all the transformed networks are equivalents according to the definition given here. If the Z(s) for the initial given matrix is realisable, that is, it meets the PR condition, then all the transformed networks produced by this transformation will also meet the PR condition.
3 and 4-terminal networks
When discussing 4-terminal networks, network analysis often proceeds in terms of 2-port networks, which covers a vast array of practically useful circuits. "2-port", in essence, refers to the way the network has been connected to the outside world: that the terminals have been connected in pairs to a source or load. It is possible to take exactly the same network and connect it to external circuitry in such a way that it is no longer behaving as a 2-port. This idea is demonstrated in Figure 2.
A 3-terminal network can also be used as a 2-port. To achieve this, one of the terminals is connected in common to one terminal of both ports. In other words, one terminal has been split into two terminals and the network has effectively been converted to a 4-terminal network. This topology is known as unbalanced topology and is opposed to balanced topology. Balanced topology requires, referring to Figure 3, that the impedance measured between terminals 1 and 3 is equal to the impedance measured between 2 and 4. These are the pairs of terminals not forming ports; the case where the pairs of terminals forming the ports have equal impedance is instead referred to as symmetrical. Strictly speaking, any network that does not meet the balance condition is unbalanced, but the term most often refers to the 3-terminal topology described above and in Figure 3. Transforming an unbalanced 2-port network into a balanced network is usually quite straightforward: all series-connected elements are divided in half with one half being relocated in what was the common branch. Transforming from balanced to unbalanced topology will often be possible with the reverse transformation but there are certain cases of certain topologies which cannot be transformed in this way. For example, see the discussion of lattice transforms below.
An example of a 3-terminal network transform that is not restricted to 2-ports is the Y-Δ transform. This is a particularly important transform for finding equivalent impedances. Its importance arises from the fact that the total impedance between two terminals cannot be determined solely by calculating series and parallel combinations except for a certain restricted class of network. In the general case additional transformations are required. The Y-Δ transform, its inverse the Δ-Y transform, and the n-terminal analogues of these two transforms (star-polygon transforms) represent the minimal additional transforms required to solve the general case. Series and parallel are, in fact, the 2-terminal versions of star and polygon topology. A common simple topology that cannot be solved by series and parallel combinations is the input impedance to a bridge network (except in the special case when the bridge is in balance). The rest of the transforms in this section are all restricted to use with 2-ports only.
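As a concrete sketch of one such transform, the standard Δ→Y conversion for resistances can be written as follows (Python; the function name is illustrative):

```python
def delta_to_wye(r_ab: float, r_bc: float, r_ca: float):
    """Convert a delta (Δ) of resistances into the equivalent wye (Y).

    r_ab, r_bc, r_ca are the delta resistances between terminal pairs
    A-B, B-C and C-A. Returns (r_a, r_b, r_c), the wye resistances
    attached to terminals A, B and C respectively.
    """
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total
    r_b = r_ab * r_bc / total
    r_c = r_bc * r_ca / total
    return r_a, r_b, r_c

# Example: a balanced 30 Ω delta is equivalent to a 10 Ω wye.
print(delta_to_wye(30.0, 30.0, 30.0))  # (10.0, 10.0, 10.0)
```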
Lattice transforms
Symmetric 2-port networks can be transformed into lattice networks using Bartlett's bisection theorem. The method is limited to symmetric networks but this includes many topologies commonly found in filters, attenuators and equalisers. The lattice topology is intrinsically balanced, there is no unbalanced counterpart to the lattice and it will usually require more components than the transformed network.
Reverse transformations from a lattice to an unbalanced topology are not always possible in terms of passive components. For instance, this transform cannot be realised with passive components because of the negative values arising in the transformed circuit. It can however be realised if mutual inductances and ideal transformers are permitted, for instance, in this circuit. Another possibility is to permit the use of active components which would enable negative impedances to be directly realised as circuit components.
It can sometimes be useful to make such a transformation, not for the purposes of actually building the transformed circuit, but rather, for the purposes of aiding understanding of how the original circuit is working. The following circuit in bridged-T topology is a modification of a mid-series m-derived filter T-section. The circuit is due to Hendrik Bode who claims that the addition of the bridging resistor of a suitable value will cancel the parasitic resistance of the shunt inductor. The action of this circuit is clear if it is transformed into T topology – in this form there is a negative resistance in the shunt branch which can be made to be exactly equal to the positive parasitic resistance of the inductor.
Any symmetrical network can be transformed into any other symmetrical network by the same method, that is, by first transforming into the intermediate lattice form (omitted for clarity from the above example transform) and from the lattice form into the required target form. As with the example, this will generally result in negative elements except in special cases.
Eliminating resistors
A theorem due to Sidney Darlington states that any PR function Z(s) can be realised as a lossless two-port terminated in a positive resistor R. That is, regardless of how many resistors feature in the matrix [Z] representing the impedance network, a transform can be found that will realise the network entirely as an LC-kind network with just one resistor across the output port (which would normally represent the load). No resistors within the network are necessary in order to realise the specified response. Consequently, it is always possible to reduce 3-element-kind 2-port networks to 2-element-kind (LC) 2-port networks provided the output port is terminated in a resistance of the required value.
Eliminating ideal transformers
An elementary transformation that can be done with ideal transformers and some other impedance element is to shift the impedance to the other side of the transformer. In all the following transforms, r is the turns ratio of the transformer.
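The underlying relation, stated here as a standard result (assuming r is defined as the ratio of primary to secondary turns), is that an impedance Z on the secondary side of an ideal transformer is equivalent to the impedance

$$Z' = r^2 Z$$

seen from the primary side.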
These transforms do not just apply to single elements; entire networks can be passed through the transformer. In this manner, the transformer can be shifted around the network to a more convenient location. Darlington gives an equivalent transform that can eliminate an ideal transformer altogether. This technique requires that the transformer is next to (or capable of being moved next to) an "L" network of same-kind impedances. The transform in all variants results in the "L" network facing the opposite way, that is, topologically mirrored.
Example 3 shows the result is a Π-network rather than an L-network. The reason for this is that the shunt element has more capacitance than is required by the transform so some is still left over after applying the transform. If the excess were instead in the element nearest the transformer, this could be dealt with by first shifting the excess to the other side of the transformer before carrying out the transform.
Terminology
References
Bibliography
Bartlett, A. C., "An extension of a property of artificial lines", Phil. Mag., vol 4, p. 902, November 1927.
Belevitch, V., "Summary of the history of circuit theory", Proceedings of the IRE, vol 50, Iss 5, pp. 848–855, May 1962.
E. Cauer, W. Mathis, and R. Pauli, "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems, Perpignan, June, 2000.
Foster, Ronald M.; Campbell, George A., "Maximum output networks for telephone substation and repeater circuits", Transactions of the American Institute of Electrical Engineers, vol.39, iss.1, pp. 230–290, January 1920.
Darlington, S., "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", IEEE Trans. Circuits and Systems, vol 31, pp. 3–13, 1984.
Farago, P. S., An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961.
Khan, Sameen Ahmed, "Farey sequences and resistor networks", Proceedings of the Indian Academy of Sciences (Mathematical Sciences), vol.122, iss.2, pp. 153–162, May 2012.
Zobel, O. J.,Theory and Design of Uniform and Composite Electric Wave Filters, Bell System Technical Journal, Vol. 2 (1923), pp. 1–46.
Circuit theorems
Filter theory
Analog circuits
Electronic design | Equivalent impedance transforms | [
"Physics",
"Engineering"
] | 3,521 | [
"Telecommunications engineering",
"Equations of physics",
"Electronic design",
"Analog circuits",
"Filter theory",
"Electronic engineering",
"Circuit theorems",
"Design",
"Physics theorems"
] |
19,216,990 | https://en.wikipedia.org/wiki/Metapsychology | Metapsychology (Greek: meta 'beyond, transcending', and ψυχολογία 'psychology') is that aspect of a psychological theory that discusses the terms that are essential to it, but leaves aside or transcends the phenomena that the theory deals with. Psychology refers to the concrete conditions of the human psyche, metapsychology to psychology itself. (Cf. also the comparison of metaphysics and physics)
The term is used mostly in discourse about psychoanalysis, the psychology developed by Sigmund Freud. In general, his metapsychology represents a technical elaboration of his structural model of the psyche, which divides the organism into three instances: the id is considered the germ from which the ego and the superego emerge. Driven by an energy that Freud called libido in direct reference to Plato's Eros, the instances complement each other through their specific functions in a similar way to the parts of a microscope or the organelles of a cell. More precisely defined, metapsychology describes ‘a way of observation in which every psychic process is analysed according to the three coordinates of dynamics, topics and economy’. Topics refers to the arrangement of these processes in space, dynamics to their movements (their variability, also in time), and economy to the energetic reservoir (libido) that drives all life processes, is used up in the process, and therefore needs to be replenished through nutrition.
These precise concepts led Freud to say that their unified presentation would make it possible to achieve the highest goal of psychology, namely the development of a comprehensively founded model of health. Such an idea is crucial for the diagnostic process because illnesses - the treatment and prevention of which is the focus of all medical activity - can only be recognised in contrast to or as deviations from a state of health.
Freud left this central part of his work to future analysts in the unfinished state of a torso since, as he stated, the fields of knowledge required to complete metapsychology were barely developed or did not exist in the first half of the 20th century. This refers above all to ethological primate research and its extension to the field of anthropology. Freud considered findings from these areas of knowledge indispensable, because without them it is not possible to examine and, where necessary, correct his hypothesis of natural social coexistence in the primordial horde postulated by Darwin (presented for discussion in Totem and Taboo). The same applies to the hypothetical abolition of horde life through the introduction of monogamy by a corresponding agreement among the sons who killed the primal father of the horde. For the same reasons, Freud's claim also extends to the assumed origin of moral codes of behavior (totemism), the differentiation of sexual from social and intellectual needs (instinctively formed communities versus consciously conceived political superstructures; foundations of belief and knowledge systems), and much more. In Moses and Monotheism, the author refers one last time to the lack of primate research at the time.
The empirical foundations of Freudian metapsychology are neurological processes and close relationships to Darwin's theory of evolution. The libidinal energy, which according to this metapsychology drives all biological and mental processes through its inherent desire, represents in a certain sense a teleological thesis.
More recently, psychoanalysis has been regarded as a hermeneutics of understanding, with relations to Freud's literary sources, especially Sophocles and, to a lesser extent, Goethe and Shakespeare. Interest in the possible scientific status of psychoanalysis has been renewed in the emerging discipline of neuropsychoanalysis, whose major exemplar is Mark Solms. The hermeneutic vision of psychoanalysis is the focus of influential works by Donna Orange.
Freud and the als ob problem
Psychoanalytic metapsychology is concerned with the fundamental structure and concepts of Freudian theory. Sigmund Freud first used the term on 13 February 1896 in a letter to Wilhelm Fliess, to refer to his addition of unconscious processes to the conscious ones of traditional psychology. On March 10, 1898, he wrote to Fliess: "It seems to me that () the theory of wish fulfillment has brought only the psychological solution and not the biological - or, rather, metapsychical - one. (I am going to ask you seriously, by the way, whether I may use the name metapsychology for my psychology that leads behind consciousness)." Three years after completing his unpublished Project for a Scientific Psychology, Freud's optimism had completely vanished. In a letter dated September 22 of that year he told Fliess: "I am not at all in disagreement with you, not at all inclined to leave psychology hanging in the air without an organic basis. But apart from this conviction, I do not know how to go on, neither theoretically nor therapeutically, and therefore must behave as if [als läge] only the psychological were under consideration. Why I cannot fit it together [the organic and the psychological] I have not even begun to fathom". "When, in his 'Autobiographical Study' of 1925, Freud called his metapsychology a 'speculative superstructure'...the elements of which could be abandoned or changed once proven inadequate, he was, in the terminology of Kant's Critique of Judgment, proposing a psychology als ob or as if – a heuristic model of mental functioning that did not necessarily correspond with external reality."
A salient example of Freud's own metapsychology is his characterization of psychoanalysis as a "simultaneously closed system, fundamentally unrelated and impervious to the external world and as an open system inherently connected and responsive to environmental influence".
In the 1910s, Freud wrote a series of twelve essays, to be collected as Preliminaries to a Metapsychology. Five of these were published independently under the titles: "Instincts and Their Vicissitudes," "Repression," "The Unconscious," "A Metapsychological Supplement to the Theory of Dreams," and "Mourning and Melancholia." The remaining seven remained unpublished, an expression of Freud's ambivalence about his own attempts to articulate the whole of his vision of psychoanalysis. In 1919 he wrote to Lou Andreas-Salome, "Where is my Metapsychology? In the first place it remains unwritten". In 1920 he published Beyond the Pleasure Principle, a text with metaphysical ambitions.
Midcentury psychoanalyst David Rapaport defined the term thus: "Books on psychoanalysis usually deal with its clinical theory... there exists, however, a fragmentary—yet consistent—general theory of psychoanalysis, which comprises the premises of the special (clinical) theory, the concepts built on it, and the generalizations derived from it... named metapsychology."
Freud's metapsychology
The topographical point of view: the psyche operates at different levels of consciousness - unconscious, preconscious, and conscious
The dynamic point of view: the notion that there are psychological forces which may conflict with one another at work in the psyche
The economic point of view: the psyche contains charges of energy which are transferred from one element of the psyche to another
The structural point of view: the psyche consists of configurations of psychological processes which operate in different ways and reveal different rates of change - the ego, the id, and the superego
The genetic point of view: the origins - or "genesis" - of psychological processes can be found in developmentally previous psychological processes
Ego psychologist Heinz Hartmann also added "the adaptive point of view" to Freud's metapsychology, although Lacan, who interpreted metapsychology as the symbolic, the Real, and the imaginary, said "the dimension discovered by analysis is the opposite of anything which progresses through adaptation."
Criticism
Freud's metapsychology has faced criticism, mainly from ego psychology. Object relations theorists such as Melanie Klein shifted the focus away from intrapsychic conflicts and towards the dynamics of interpersonal relationships, leading to a unifocal theory of development that focused on the mother-child relationship. Most ego psychologists saw the structural point of view, Freud's latest metapsychology, as the most important. Some proposed that only the structural point of view be kept in metapsychology, because the topographical point of view made an unnecessary distinction between the unconscious and the preconscious (Arlow & Brenner) and because the economic point of view was viewed as redundant (Gill).
See also
Philosophy of mind
References
Further reading
1890s neologisms
Behavioural sciences
Philosophy of psychology
Psychoanalytic theory | Metapsychology | [
"Biology"
] | 1,781 | [
"Behavioural sciences",
"Behavior"
] |
15,275,923 | https://en.wikipedia.org/wiki/Glycoinformatics | Glycoinformatics is a field of bioinformatics that pertains to the study of carbohydrates involved in protein post-translational modification. It broadly includes (but is not restricted to) database, software, and algorithm development for the study of carbohydrate structures, glycoconjugates, enzymatic carbohydrate synthesis and degradation, as well as carbohydrate interactions. Conventional usage of the term does not currently include the treatment of carbohydrates from the better-known nutritive aspect.
Issues to consider
Even though glycosylation is the most common form of protein modification, and carbohydrate structures are highly complex, bioinformatics coverage of the glycome is still very poor.
Unlike proteins and nucleic acids, which are linear, carbohydrates are often branched and extremely complex. For instance, just four sugars can be strung together to form more than 5 million different types of carbohydrates, and nine different sugars may be assembled into 15 million possible four-sugar chains.
Also, the number of simple sugars that make up glycans is more than the number of nucleotides that make up DNA or RNA. Therefore, it is more computationally expensive to evaluate their structures.
One of the main constraints in glycoinformatics is the difficulty of representing sugars in sequence form, especially due to their branching nature. Owing to the lack of a genetic blueprint, carbohydrates do not have a "fixed" sequence. Instead, the sequence is largely determined by the presence of a variety of enzymes, their kinetic differences, and variations in the biosynthetic micro-environment of the cells. This complicates both the analysis and the experimental reproducibility of the carbohydrate structure of interest. It is for this reason that carbohydrates are often considered "information poor" molecules.
Databases
Table of major glyco-databases.
References
Bioinformatics
Carbohydrate chemistry
Glycomics | Glycoinformatics | [
"Chemistry",
"Engineering",
"Biology"
] | 437 | [
"Biological engineering",
"Glycomics",
"Bioinformatics",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
15,285,305 | https://en.wikipedia.org/wiki/Allotropes%20of%20iron | At atmospheric pressure, three allotropic forms of iron exist, depending on temperature: alpha iron (α-Fe, ferrite), gamma iron (γ-Fe, austenite), and delta iron (δ-Fe). At very high pressure, a fourth form exists, epsilon iron (ε-Fe, hexaferrum). Some controversial experimental evidence suggests the existence of a fifth high-pressure form that is stable at very high pressures and temperatures.
The phases of iron at atmospheric pressure are important because of the differences in solubility of carbon, forming different types of steel. The high-pressure phases of iron are important as models for the solid parts of planetary cores. The inner core of the Earth is generally assumed to consist essentially of a crystalline iron-nickel alloy with ε structure. The outer core surrounding the solid inner core is believed to be composed of liquid iron mixed with nickel and trace amounts of lighter elements.
Standard pressure allotropes
Alpha iron (α-Fe)
Below 912 °C (1,674 °F), iron has a body-centered cubic (bcc) crystal structure and is known as α-iron or ferrite. It is thermodynamically stable and a fairly soft metal. α-Fe can be subjected to pressures up to ca. 15 GPa before transforming into a high-pressure form termed ε-Fe discussed below.
Magnetically, α-iron is paramagnetic at high temperatures. However, below its Curie temperature (TC or A2) of 771 °C (1044 K or 1420 °F), it becomes ferromagnetic. In the past, the paramagnetic form of α-iron was known as beta iron (β-Fe). Even though the slight tetragonal distortion in the ferromagnetic state does constitute a true phase transition, the continuous nature of this transition means it is of only minor importance in steel heat treating. The A2 line forms the boundary between the beta iron and alpha fields in the phase diagram in Figure 1.
Similarly, the A2 boundary is of only minor importance compared to the A1 (eutectoid), A3 and Acm critical temperatures. The Acm, where austenite is in equilibrium with cementite + γ-Fe, is beyond the right edge in Fig. 1. The α + γ phase field is, technically, the β + γ field above the A2. The beta designation maintains continuity of the Greek-letter progression of phases in iron and steel: α-Fe, β-Fe, austenite (γ-Fe), high-temperature δ-Fe, and high-pressure hexaferrum (ε-Fe).
The primary phase of low-carbon or mild steel and most cast irons at room temperature is ferromagnetic α-Fe. It has a hardness of approximately 80 Brinell. The maximum solubility of carbon is about 0.02 wt% at 727 °C (1,341 °F) and 0.001% at 0 °C (32 °F). When it dissolves in iron, carbon atoms occupy interstitial "holes". Being about twice the diameter of the tetrahedral hole, the carbon introduces a strong local strain field.
Mild steel (carbon steel with up to about 0.2 wt% C) consists mostly of α-Fe and increasing amounts of cementite (Fe3C, an iron carbide). The mixture adopts a lamellar structure called pearlite. Since bainite and pearlite each contain α-Fe as a component, any iron-carbon alloy will contain some amount of α-Fe if it is allowed to reach equilibrium at room temperature. The amount of α-Fe depends on the cooling process.
A2 critical temperature and induction heating
β-Fe and the A2 critical temperature are important in induction heating of steel, such as for surface-hardening heat treatments. Steel is typically austenitized at 900–1000 °C before it is quenched and tempered. The high-frequency alternating magnetic field of induction heating heats the steel by two mechanisms below the Curie temperature: resistance or Joule heating and ferromagnetic hysteresis losses. Above the A2 boundary, the hysteresis mechanism disappears and the required amount of energy per degree of temperature increase is thus substantially larger than below A2. Load-matching circuits may be needed to vary the impedance in the induction power source to compensate for the change.
Gamma iron (γ-Fe)
When heating iron above 912 °C (1,674 °F), its crystal structure changes to a face-centered cubic (fcc) crystalline structure. In this form it is called gamma iron (γ-Fe) or austenite. γ-iron can dissolve considerably more carbon (as much as 2.04% by mass at 1,146 °C). This γ form of iron is exhibited in austenitic stainless steel.
Delta iron (δ-Fe)
Peculiarly, above 1,394 °C (2,541 °F), iron changes back into the bcc structure, known as δ-Fe. δ-iron can dissolve as much as 0.08% of carbon by mass at 1,475 °C. It is stable up to its melting point of 1,538 °C (2,800 °F). δ-Fe cannot exist above 5.2 GPa, with austenite instead transitioning directly to a molten phase at these high pressures.
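Taken together, the three standard-pressure allotropes partition the temperature axis into simple intervals. The following minimal Python sketch (the function name and return labels are illustrative, not from any standard library) encodes the transition temperatures quoted above:

```python
def iron_allotrope(temp_c: float) -> str:
    """Return the stable iron allotrope at atmospheric pressure
    for a temperature in degrees Celsius (boundaries from the text)."""
    if temp_c < 912:
        return "alpha (bcc ferrite)"   # ferromagnetic below the 771 degC Curie point
    elif temp_c < 1394:
        return "gamma (fcc austenite)"
    elif temp_c < 1538:
        return "delta (bcc)"
    else:
        return "liquid"

# Example: a typical austenitizing temperature for steel
print(iron_allotrope(950))   # gamma (fcc austenite)
```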
High pressure allotropes
Epsilon iron / Hexaferrum (ε-Fe)
At pressures above approximately 10–13 GPa and temperatures up to around 700 K, α-iron changes into a hexagonal close-packed (hcp) structure, which is also known as ε-iron or hexaferrum; the higher-temperature γ-phase also changes into ε-iron, but generally requires far higher pressures as temperature increases. The triple point of hexaferrum, ferrite, and austenite is 10.5 GPa at 750 K. Antiferromagnetism in alloys of epsilon-Fe with Mn, Os and Ru has been observed.
Experimental high temperature and pressure
An alternate stable form, if it exists, may appear at pressures of at least 50 GPa and temperatures of at least 1,500 K; it has been thought to have an orthorhombic or a double hcp structure. Recent and ongoing experiments are being conducted on high-pressure and superdense carbon allotropes.
Phase transitions
Melting and boiling points
The melting point of iron is experimentally well defined for pressures less than 50 GPa.
For greater pressures, published data (as of 2007) put the γ-ε-liquid triple point at pressures that differ by tens of gigapascals and 1000 K in the melting point. Generally speaking, molecular dynamics computer simulations of iron melting and shock wave experiments suggest higher melting points and a much steeper slope of the melting curve than static experiments carried out in diamond anvil cells.
The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier group 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d subshell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium.
Structural phase transitions
The exact temperatures at which iron transitions from one crystal structure to another depend on how much and what type of other elements are dissolved in the iron. The phase boundary between the different solid phases is drawn on a binary phase diagram, usually plotted as temperature versus percent iron. Adding some elements, such as chromium, narrows the temperature range for the gamma phase, while others increase the temperature range of the gamma phase. In elements that reduce the gamma phase range, the alpha-gamma phase boundary connects with the gamma-delta phase boundary, forming what is usually called the gamma loop. Adding gamma-loop additives keeps the iron in a body-centered cubic structure and prevents the steel from undergoing phase transitions to other solid states.
See also
Tempering (metallurgy)
References
Iron
Iron
Metallurgy | Allotropes of iron | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,699 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Metallurgy",
"Materials science",
"Materials",
"nan",
"Matter"
] |
15,285,777 | https://en.wikipedia.org/wiki/Rebar%20spacer | A rebar spacer is a short, rod-like device used to secure reinforcing steel bars, or rebar, within cast assemblies for reinforced concrete structures. The rebar spacers are fixed before the concrete is poured and remain within the structure.
The main categories of rebar spacers are:
Linear Spacers (Section profiles, H-section profiles, or 3-dimensional shapes),
Point Spacers (wheel spacers, various tower or chair-like shapes)
Rebar spacers can be divided into three raw materials categories:
Concrete spacers
Plastic spacers
Metal spacers
Each of these categories offers advantages specific to certain uses.
Plastic spacers (manufactured from polymers) offer a classic, fast solution that is easily laid on formwork. The newest products are non-PVC spacers, which carry an added environmental value.
Concrete spacers (manufactured from fibre reinforced concrete) are used in heavy-weight applications, increased fire safety requirement construction (such as tunnels), and pre-cast concrete systems.
Metal spacers are typically used for keeping distance between more than one layer of rebar reinforcement.
The concrete spacers use the same raw material as the pour, which improves the water tightness and strength of the concrete. Plastic spacers have the advantage of low-cost production and fast processing.
Function
The engineering study of every reinforced concrete construction, whether it is a building, a bridge, a bearing wall, or another structure, dictates the positioning of steel rebars at specific positions in the volume of concrete (predicted concrete cover of steel reinforcement bars). This cover varies between 10 mm and 100 mm.
The statics of every concrete construction is designed in such a way that steel and concrete properties are combined in order to achieve the greatest possible strength for the particular construction (e.g. earthquake resistance), as well as to prevent the long-term corrosion of steel that would weaken the construction.
The function of rebar spacers is to maintain the precise positioning of steel reinforcement, so that the theoretical design specifications can be implemented in the concrete construction. This includes ensuring that the steel cover of a given structural element (such as a concrete slab or a beam) is appropriate and generally uniform within the element.
The use of spacers is particularly important in areas with high earthquake activity in combination with corrosive environments (like proximity to the salt water of the sea), for example Japan, Iran, Greece, California, etc.
Plastic versus concrete spacers and bar supports
Plastic spacers and bar supports
Plastic spacers and bar supports do not bond well with concrete and are not compatible materials. Plastic has mechanical properties (holds the bar in position) but no structural properties, and is a foreign element within the construction.
When the concrete is poured into the form, a small gap is created between the concrete and the plastic. Plastic has a coefficient of thermal expansion and contraction 10 to 15 times that of concrete. When subjected to temperature variations, the plastic continues to expand and contract at the higher coefficient.
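To get a feel for the magnitude of this mismatch, here is a rough, illustrative estimate in Python; the 10-15x ratio comes from the text above, while the absolute coefficient for concrete, the spacer dimension and the temperature swing are assumptions chosen only for the example:

```python
# Illustrative differential-expansion estimate. Only the 10-15x CTE ratio
# comes from the text; all other numbers are assumed for the example.
ALPHA_CONCRETE = 10e-6            # 1/degC, a typical value for concrete
ALPHA_PLASTIC = 12 * ALPHA_CONCRETE   # within the quoted 10-15x range

spacer_size_mm = 50.0             # hypothetical spacer dimension
delta_t = 40.0                    # hypothetical seasonal temperature swing, degC

differential_mm = (ALPHA_PLASTIC - ALPHA_CONCRETE) * spacer_size_mm * delta_t
print(f"Differential movement: {differential_mm:.2f} mm")   # ~0.22 mm
```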
At elevated temperatures, plastic may melt. Consequently, this leads to a disconnection between the spacers and the concrete that has been cast. Such separation establishes an unobstructed pathway for corrosive substances to access the steel reinforcement from the concrete product's exterior. This process initiates the corrosion of the steel, which ultimately extends to the concrete.
If steam curing is applied to the concrete, the heat in the curing process causes the plastic to expand while the concrete is relatively fresh and weak. After reaching the maximum curing temperature and volume expansion of the plastic, the temperature is held at this level until the concrete reaches the desired strength. After curing, the subsequent lower temperatures cause the plastic to contract, and a gap remains at the interface between the plastic and concrete.
Plastic spacers are also subject to corrosion when they come into contact with chlorides and chemicals, whereas concrete has a much higher resistance.
Concrete spacers and bar supports
Concrete spacers and bar supports are often made of a material matching the poured concrete, so thermal expansion and contraction are equal. As a result, the concrete and spacers bond without gaps. Often these spacers are manufactured from extruded fibre-reinforced concrete, which improves crack resistance.
Concrete spacers and bar supports help maintain material integrity and uniformity of the concrete, and provide a cover over the reinforcement that protects against corrosion.
Concrete spacers with a plastic clip or other fixing mechanism
Concrete spacers with a plastic clip or fixing mechanism do not have a negative effect on material integrity and do not weaken the corrosion protective cover over the reinforcement.
The plastic clip or fixing mechanism is hinged from the top of the spacer and does not come into contact with the soffit of the concrete. The plastic clip or fastening mechanism is incorporated at a depth of merely 5 mm into the spacer, thereby preserving the material's integrity at the surface of the product.
This plastic component in the clip or fastening mechanism serves exclusively for attachment and securing of the reinforcement, allowing the concrete segment to fulfil the spacer's functional role.
References
Building engineering
Building materials
Reinforced concrete | Rebar spacer | [
"Physics",
"Engineering"
] | 1,074 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Civil engineering",
"Matter",
"Architecture"
] |
5,976,076 | https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor%20receptor | The fibroblast growth factor receptors (FGFR) are, as their name implies, receptors that bind to members of the fibroblast growth factor (FGF) family of proteins. Some of these receptors are involved in pathological conditions. For example, a point mutation in FGFR3 can lead to achondroplasia.
Structure
The fibroblast growth factor receptors consist of an extracellular ligand domain composed of three immunoglobulin-like domains, a single transmembrane helix domain, and an intracellular domain with tyrosine kinase activity. These receptors bind fibroblast growth factors, members of the largest family of growth factor ligands, comprising 22 members.
The natural alternate splicing of four fibroblast growth factor receptor (FGFR) genes results in the production of over 48 different isoforms of FGFR. These isoforms vary in their ligand-binding properties and kinase domains, however all share the common extracellular region composed of three immunoglobulin(Ig)-like domains (D1-D3), and thus belong to the immunoglobulin superfamily.
The three immunoglobulin (Ig)-like domains—D1, D2, and D3—present a stretch of acidic amino acids ("the acid box") between D1 and D2. This "acid box" can participate in the regulation of FGF binding to the FGFR. Immunoglobulin-like domains D2 and D3 are sufficient for FGF binding. Each receptor can be activated by several FGFs. In many cases, the FGFs themselves can also activate more than one receptor (e.g., FGF1, which binds all seven principal FGFRs). FGF7, however, can only activate FGFR2b, and FGF18 was recently shown to activate FGFR3.
A gene for a fifth FGFR protein, FGFR5, has also been identified. In contrast to FGFRs 1-4, it lacks a cytoplasmic tyrosine kinase domain, and one isoform, FGFR5γ, only contains the extracellular domains D1 and D2. The FGFRs are known to dimerize as heterodimers and homodimers.
Genes
So far, five distinct membrane FGFR proteins have been identified in vertebrates; four of them (FGFR1 to FGFR4) belong to the receptor tyrosine kinase superfamily.
(see also Fibroblast growth factor receptor 1) (= CD331)
(see also Fibroblast growth factor receptor 2) (= CD332)
(see also Fibroblast growth factor receptor 3) (= CD333)
(see also Fibroblast growth factor receptor 4) (= CD334)
(see also Fibroblast growth factor receptor-like 1)
As a drug target
The FGF/FGFR signalling pathway is involved in a variety of cancers.
There are non-selective FGFR inhibitors that act on all of FGFR1-4 and other proteins, and some selective FGFR inhibitors for some/all of FGFR1-4. Selective FGFR inhibitors include AZD4547, BGJ398, JNJ42756493, and PD173074.
References
External links
GeneReviews/NIH/NCBI/UW entry on FGFR-Related Craniosynostosis Syndromes
FGF signaling (with refs)
Tyrosine kinase receptors | Fibroblast growth factor receptor | [
"Chemistry"
] | 760 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
5,977,290 | https://en.wikipedia.org/wiki/Tetrahedral%20prism | In geometry, a tetrahedral prism is a convex uniform 4-polytope. This 4-polytope has 6 polyhedral cells: 2 tetrahedra connected by 4 triangular prisms. It has 14 faces: 8 triangular and 6 square. It has 16 edges and 8 vertices.
It is one of 18 uniform polyhedral prisms created by using uniform prisms to connect pairs of parallel Platonic solids and Archimedean solids.
Images
Alternative names
Tetrahedral dyadic prism (Norman W. Johnson)
Tepe (Jonathan Bowers: for tetrahedral prism)
Tetrahedral hyperprism
Digonal antiprismatic prism
Digonal antiprismatic hyperprism
Structure
The tetrahedral prism is bounded by two tetrahedra and four triangular prisms. The triangular prisms are joined to each other via their square faces, and are joined to the two tetrahedra via their triangular faces.
Projections
The tetrahedron-first orthographic projection of the tetrahedral prism into 3D space has a tetrahedral projection envelope. Both tetrahedral cells project onto this tetrahedron, while the triangular prisms project to its faces.
The triangular-prism-first orthographic projection of the tetrahedral prism into 3D space has a projection envelope in the shape of a triangular prism. The two tetrahedral cells are projected onto the triangular ends of the prism, each with a vertex that projects to the center of the respective triangular face. An edge connects these two vertices through the center of the projection. The prism may be divided into three non-uniform triangular prisms that meet at this edge; these 3 volumes correspond with the images of three of the four triangular prismic cells. The last triangular prismic cell projects onto the entire projection envelope.
The edge-first orthographic projection of the tetrahedral prism into 3D space is identical to its triangular-prism-first parallel projection.
The square-face-first orthographic projection of the tetrahedral prism into 3D space has a cuboidal envelope (see diagram). Each triangular prismic cell projects onto half of the cuboidal volume, forming two pairs of overlapping images. The tetrahedral cells project onto the top and bottom square faces of the cuboid.
Related polytopes
It is the first in an infinite series of uniform antiprismatic prisms.
The tetrahedral prism, -131, is first in a dimensional series of uniform polytopes, expressed by Coxeter as k31 series. The tetrahedral prism is the vertex figure for the second, the rectified 5-simplex. The fifth figure is a Euclidean honeycomb, 331, and the final is a noncompact hyperbolic honeycomb, 431. Each uniform polytope in the sequence is the vertex figure of the next.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
Norman Johnson Uniform Polytopes, Manuscript (1991)
External links
Uniform 4-polytopes | Tetrahedral prism | [
"Physics"
] | 632 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
5,977,418 | https://en.wikipedia.org/wiki/Spiegelman%27s%20Monster | Spiegelman's Monster is an RNA chain of only 218 nucleotides that is able to be reproduced by the RNA replication enzyme RNA-dependent RNA polymerase, also called RNA replicase. It is named after its creator, Sol Spiegelman, of the University of Illinois at Urbana-Champaign who first described it in 1965.
Description
Spiegelman introduced RNA from the simple bacteriophage Qβ into a solution which contained Qβ's RNA replicase, some free nucleotides, and some salts. In this environment, the RNA started to be replicated. After a while, Spiegelman took some RNA and moved it to another tube with fresh solution. This process was repeated.
Shorter RNA chains were able to be replicated faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. This short RNA sequence replicated very quickly in these unnatural circumstances.
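The selection dynamics described above (shorter templates replicate faster under repeated serial transfer) can be caricatured in a few lines of Python. This is a toy model with invented rate constants and mutation sizes, not a simulation of the actual biochemistry:

```python
import random

# Toy model: replication speed inversely proportional to template length.
population = [4500] * 100              # start with full-length (4,500 nt) genomes

for generation in range(74):
    offspring = []
    for length in population:
        copies = int(20000 / length)            # shorter strands copy more often
        for _ in range(copies):
            deletion = random.randint(0, 60)    # occasional truncation errors
            # floor at 218 nt, the minimum kept by replicase binding
            offspring.append(max(218, length - deletion))
    # Serial transfer: carry a small random sample into fresh medium.
    population = random.sample(offspring, min(100, len(offspring)))

print(f"mean length after 74 transfers: {sum(population)/len(population):.0f} nt")
```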
Further work
M. Sumper and R. Luce of Manfred Eigen's laboratory replicated the experiment, except without adding RNA, only RNA bases and Qβ replicase. They found that under the right conditions the Qβ replicase can spontaneously generate RNA which evolves into a form similar to Spiegelman's Monster.
Eigen built on Spiegelman's work and produced a similar system further degraded to just 48 or 54 nucleotides—the minimum required for the binding of the replication enzyme, this time a combination of HIV-1 reverse transcriptase and T7 RNA polymerase.
See also
Abiogenesis
RNA world hypothesis
PAH world hypothesis
Viroid
References
External links
ASA - January 2000: almost life
Not-so-Final Answers - The origin of life
Origin of life
RNA
Molecular evolution | Spiegelman's Monster | [
"Chemistry",
"Biology"
] | 378 | [
"Evolutionary processes",
"Origin of life",
"Molecular evolution",
"Molecular biology",
"Biological hypotheses"
] |
5,977,620 | https://en.wikipedia.org/wiki/Automaton%20clock | An automaton clock or automata clock is a type of striking clock featuring automatons. Clocks like these were built from the 1st century BC through to Victorian times in Europe. A Cuckoo clock is a simple form of this type of clock.
The first known mention is of those created by the Roman engineer Vitruvius, describing early alarm clocks working with gongs or trumpets. Later automatons usually perform on the hour, half-hour or quarter-hour, usually to strike bells. Common figures in older clocks include Death (as a reference to human mortality), Old Father Time, saints and angels. In the Regency and Victorian eras, common figures also included royalty, famous composers or industrialists.
More recently constructed automaton clocks are widespread in Japan, where they are known as karakuri-dokei. Notable examples of such clocks include the Ni-Tele Really Big Clock, designed by Hayao Miyazaki to be affixed on the Nippon Television headquarters in Tokyo, touted to be the largest animated clock in the world. In the United Kingdom, Kit Williams produced a series of large automaton clocks for a handful of British shopping centres, featuring frogs, ducks and fish. Seiko and Rhythm Clock are known for their battery-powered musical clocks, which frequently feature flashing lights, automatons and other moving parts designed to attract attention while in motion.
References
Mechanical engineering
Clock designs
Articles containing video clips
Karakuri | Automaton clock | [
"Physics",
"Technology",
"Engineering"
] | 295 | [
"Machines",
"Applied and interdisciplinary physics",
"Karakuri",
"Physical systems",
"Mechanical engineering"
] |
5,978,615 | https://en.wikipedia.org/wiki/Time-evolving%20block%20decimation | The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.
Introduction
Considering the inherent difficulties of simulating general quantum many-body systems, the exponential increase in parameters with the size of the system, and correspondingly, the high computational costs, one solution would be to look for numerical methods that deal with special cases, where one can profit from the physics of the system. The raw approach, by directly dealing with all the parameters used to fully characterize a quantum many-body system is seriously impeded by the lavishly exponential buildup with the system size of the amount of variables needed for simulation, which leads, in the best cases, to unreasonably long computational times and extended use of memory. To get around this problem a number of various methods have been developed and put into practice in the course of time, one of the most successful ones being the quantum Monte Carlo method (QMC). Also the density matrix renormalization group (DMRG) method, next to QMC, is a very reliable method, with an expanding community of users and an increasing number of applications to physical systems.
When the first quantum computer is plugged in and functioning, the perspectives for the field of computational physics will look rather promising, but until that day one has to restrict oneself to the mundane tools offered by classical computers. While experimental physicists are putting a lot of effort in trying to build the first quantum computer, theoretical physicists are searching, in the field of quantum information theory (QIT), for genuine quantum algorithms, appropriate for problems that would perform badly when trying to be solved on a classical computer, but pretty fast and successful on a quantum one. The search for such algorithms is still going, the best-known (and almost the only ones found) being the Shor's algorithm, for factoring large numbers, and Grover's search algorithm.
In the field of QIT one has to identify the primary resources necessary for genuine quantum computation. Such a resource may be responsible for the speedup gain of quantum versus classical computation; identifying it also means identifying systems that can be simulated in a reasonably efficient manner on a classical computer. Such a resource is quantum entanglement; hence, it is possible to establish a distinct lower bound for the entanglement needed for quantum computational speedups.
Guifré Vidal, then at the Institute for Quantum Information, Caltech, has recently proposed a scheme useful for simulating a certain category of quantum systems. He asserts that "any quantum computation with pure states can be efficiently simulated with a classical computer provided the amount of entanglement involved is sufficiently restricted".
This happens to be the case with generic Hamiltonians displaying local interactions, as for example, Hubbard-like Hamiltonians. The method exhibits a low-degree polynomial behavior in the increase of computational time with respect to the amount of entanglement present in the system. The algorithm is based on a scheme that exploits the fact that in these one-dimensional systems the eigenvalues of the reduced density matrix on a bipartite split of the system are exponentially decaying, thus allowing us to work in a re-sized space spanned by the eigenvectors corresponding to the eigenvalues we selected.
One can also estimate the amount of computational resources required for the simulation of a quantum system on a classical computer, knowing how the entanglement contained in the system scales with the size of the system. The classically (and quantum, as well) feasible simulations are those that involve systems only lightly entangled—the strongly entangled ones being, on the other hand, good candidates only for genuine quantum computations.
The numerical method is efficient in simulating real-time dynamics or calculations of ground states using imaginary-time evolution or isentropic interpolations between a target Hamiltonian and a Hamiltonian with an already-known ground state. The computational time scales linearly with the system size, hence many-particles systems in 1D can be investigated.
A useful feature of the TEBD algorithm is that it can be reliably employed for time evolution simulations of time-dependent Hamiltonians, describing systems that can be realized with cold atoms in optical lattices, or in systems far from equilibrium in quantum transport. From this point of view, TEBD had a certain ascendance over DMRG, a very powerful technique, but until recently not very well suited for simulating time-evolutions. With the Matrix Product States formalism being at the mathematical heart of DMRG, the TEBD scheme was adopted by the DMRG community, thus giving birth to the time-dependent DMRG, t-DMRG for short.
Around the same time, other groups have developed similar approaches in which quantum information plays a predominant role as, for example, in DMRG implementations for periodic boundary conditions, and for studying mixed-state dynamics in one-dimensional quantum lattice systems. Those last approaches actually provide a formalism that is more general than the original TEBD approach, as it also allows dealing with evolutions with matrix product operators; this enables the simulation of nontrivial non-infinitesimal evolutions as opposed to the TEBD case, and is a crucial ingredient for dealing with higher-dimensional analogues of matrix product states.
The decomposition of state
Introducing the decomposition of state
Consider a chain of N qubits, described by the state $|\Psi\rangle \in H^{\otimes N}$. The most natural way of describing $|\Psi\rangle$ would be using the local M-dimensional basis $|i_1 i_2 \dots i_N\rangle$:

$$|\Psi\rangle = \sum_{i_1,\dots,i_N=1}^{M} c_{i_1 i_2 \dots i_N}\,|i_1\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_N\rangle,$$

where M is the on-site dimension.

The trick of TEBD is to re-write the coefficients $c_{i_1 i_2 \dots i_N}$ as a contraction of local tensors:

$$c_{i_1 i_2 \dots i_N} = \sum_{\alpha_1,\dots,\alpha_{N-1}} \Gamma^{[1]\,i_1}_{\alpha_1}\,\lambda^{[1]}_{\alpha_1}\,\Gamma^{[2]\,i_2}_{\alpha_1\alpha_2}\,\lambda^{[2]}_{\alpha_2}\cdots\Gamma^{[N]\,i_N}_{\alpha_{N-1}}.$$

This form, known as a matrix product state, simplifies the calculations greatly.
To understand why, one can look at the Schmidt decomposition of a state, which uses singular value decomposition to express a state with limited entanglement more simply.
The Schmidt decomposition
Consider the state $|\Psi\rangle \in H_A \otimes H_B$ of a bipartite system. Every such state can be represented in an appropriately chosen basis as:

$$|\Psi\rangle = \sum_{\alpha=1}^{\chi} \lambda_{\alpha}\,|\Phi^{A}_{\alpha}\rangle \otimes |\Phi^{B}_{\alpha}\rangle,$$

where the $|\Phi^{A}_{\alpha}\rangle$ are vectors that make an orthonormal basis in $H_A$ and, correspondingly, the vectors $|\Phi^{B}_{\alpha}\rangle$ form an orthonormal basis in $H_B$, with the coefficients $\lambda_{\alpha}$ being real and positive, $\sum_{\alpha}\lambda_{\alpha}^{2}=1$. This is called the Schmidt decomposition (SD) of a state. In general the summation goes up to $\min(\dim H_A, \dim H_B)$. The Schmidt rank of a bipartite split is given by the number of non-zero Schmidt coefficients. If the Schmidt rank is one, the split is characterized by a product state. The vectors of the SD are determined up to a phase and the eigenvalues and the Schmidt rank are unique.
For example, the two-qubit state:

$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$$

has the following SD:

$$|\Psi\rangle = \lambda_1\,|0\rangle_A|0\rangle_B + \lambda_2\,|1\rangle_A|1\rangle_B$$

with

$$\lambda_1 = \lambda_2 = \tfrac{1}{\sqrt{2}}.$$

On the other hand, the state:

$$|\Psi\rangle = \tfrac{1}{2}\big(|00\rangle + |01\rangle + |10\rangle + |11\rangle\big)$$

is a product state:

$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big) \otimes \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big).$$
Building the decomposition of state
At this point we know enough to try to see how we explicitly build the decomposition (let's call it D).

Consider the bipartite splitting $[1]:[2\dots N]$. The SD is:

$$|\Psi\rangle = \sum_{\alpha_1} \lambda^{[1]}_{\alpha_1}\,|\Phi^{[1]}_{\alpha_1}\rangle|\Phi^{[2\dots N]}_{\alpha_1}\rangle.$$

By expanding the $|\Phi^{[1]}_{\alpha_1}\rangle$'s in the local basis, one can write:

$$|\Psi\rangle = \sum_{i_1,\alpha_1} \Gamma^{[1]\,i_1}_{\alpha_1}\,\lambda^{[1]}_{\alpha_1}\,|i_1\rangle|\Phi^{[2\dots N]}_{\alpha_1}\rangle.$$

The process can be decomposed in three steps, iterated for each bond (and, correspondingly, SD) in the chain:

Step 1: express the $|\Phi^{[2\dots N]}_{\alpha_1}\rangle$'s in a local basis for qubit 2:

$$|\Phi^{[2\dots N]}_{\alpha_1}\rangle = \sum_{i_2} |i_2\rangle\,|\tau^{[3\dots N]}_{\alpha_1 i_2}\rangle.$$

The vectors $|\tau^{[3\dots N]}_{\alpha_1 i_2}\rangle$ are not necessarily normalized.

Step 2: write each vector $|\tau^{[3\dots N]}_{\alpha_1 i_2}\rangle$ in terms of the at most $\chi$ (Vidal's emphasis) Schmidt vectors $|\Phi^{[3\dots N]}_{\alpha_2}\rangle$ and, correspondingly, coefficients $\lambda^{[2]}_{\alpha_2}$:

$$|\tau^{[3\dots N]}_{\alpha_1 i_2}\rangle = \sum_{\alpha_2} \Gamma^{[2]\,i_2}_{\alpha_1\alpha_2}\,\lambda^{[2]}_{\alpha_2}\,|\Phi^{[3\dots N]}_{\alpha_2}\rangle.$$

Step 3: make the substitutions and obtain:

$$|\Psi\rangle = \sum_{i_1,i_2,\alpha_1,\alpha_2} \Gamma^{[1]\,i_1}_{\alpha_1}\,\lambda^{[1]}_{\alpha_1}\,\Gamma^{[2]\,i_2}_{\alpha_1\alpha_2}\,\lambda^{[2]}_{\alpha_2}\,|i_1\rangle|i_2\rangle|\Phi^{[3\dots N]}_{\alpha_2}\rangle.$$

Repeating the steps 1 to 3, one can construct the whole decomposition of state D. The last $\Gamma$'s are a special case, like the first ones, expressing the right-hand Schmidt vectors at the bond $[N-1]:[N]$ in terms of the local basis at the last lattice place. It is then straightforward to obtain the Schmidt decomposition at bond $[k]:[k+1]$, i.e. $|\Psi\rangle = \sum_{\alpha_k} \lambda^{[k]}_{\alpha_k}\,|\Phi^{[1\dots k]}_{\alpha_k}\rangle|\Phi^{[k+1\dots N]}_{\alpha_k}\rangle$, from D.

The Schmidt eigenvalues are given explicitly in D: they are the $\lambda^{[k]}_{\alpha_k}$.

The Schmidt eigenvectors are simply:

$$|\Phi^{[1\dots k]}_{\alpha_k}\rangle = \sum_{i_1\dots i_k}\sum_{\alpha_1\dots\alpha_{k-1}} \Gamma^{[1]\,i_1}_{\alpha_1}\lambda^{[1]}_{\alpha_1}\cdots\lambda^{[k-1]}_{\alpha_{k-1}}\Gamma^{[k]\,i_k}_{\alpha_{k-1}\alpha_k}\,|i_1\dots i_k\rangle$$

and

$$|\Phi^{[k+1\dots N]}_{\alpha_k}\rangle = \sum_{i_{k+1}\dots i_N}\sum_{\alpha_{k+1}\dots\alpha_{N-1}} \Gamma^{[k+1]\,i_{k+1}}_{\alpha_k\alpha_{k+1}}\lambda^{[k+1]}_{\alpha_{k+1}}\cdots\Gamma^{[N]\,i_N}_{\alpha_{N-1}}\,|i_{k+1}\dots i_N\rangle.$$
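In practice the decomposition is built numerically by a left-to-right sweep of singular value decompositions. The following NumPy sketch is illustrative (the function name and array conventions are mine, and no truncation is applied yet):

```python
import numpy as np

def vidal_decomposition(psi, N, M=2, tol=1e-12):
    """Build Vidal's Gamma/lambda form of an N-site state vector psi
    (length M**N) by sweeping left to right with SVDs."""
    gammas, lambdas = [], []
    chi_left, lam_left = 1, np.ones(1)
    rest = psi.reshape(chi_left * M, -1)
    for site in range(N - 1):
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = s > tol                       # drop numerically zero Schmidt values
        u, s, vh = u[:, keep], s[keep], vh[keep, :]
        # u = lambda_left * Gamma, so divide the left lambda back out
        gammas.append(u.reshape(chi_left, M, -1) / lam_left[:, None, None])
        lambdas.append(s)
        chi_left, lam_left = s.size, s
        rest = (s[:, None] * vh).reshape(chi_left * M, -1)
    gammas.append(rest.reshape(chi_left, M) / lam_left[:, None])
    return gammas, lambdas

# Example: Schmidt values across the middle bond of a 4-qubit GHZ state
psi = np.zeros(16); psi[0] = psi[15] = 1 / np.sqrt(2)
_, lams = vidal_decomposition(psi, N=4)
print(lams[1])   # two Schmidt coefficients, each 1/sqrt(2)
```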
Rationale
Now, looking at D, instead of the initial $M^N$ terms there are about $N(M\chi^{2}+\chi)$ parameters. Apparently this is just a fancy way of rewriting the coefficients $c_{i_1 i_2 \dots i_N}$, but in fact there is more to it than that. Assuming that N is even, the Schmidt rank for a bipartite cut in the middle of the chain can have a maximal value of $M^{N/2}$; in this case we end up with at least $M^{N}$ coefficients for the $\Gamma$ tensor at the middle of the chain alone, slightly more than the initial $M^{N}$! The truth is that the decomposition D is useful when dealing with systems that exhibit a low degree of entanglement, which fortunately is the case with many 1D systems, where the Schmidt coefficients of the ground state decay roughly exponentially with $\alpha$:

$$\lambda_{\alpha}^{2} \sim e^{-c\,\alpha}.$$

Therefore, it is possible to take into account only some of the Schmidt coefficients (namely the largest ones), dropping the others and consequently normalizing again the state:

$$|\Psi\rangle \rightarrow \frac{1}{\sqrt{\sum_{\alpha=1}^{\chi_c}\lambda_{\alpha}^{2}}}\sum_{\alpha=1}^{\chi_c}\lambda_{\alpha}\,|\Phi^{[1\dots k]}_{\alpha}\rangle|\Phi^{[k+1\dots N]}_{\alpha}\rangle,$$

where $\chi_c$ is the number of kept Schmidt coefficients.

Let's get away from this abstract picture and refresh ourselves with a concrete example, to emphasize the advantage of making this decomposition. Consider for instance the case of 50 fermions in a ferromagnetic chain, for the sake of simplicity. A dimension of 12, let's say, for the $\chi_c$ would be a reasonable choice, keeping the discarded eigenvalues at a negligible percentage of the total, as shown by numerical studies, meaning roughly $N M \chi_c^{2} = 50\cdot 2\cdot 12^{2} \approx 1.4\times 10^{4}$ coefficients, as compared to the original $2^{50} \approx 10^{15}$ ones.

Even if the Schmidt eigenvalues don't have this exponential decay, but show an algebraic decrease, we can still use D to describe our state $|\Psi\rangle$. The number of coefficients to account for a faithful description of $|\Psi\rangle$ may be sensibly larger, but still within reach of eventual numerical simulations.
The update of the decomposition
One can proceed now to investigate the behaviour of the decomposition D when acted upon with one-qubit gates (OQG) and two-qubit gates (TQG) acting on neighbouring qubits. Instead of updating all the coefficients $c_{i_1 i_2 \dots i_N}$, we will restrict ourselves to a number of operations that increases with $\chi$ as a polynomial of low degree, thus saving computational time.
One-qubit gates acting on qubit k
The OQGs affect only the qubit they act upon. The update of the state after a unitary operator U at qubit k does not modify the Schmidt eigenvalues or vectors on the left, consequently the $\Gamma^{[1]},\dots,\Gamma^{[k-1]}$'s, or on the right, hence the $\Gamma^{[k+1]},\dots,\Gamma^{[N]}$'s (nor any of the $\lambda$'s). The only tensors that will be updated are the $\Gamma^{[k]}$'s (requiring only at most $O(\chi^{2}M^{2})$ basic operations), as

$$\tilde{\Gamma}^{[k]\,j}_{\alpha_{k-1}\alpha_{k}} = \sum_{i} U^{j}_{\;i}\,\Gamma^{[k]\,i}_{\alpha_{k-1}\alpha_{k}}.$$
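In code, this single-site update is one tensor contraction. A minimal NumPy sketch, using the (illustrative) array layout Gamma[alpha_left, i, alpha_right]:

```python
import numpy as np

def apply_one_site_gate(gamma, u):
    """Update Gamma at site k under a one-qubit gate u (M x M).
    gamma has shape (chi_left, M, chi_right); the lambdas are untouched."""
    return np.einsum('ji,aib->ajb', u, gamma)

# Example: apply a Hadamard gate to a random site tensor
chi, M = 4, 2
gamma = np.random.rand(chi, M, chi)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
gamma = apply_one_site_gate(gamma, H)
```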
Two-qubit gates acting on qubits k, k+1
The changes required to update the $\Gamma$'s and the $\lambda$'s, following a unitary operation V on qubits k, k + 1, concern only $\Gamma^{[k]}$, $\lambda^{[k]}$ and $\Gamma^{[k+1]}$.
They consist of a number of basic operations.
Following Vidal's original approach, $|\Psi\rangle$ can be regarded as belonging to only four subsystems:

$$H = J \otimes H_{k} \otimes H_{k+1} \otimes K.$$

The subspace J is spanned by the eigenvectors of the reduced density matrix $\rho^{[1\dots k-1]}$:

$$\rho^{[1\dots k-1]} = \operatorname{Tr}_{k,k+1,\dots,N}|\Psi\rangle\langle\Psi| = \sum_{\alpha}\big(\lambda^{[k-1]}_{\alpha}\big)^{2}\,|\Phi^{[1\dots k-1]}_{\alpha}\rangle\langle\Phi^{[1\dots k-1]}_{\alpha}|.$$

In a similar way, the subspace K is spanned by the eigenvectors of the reduced density matrix:

$$\rho^{[k+2\dots N]} = \operatorname{Tr}_{1,\dots,k+1}|\Psi\rangle\langle\Psi| = \sum_{\gamma}\big(\lambda^{[k+1]}_{\gamma}\big)^{2}\,|\Phi^{[k+2\dots N]}_{\gamma}\rangle\langle\Phi^{[k+2\dots N]}_{\gamma}|.$$

The subspaces $H_{k}$ and $H_{k+1}$ belong to the qubits k and k + 1. Using this basis, with the shorthand $|\alpha\rangle_{J} \equiv |\Phi^{[1\dots k-1]}_{\alpha}\rangle$ and $|\gamma\rangle_{K} \equiv |\Phi^{[k+2\dots N]}_{\gamma}\rangle$, and the decomposition D, $|\Psi\rangle$ can be written as:

$$|\Psi\rangle = \sum_{\alpha,\beta,\gamma}\sum_{i,j}\lambda^{[k-1]}_{\alpha}\Gamma^{[k]\,i}_{\alpha\beta}\lambda^{[k]}_{\beta}\Gamma^{[k+1]\,j}_{\beta\gamma}\lambda^{[k+1]}_{\gamma}\,|\alpha\rangle_{J}|i\rangle|j\rangle|\gamma\rangle_{K}.$$

Using the same reasoning as for the OQG, when applying the TQG V to qubits k, k + 1 one needs only to update $\Gamma^{[k]}$, $\lambda^{[k]}$ and $\Gamma^{[k+1]}$. We can write the updated state $|\Psi'\rangle = V|\Psi\rangle$ as:

$$|\Psi'\rangle = \sum_{\alpha,\gamma}\sum_{i,j}\Theta^{ij}_{\alpha\gamma}\,|\alpha\rangle_{J}|i\rangle|j\rangle|\gamma\rangle_{K},$$

where

$$\Theta^{ij}_{\alpha\gamma} = \sum_{i',j'}V^{ij}_{i'j'}\sum_{\beta}\lambda^{[k-1]}_{\alpha}\Gamma^{[k]\,i'}_{\alpha\beta}\lambda^{[k]}_{\beta}\Gamma^{[k+1]\,j'}_{\beta\gamma}\lambda^{[k+1]}_{\gamma}.$$

To find out the new decomposition, the new $\lambda^{[k]}$'s at the bond k and their corresponding Schmidt eigenvectors must be computed and expressed in terms of the $\Theta$'s. The reduced density matrix of the subsystem $[k+1\dots N]$ is therefore diagonalized:

$$\big(\rho^{[k+1\dots N]}\big)_{(j\gamma),(j'\gamma')} = \sum_{\alpha,i}\Theta^{ij}_{\alpha\gamma}\big(\Theta^{ij'}_{\alpha\gamma'}\big)^{*}.$$

The square roots of its eigenvalues are the new $\tilde{\lambda}^{[k]}_{\beta}$'s. Expressing its eigenvectors $|\Phi^{[k+1\dots N]}_{\beta}\rangle$ in the basis $\{|j\rangle|\gamma\rangle_{K}\}$, the new $\tilde{\Gamma}^{[k+1]}$'s are obtained as well:

$$|\Phi^{[k+1\dots N]}_{\beta}\rangle = \sum_{j,\gamma}\tilde{\Gamma}^{[k+1]\,j}_{\beta\gamma}\lambda^{[k+1]}_{\gamma}\,|j\rangle|\gamma\rangle_{K}.$$

From the left-hand eigenvectors,

$$|\Phi^{[1\dots k]}_{\beta}\rangle = \frac{1}{\tilde{\lambda}^{[k]}_{\beta}}\sum_{\alpha,i}\Big(\sum_{j,\gamma}\Theta^{ij}_{\alpha\gamma}\big(\tilde{\Gamma}^{[k+1]\,j}_{\beta\gamma}\lambda^{[k+1]}_{\gamma}\big)^{*}\Big)|\alpha\rangle_{J}|i\rangle,$$

after expressing them in the basis $\{|\alpha\rangle_{J}|i\rangle\}$, the new $\tilde{\Gamma}^{[k]}$'s are:

$$\tilde{\Gamma}^{[k]\,i}_{\alpha\beta} = \frac{\langle\alpha,i|\Phi^{[1\dots k]}_{\beta}\rangle}{\lambda^{[k-1]}_{\alpha}}.$$
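The whole two-site update, including the truncation discussed earlier, can be sketched in the same illustrative NumPy conventions as above (in production code one would also guard against division by very small Schmidt coefficients):

```python
import numpy as np

def apply_two_site_gate(lam_lm1, g_k, lam_k, g_kp1, lam_kp1, v, chi_max):
    """Two-site TEBD update in Vidal form. Conventions (mine, illustrative):
    g_k[a,i,b], g_kp1[b,j,c]; v is the gate reshaped to (M, M, M, M).
    Returns the updated g_k, lam_k, g_kp1."""
    # Build Theta with all neighbouring lambdas absorbed
    theta = np.einsum('a,aib,b,bjc,c->aijc', lam_lm1, g_k, lam_k, g_kp1, lam_kp1)
    theta = np.einsum('ijkl,aklc->aijc', v, theta)        # apply the gate
    chi_l, M, _, chi_r = theta.shape
    u, s, vh = np.linalg.svd(theta.reshape(chi_l * M, M * chi_r),
                             full_matrices=False)
    # Truncate to the chi_max largest Schmidt coefficients and renormalize
    s = s[:chi_max]; u = u[:, :s.size]; vh = vh[:s.size, :]
    s = s / np.linalg.norm(s)
    # Restore Vidal form by dividing the outer lambdas back out
    g_k_new = u.reshape(chi_l, M, s.size) / lam_lm1[:, None, None]
    g_kp1_new = vh.reshape(s.size, M, chi_r) / lam_kp1[None, None, :]
    return g_k_new, s, g_kp1_new
```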
The computational cost
The dimension of the largest tensors in D is of the order $\chi^{2}M$; when constructing $\Theta$ one makes the summation over $\beta$, $i'$ and $j'$ for each of the $\chi^{2}M^{2}$ combinations of $\alpha$, $\gamma$, $i$ and $j$, adding up to a total of $O(\chi^{3}M^{4})$ basic operations. The same holds for the formation of the elements of the reduced density matrix, or for computing the left-hand eigenvectors, a maximum of $O(\chi^{3}M^{3})$, respectively $O(\chi^{3}M^{2})$, basic operations. In the case of qubits, $M = 2$, hence its role is not very relevant for the order of magnitude of the number of basic operations, but in the case when the on-site dimension is higher than two it has a rather decisive contribution.
The numerical simulation
The numerical simulation targets (possibly time-dependent) Hamiltonians of a system of particles arranged in a line, which are composed of arbitrary one-site and nearest-neighbour two-site terms:

$$H = \sum_{k=1}^{N} K^{[k]}_{1} + \sum_{k=1}^{N-1} K^{[k,k+1]}_{2}.$$

It is useful to decompose $H$ as a sum of two possibly non-commuting terms, $H = F + G$, where

$$F \equiv \sum_{\text{odd }k} F^{[k]}, \qquad G \equiv \sum_{\text{even }k} G^{[k]}.$$

Any two same-parity two-body terms commute: $[F^{[k]}, F^{[k']}] = 0$ and $[G^{[k]}, G^{[k']}] = 0$.
This is done to make the Suzuki–Trotter expansion (ST) of the exponential operator, named after Masuo Suzuki and Hale Trotter.
The Suzuki–Trotter expansion
The Suzuki–Trotter expansion of the first order (ST1) represents a general way of writing exponential operators:

$$e^{(A+B)\delta} = e^{A\delta}\,e^{B\delta} + O(\delta^{2})$$

or, equivalently

$$e^{A+B} = \lim_{n\to\infty}\big(e^{A/n}\,e^{B/n}\big)^{n}.$$

The correction term vanishes in the limit $\delta \to 0$.
For simulations of quantum dynamics it is useful to use operators that are unitary, conserving the norm (unlike power series expansions), and that is where the Suzuki–Trotter expansion comes in. In problems of quantum dynamics the unitarity of the operators in the ST expansion proves quite practical, since the error tends to concentrate in the overall phase, thus allowing us to faithfully compute expectation values and conserved quantities. Because the ST conserves the phase-space volume, it is also called a symplectic integrator.
The trick of the ST2 is to write the unitary operators $e^{-iH\delta}$ as:

$$e^{-iH\delta} = e^{-iF\frac{\delta}{2}}\,e^{-iG\delta}\,e^{-iF\frac{\delta}{2}} + O(\delta^{3}),$$

where $\delta = \frac{T}{n}$. The number $n$ is called the Trotter number.
Simulation of the time-evolution
The operators $e^{-iF\delta}$, $e^{-iG\delta}$ are easy to express, as:

$$e^{-iF\delta} = \prod_{\text{odd }k} e^{-iF^{[k]}\delta}, \qquad e^{-iG\delta} = \prod_{\text{even }k} e^{-iG^{[k]}\delta},$$

since any two operators $F^{[k]}$, $F^{[k']}$ (respectively, $G^{[k]}$, $G^{[k']}$) commute for $k \neq k'$, and an ST expansion of the first order keeps only the product of the exponentials, the approximation becoming, in this case, exact.

The time-evolution can be made according to

$$|\Psi(t+\delta)\rangle = e^{-iF\frac{\delta}{2}}\,e^{-iG\delta}\,e^{-iF\frac{\delta}{2}}\,|\Psi(t)\rangle.$$

For each "time-step" $\delta$, two-qubit gates are applied successively to all odd bonds, then to the even ones, and again to the odd ones; this is basically a sequence of TQG's, and it has been explained above how to update the decomposition D when applying them. Our goal is to make the time evolution of a state $|\Psi(0)\rangle$ for a time T, towards the state $|\Psi(T)\rangle = e^{-iHT}|\Psi(0)\rangle$, using the n-particle Hamiltonian H.
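A sketch of one second-order time-step in Python; scipy's expm builds the bond gates, while state.apply_two_site_gate is an assumed method of some MPS container (for example, one wrapping the two-site update sketched earlier):

```python
import numpy as np
from scipy.linalg import expm

def trotter_gates(h_bond, delta):
    """Half-step and full-step gates for a uniform nearest-neighbour term
    h_bond (an M^2 x M^2 Hermitian matrix). Illustrative sketch."""
    return expm(-1j * h_bond * delta / 2), expm(-1j * h_bond * delta)

def tebd_step(state, h_bond, delta, n_sites):
    """One ST2 step: odd bonds (half step), even bonds, odd bonds again."""
    half, full = trotter_gates(h_bond, delta)
    odd = range(0, n_sites - 1, 2)
    even = range(1, n_sites - 1, 2)
    for bonds, gate in ((odd, half), (even, full), (odd, half)):
        for k in bonds:
            state.apply_two_site_gate(k, gate)   # assumed MPS-object method
    return state
```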
It is rather troublesome, if at all possible, to construct the decomposition for an arbitrary n-particle state, since this would mean one has to compute the Schmidt decomposition at each bond, to arrange the Schmidt eigenvalues in decreasing order and to choose the first and the appropriate Schmidt eigenvectors. Mind this would imply diagonalizing somewhat generous reduced density matrices, which, depending on the system one has to simulate, might be a task beyond our reach and patience.
Instead, one can try the following: start from a state whose decomposition D is known (for example, a product state) and bring it to the desired initial state by a simulated evolution, such as the imaginary-time or isentropic interpolation schemes mentioned above.
Error sources
The errors in the simulation result from the Suzuki–Trotter approximation and from the truncation of the Hilbert space involved.
Errors coming from the Suzuki–Trotter expansion
In the case of a Trotter approximation of $p$-th order, the error is of order $\delta^{p+1}$ per time-step. Taking into account $n = \frac{T}{\delta}$ steps, the error after the time T is:

$$\epsilon \sim \frac{T}{\delta}\,\delta^{p+1} = T\,\delta^{p}.$$

The unapproximated state is:

$$|\tilde{\Psi}\rangle = \sqrt{1-\epsilon}\,|\Psi_{Tr}\rangle + \sqrt{\epsilon}\,|\Psi^{\perp}_{Tr}\rangle,$$

where $|\Psi_{Tr}\rangle$ is the state kept after the Trotter expansion and $|\Psi^{\perp}_{Tr}\rangle$ accounts for the part that is neglected when doing the expansion.

The total error scales with time as:

$$\epsilon(T) \sim T\,\delta^{p}.$$
The Trotter error is independent of the dimension of the chain.
Errors coming from the truncation of the Hilbert space
The errors arising from the truncation of the Hilbert space, as embodied in the decomposition D, are twofold.
First, as we have seen above, the smallest contributions to the Schmidt spectrum are left away, the state being faithfully represented up to:

$$\epsilon_{k} = \sum_{\alpha > \chi_c}\big(\lambda^{[k]}_{\alpha}\big)^{2},$$

where $\epsilon_{k}$ is the sum of all the discarded eigenvalues of the reduced density matrix, at the bond $k$.
The state $|\Psi\rangle$ is, at a given bond $k$, described by the Schmidt decomposition:

$$|\Psi\rangle = \sqrt{1-\epsilon_{k}}\,|\Psi_{kept}\rangle + \sqrt{\epsilon_{k}}\,|\Psi_{disc}\rangle,$$

where

$$|\Psi_{kept}\rangle = \frac{1}{\sqrt{1-\epsilon_{k}}}\sum_{\alpha=1}^{\chi_c}\lambda^{[k]}_{\alpha}\,|\Phi^{[1\dots k]}_{\alpha}\rangle|\Phi^{[k+1\dots N]}_{\alpha}\rangle$$

is the state kept after the truncation and

$$|\Psi_{disc}\rangle = \frac{1}{\sqrt{\epsilon_{k}}}\sum_{\alpha>\chi_c}\lambda^{[k]}_{\alpha}\,|\Phi^{[1\dots k]}_{\alpha}\rangle|\Phi^{[k+1\dots N]}_{\alpha}\rangle$$

is the state formed by the eigenfunctions corresponding to the smallest, irrelevant Schmidt coefficients, which are neglected. Now, $\langle\Psi_{kept}|\Psi_{disc}\rangle = 0$, because they are spanned by vectors corresponding to orthogonal spaces. Using the same argument as for the Trotter expansion, the error after the truncation is:

$$\epsilon(k) = 1 - |\langle\Psi|\Psi_{kept}\rangle|^{2} = \epsilon_{k}.$$

After moving to the next bond, the state is, similarly:

$$|\Psi\rangle = \sqrt{1-\epsilon_{k+1}}\,|\Psi'_{kept}\rangle + \sqrt{\epsilon_{k+1}}\,|\Psi'_{disc}\rangle.$$

The error, after the second truncation, is bounded by:

$$\epsilon(k+1) \leq \epsilon_{k} + \epsilon_{k+1},$$

and so on, as we move from bond to bond.
The second error source enfolded in the decomposition is more subtle and requires a little bit of calculation.
As we calculated before, the normalization constant after making the truncation at bond $k$ is:

$$R = \sum_{\alpha=1}^{\chi_c}\big(\lambda^{[k]}_{\alpha}\big)^{2} = 1-\epsilon_{k}.$$

Now let us go to the bond $k+1$ and calculate the norm of the right-hand Schmidt vectors; taking into account the full Schmidt dimension the norm is one, whereas taking into account only the truncated space the norm falls short of one by the weight of the discarded coefficients. Taking the difference of the two norms, one finds that, when constructing the reduced density matrix at the next bond, the trace of the matrix is multiplied by the factor:

$$1-\epsilon_{k}.$$
The total truncation error
The total truncation error, considering both sources, is upper bounded by the accumulated discarded weight:

$$\epsilon(T) \lesssim \sum_{\text{time-steps}}\;\sum_{k}\epsilon_{k}.$$
When using the Trotter expansion, we do not move from bond to bond, but between bonds of the same parity; moreover, for the ST2, we make one sweep of the even bonds and two of the odd ones. But nevertheless, the calculation presented above still holds. The error is evaluated by successively multiplying with the normalization constant, each time we build the reduced density matrix and select its relevant eigenvalues.
"Adaptive" Schmidt dimension
One thing that can save a lot of computational time without loss of accuracy is to use a different Schmidt dimension for each bond instead of a fixed one for all bonds, keeping only the necessary amount of relevant coefficients, as usual. For example, taking the first bond, in the case of qubits, the Schmidt dimension is just two. Hence, at the first bond, instead of futilely diagonalizing, let us say, 10 by 10 or 20 by 20 matrices, we can just restrict ourselves to ordinary 2 by 2 ones, thus making the algorithm generally faster. What we can do instead is set a threshold for the eigenvalues of the SD, keeping only those that are above the threshold.
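A hedged sketch of such a threshold rule in NumPy (the function name and the default threshold are illustrative):

```python
import numpy as np

def adaptive_chi(singular_values, threshold=1e-8, chi_max=None):
    """Keep only Schmidt coefficients whose squared weight exceeds `threshold`,
    optionally capped at chi_max; return the truncated, renormalized values."""
    s = np.asarray(singular_values)
    s = s[(s ** 2) > threshold]
    if chi_max is not None:
        s = s[:chi_max]               # values arrive sorted in decreasing order
    return s / np.linalg.norm(s)

print(adaptive_chi([0.9, 0.43, 1e-6, 1e-9]))   # the two tiny values are dropped
```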
TEBD also offers the possibility of straightforward parallelization due to the factorization of the exponential time-evolution operator using the Suzuki–Trotter expansion. A parallel-TEBD has the same mathematics as its non-parallelized counterpart, the only difference is in the numerical implementation.
References
Quantum mechanics
Computational physics | Time-evolving block decimation | [
"Physics"
] | 3,937 | [
"Theoretical physics",
"Quantum mechanics",
"Computational physics"
] |
5,979,294 | https://en.wikipedia.org/wiki/Reversed-phase%20chromatography | Reversed-phase liquid chromatography (RP-LC) is a mode of liquid chromatography in which non-polar stationary phase and polar mobile phases are used for the separation of organic compounds. The vast majority of separations and analyses using high-performance liquid chromatography (HPLC) in recent years are done using the reversed phase mode. In the reversed phase mode, the sample components are retained in the system the more hydrophobic they are.
The factors affecting the retention and separation of solutes in the reversed phase chromatographic system are as follows:
a. The chemical nature of the stationary phase, i.e., the ligands bonded on its surface, as well as their bonding density, namely the extent of their coverage.
b. The composition of the mobile phase: the type of bulk solvents, whose mixture sets the polarity of the mobile phase (hence the name "modifier" for a solvent added to adjust that polarity).
c. Additives, such as buffers, which affect the pH of the mobile phase and thereby the ionization state of the solutes and their polarity.
In order to retain the organic components in mixtures, the stationary phases, packed within columns, consist of hydrophobic substrates bonded to the surface of porous silica-gel particles in various geometries (spherical, irregular), at different diameters (sub-2, 3, 5, 7, 10 µm), with varying pore diameters (60, 100, 150, 300 Å). The particle's surface is covered by chemically bonded hydrocarbons, such as C3, C4, C8, C18 and more. The longer the hydrocarbon associated with the stationary phase, the longer the sample components will be retained. Some stationary phases are also made of hydrophobic polymeric particles, or hybridized silica-organic group particles, for methods in which mobile phases at extreme pH are used. Most current methods of separation of biomedical materials use C-18 columns, sometimes called by trade names, such as ODS (octadecylsilane) or RP-18.
The mobile phases are mixtures of water and polar organic solvents, the vast majority of which are methanol and acetonitrile. These mixtures usually contain various additives such as buffers (acetate, phosphate, citrate), surfactants (alkyl amines or alkyl sulfonates) and special additives (EDTA). The goal of using supplements of one kind or another is to increase efficiency, selectivity, and control solute retention.
Stationary phases
The history and evolution of reversed phase stationary phases in described in detail in an article by Majors, Dolan, Carr and Snyder.
In the 1970s, most liquid chromatography runs were performed using solid particles as the stationary phases, made of unmodified silica gel or alumina. This type of technique is now referred to as normal-phase chromatography. Since the stationary phase is hydrophilic in this technique, and the mobile phase is non-polar (consisting of organic solvents such as hexane and heptane), biomolecules with hydrophilic properties in the sample adsorb strongly to the stationary phase. Moreover, they did not dissolve easily in the mobile phase solvents. At the same time, hydrophobic molecules experience less affinity to the polar stationary phase and elute through it early, with insufficient retention. These are the reasons why, during the 1970s, silica-based particles were treated with hydrocarbons, immobilized or bonded on their surface, and the mobile phases were switched to aqueous and polar ones, to accommodate biomedical substances.
The use of a hydrophobic stationary phase and polar mobile phases is essentially the reverse of normal phase chromatography, since the polarity of the mobile and stationary phases have been inverted – hence the term reversed-phase chromatography. As a result, hydrophobic molecules in the polar mobile phase tend to adsorb to the hydrophobic stationary phase, and hydrophilic molecules in the sample pass through the column and are eluted first. Hydrophobic molecules can be eluted from the column by decreasing the polarity of the mobile phase using an organic (non-polar) solvent, which reduces hydrophobic interactions. The more hydrophobic the molecule, the more strongly it will bind to the stationary phase, and the higher the concentration of organic solvent that will be required to elute the molecule.
Many of the mathematical parameters of the theory of chromatography and experimental considerations used in other chromatographic methods apply to RP-LC as well (for example, the selectivity factor, chromatographic resolution, and plate count). It can be used for the separation of a wide variety of molecules. It is typically used for separation of proteins, because the organic solvents used in normal-phase chromatography can denature many proteins.
Today, RP-LC is a frequently used analytical technique. There are huge variety of stationary phases available for use in RP-LC, allowing great flexibility in the development of the separation methods.
Silica-based stationary phases
Silica gel particles are commonly used as a stationary phase in high-performance liquid chromatography (HPLC) for several reasons, including:
High surface area: Silica gel particles have a high surface area, allowing direct interactions with solutes or after bonding of variety of ligands for versatile interactions with the sample molecules, leading to better separations.
Chemical and thermal stability and inertness: Silica gel is chemically stable, as it usually does not react with either the solvents of the mobile phase nor the compounds being separated, resulting in accurate, repeatable and reliable analyses.
Wide applicability: Silica gel is versatile and can be modified with various functional groups, making it suitable for a wide range of analytes and applications.
Efficient separation: The unique properties of silica gel particles, combined with their high surface area and controlled average particle diameter and pore size, facilitate efficient and precise separation of compounds in HPLC.
Reproducibility: Silica gel particles can offer high batch-to-batch reproducibility, which is crucial for consistent and reliable HPLC analyses throughout decades.
Particle diameter and pore size control: Silica gel can be engineered to have specific pore sizes, enabling precise control over separation based on molecular size.
Cost-effectiveness: Silica is one of the most abundant materials in the Earth's crust, hence its gel is a cost-effective choice for HPLC applications, making it widely adopted in laboratories.
The United States Pharmacopoeia (USP) has classified HPLC columns by L# types. The most popular column in this classification is an octadecyl carbon chain (C18)-bonded silica (USP classification L1). This is followed by C8-bonded silica (L7), pure silica (L3), cyano-bonded silica (CN) (L10) and phenyl-bonded silica (L11). Note that C18, C8 and phenyl are dedicated reversed-phase stationary phases, while CN columns can be used in a reversed-phase mode depending on analyte and mobile phase conditions. Not all C18 columns have identical retention properties. Surface functionalization of silica can be performed in a monomeric or a polymeric reaction with different short-chain organosilanes used in a second step to cover remaining silanol groups (end-capping). While the overall retention mechanism remains the same, subtle differences in the surface chemistries of different stationary phases will lead to changes in selectivity.
Modern columns have different polarity depending on the ligand bonded to the stationary phase. PFP is pentafluorophenyl. CN is cyano. NH2 is amino. ODS is octadecyl or C18. ODCN is a mixed-mode column consisting of C18 and nitrile.
Recent developments in chromatographic supports and instrumentation for liquid chromatography (LC) facilitate rapid and highly efficient separations, using various stationary phases geometries. Various analytical strategies have been proposed, such as the use of silica-based monolithic supports, elevated mobile phase temperatures, and columns packed with sub-3 μm superficially porous particles (fused or solid core) or with sub-2 μm fully porous particles for use in ultra-high-pressure LC systems (UHPLC).
Mobile phases
A comprehensive article on the modern trends and best practices of mobile phase selection in reversed-phase chromatography was published by Boyes and Dong. A mobile phase in reversed-phase chromatography consists of mixtures of water or aqueous buffers, to which organic solvents are added, to elute analytes from a reversed-phase column in a selective manner. The added organic solvents must be miscible with water, and the two most common organic solvents used are acetonitrile and methanol. Other solvents can also be used, such as ethanol or 2-propanol (isopropyl alcohol) and tetrahydrofuran (THF). The organic solvent is also called a modifier, since it is added to the aqueous solution in the mobile phase in order to modify the polarity of the mobile phase. Water is the most polar solvent in the reversed-phase mobile phase; therefore, lowering the polarity of the mobile phase by adding modifiers enhances its elution strength. The two most widely used organic modifiers are acetonitrile and methanol, although acetonitrile is the more popular choice. Isopropanol (2-propanol) can also be used, because of its strong eluting properties, but its use is limited by its high viscosity, which results in higher backpressures. Both acetonitrile and methanol are less viscous than isopropanol, although a 50:50 mixture of methanol and water is also very viscous and causes high backpressures.
All three solvents are essentially UV transparent. This is a crucial property for common reversed phase chromatography since sample components are typically detected by UV detectors. Acetonitrile is more transparent than the others in low UV wavelengths range, therefore it is used almost exclusively when separating molecules with weak or no chromophores (UV-VIS absorbing groups), such as peptides. Most peptides only absorb at low wavelengths in the ultra-violet spectrum (typically less than 225 nm) and acetonitrile provides much lower background absorbance at low wavelengths than the other common solvents.
The pH of the mobile phase can have an important role on the retention of an analyte and can change the selectivity of certain analytes. For samples containing solutes with ionized functional groups, such as amines, carboxyls, phosphates, phosphonates, sulfates, and sulfonates, the ionization of these groups can be controlled using mobile phase buffers.
For example, carboxylic groups in solutes become increasingly negatively charged as the pH of the mobile phase rises above their pKa; the whole molecule then becomes more polar and less retained on the non-polar stationary phase. In this case, raising the pH of the mobile phase above 4–5 (the typical pKa range for carboxylic groups) increases their ionization, hence decreases their retention. Conversely, using a mobile phase at a pH lower than 4 will increase their retention, because it decreases their degree of ionization, rendering them less polar.
The same considerations apply to substances containing basic functional groups, such as amines, whose pKa values are typically around 8 and above: they are retained more as the pH of the mobile phase increases toward 8 and above, because they become less ionized, hence less polar. However, most traditional silica gel-based reversed-phase columns are generally limited to mobile phases at pH 8 and below; therefore, control over the retention of amines in this range is limited.
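The ionization fractions behind these retention arguments follow from the Henderson-Hasselbalch relation; a small illustrative Python helper (the pKa value in the example is a typical textbook figure):

```python
def fraction_ionized(ph: float, pka: float, acid: bool = True) -> float:
    """Fraction of molecules in the ionized form, from the Henderson-Hasselbalch
    relation. acid=True for acids (carboxyls, ionized above their pKa);
    acid=False for bases (amines, ionized below the pKa of the conjugate acid)."""
    ratio = 10 ** (ph - pka)          # [A-]/[HA] for an acid
    return ratio / (1 + ratio) if acid else 1 / (1 + ratio)

# A carboxylic acid (pKa ~ 4.8) at two mobile-phase pH values:
print(f"{fraction_ionized(3.0, 4.8):.1%}")   # ~1.6% ionized -> well retained
print(f"{fraction_ionized(7.0, 4.8):.1%}")   # ~99.4% ionized -> poorly retained
```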
The choice of buffer type is an important factor in RP-LC method development, as it can affect the retention, selectivity, and resolution of the analytes of interest. When selecting a buffer for RP-HPLC, there are a number of factors to consider, including:
The desired pH of the mobile phase: Buffers are most effective around their pKa value, so it is important to choose a buffer with a pKa that is close to the desired mobile phase pH needed.
The solubility of the buffer in the organic solvent: The buffer must be compatible with the organic solvent that is being used in the mobile phase, mostly with the common organic solvents mentioned above, acetonitrile, methanol, and isopropanol.
The UV cut-off of the buffer: In case of UV detection, the buffer should have a UV absorption that is below the detection wavelength of the analytes of interest. This will prevent the buffer from interfering with the detection of that analytes.
The compatibility of the buffer with the detector: If mass spectrometry (MS) is being used for detection, the buffer must be compatible with the mass spectrometry (MS) instrument. Some buffers, such as those containing phosphate salts, cannot be used with the MS detectors, as they are not volatile as needed, and they interfere with the MS detection by suppressing the analytes ionization, making them undetected by MS.
Some of the most common buffers used in RP-HPLC include:
Phosphate buffers: Phosphate buffers are versatile and can be used to achieve a wide range of pH values, thanks to their three pKa values. They also have a very low UV background for UV detection. However, they are not appropriate for MS detection.
Acetate buffers: Acetate buffers are also versatile and can be used to achieve the range of pH values typically used in RP-LC. They are less favorable for UV detection below 220 nm. Ammonium acetate is compatible with MS.
Formate buffers: Formate buffers are similar to acetate buffers in terms of the usable pH range and the limited UV detection below 225 nm. Ammonium formate is likewise compatible with MS.
Ammonium buffers: Ammonium buffers are volatile and are often used in LC-MS methods. They also are limited for low UV detection.
Charged analytes can be separated on a reversed-phase column by the use of ion-pairing (also called ion-interaction). This technique is known as reversed-phase ion-pairing chromatography.
Elution can be performed isocratically (the water-solvent composition does not change during the separation process) or by using a solution gradient (the water-solvent composition changes during the separation process, usually by decreasing the polarity).
See also
Aqueous normal-phase chromatography
References
External links
Tables summarizing different types of reverse phases, and information on the functionalization process
Chromatography | Reversed-phase chromatography | [
"Chemistry"
] | 3,180 | [
"Chromatography",
"Separation processes"
] |
5,979,440 | https://en.wikipedia.org/wiki/Gas%20burner | A gas burner is a device that produces a non-controlled flame by mixing a fuel gas such as acetylene, natural gas, or propane with an oxidizer such as the ambient air or supplied oxygen, and allowing for ignition and combustion.
The flame is generally used for the heat, infrared radiation, or visible light it produces. Some burners, such as gas flares, dispose of unwanted or uncontainable flammable gases. Some burners are operated to produce carbon black.
The gas burner has many applications such as soldering, brazing, and welding, the latter using oxygen instead of air for producing a hotter flame, which is required for melting steel. Chemistry laboratories use natural-gas fueled Bunsen burners. In domestic and commercial settings gas burners are commonly used in gas stoves and cooktops. For melting metals with melting points of up to 1100 °C (such as copper, silver, and gold), a propane burner with a natural draft of air can be used. For higher temperatures, acetylene is commonly used in combination with oxygen.
Flame temperatures of common gases and fuels
The above data is given with the following assumptions:
The flame is adiabatic
The surrounding air is at 20 °C, 1 bar (atm)
Complete, stoichiometric combustion (no soot; a more blue-like flame is the key indicator)
Peak temperature
The following notes are not assumptions and need more clarification:
Speed of combustion (has no effect on peak temperature, but determines how much energy is released per second, as the flame is adiabatic, compared to a normal flame)
Spectral bands also affect the colour of the flame, depending on which species take part in the combustion
Blackbody radiation (colour appearance arising from heat alone)
Atmosphere: affects the temperature and colour of the flame due to the atmospheric colour effect
Flammability limits and ignition temperatures of common gases
(Atmosphere is air at 20 degrees Celsius.)
Combustion values of common gases
References
Pocket Guide to Fire and Arson Investigation, second edition, FM Global, Table 1, 2, and 3
Gas burner at the Encyclopædia Britannica
Burners
Fuel gas
Tools
Welding
Methane
Acetylene
Propane
Butane | Gas burner | [
"Chemistry",
"Engineering"
] | 446 | [
"Greenhouse gases",
"Methane",
"Mechanical engineering",
"Welding"
] |
14,162,612 | https://en.wikipedia.org/wiki/Chloramination | Chloramination is the treatment of drinking water with a chloramine disinfectant. Both chlorine and small amounts of ammonia are added to the water one at a time which react together to form chloramine (also called combined chlorine), a long lasting disinfectant. Chloramine disinfection is used in both small and large water treatment plants.
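As a rough illustration of dosing, utilities often quote a chlorine-to-ammonia-nitrogen weight ratio in the neighbourhood of 4:1 to 5:1 for monochloramine formation; the Python helper below is a sketch built on that assumption, not a design formula:

```python
def chlorine_dose_mg_per_l(ammonia_n_mg_per_l: float, ratio: float = 4.5) -> float:
    """Chlorine dose for a target ammonia-nitrogen level, using a typical
    Cl2:NH3-N weight ratio (~4:1 to 5:1 is commonly quoted; the default
    of 4.5 here is an assumption for illustration)."""
    return ammonia_n_mg_per_l * ratio

# Example: 0.5 mg/L ammonia-N at a 4.5:1 ratio -> 2.25 mg/L chlorine
print(chlorine_dose_mg_per_l(0.5))
```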
Use
In the United States, the maintenance of what is called a "residual" of disinfectant that stays in the water distribution system while it is delivered to people's homes is required by the Environmental Protection Agency (EPA).
The EPA regulations give two choices for disinfectant residual — chlorine or chloramine. Many major water agencies are changing to chloramine to better meet current and anticipated federal drinking water regulations and to protect the public health.
Chlorine versus chloramine
There are many similarities between chlorine and chloramine. Both provide effective residual disinfection with minimal risk to public health. Both are toxic to fish and amphibians. Both chlorine and chloramine react with other compounds in the water to form what are called "disinfection byproducts".
The difference is that chlorine forms many byproducts, including trihalomethanes (THM) and haloacetic acids (HAA), whereas chloramine forms a significantly lower amount of THMs and HAAs but also forms N-nitrosodimethylamine (NDMA). One of the principal benefits of chloramine is that its use reduces the overall levels of these regulated contaminants compared to chlorine.
Adverse effects
Chloramine is toxic to fish and amphibians. Chloramine, like chlorine, comes in direct contact with their bloodstream through fish gills and must be removed from water added to aquariums and fish ponds. It must also be removed from water prior to use in dialysis machines, since water comes into direct contact with the bloodstream during treatment. Since the 1980s, most dialysis machines are built with filters to remove chloramines.
Chloramine is generally considered a problem in brewing beer; like chlorine it can react with and change some of the natural plant flavors that make up the beer, and it may slow or alter the yeast. Because chloramine dissipates much more slowly than chlorine from water, beer-makers prefer carbon filtration and / or Campden tablets to neutralize it in the water.
People have no trouble digesting chlorine or chloramine at the levels found in public drinking water; this water is not introduced directly into the human bloodstream. In the United States, the United States Environmental Protection Agency set minimum and maximum health-based safe levels for chloramine in drinking water. Elsewhere, similar oversight agencies may set drinking water quality standards for chloramine.
Two home builders filed lawsuits against Moulton Niguel Water District (in Orange County, CA) in 2012, arguing that pinhole leaks in the copper water piping in their homes were due to faulty water treatment with chloramine. Pinhole leaks cause expensive damage to people's homes, and the builders claimed that they must repipe houses at great expense to deal with the problem. Officials observed that only the two builders had filed suit, but as of late 2013 the number of lawsuits had expanded.
Nitrogenous disinfection by-products are liable to convert to nitrosamines by the action of chlorination and chloramination. Other NDBPs include halonitroalkanes, halonitriles, and haloamides.
Removing monochloramine from water
Chloramines should be removed from water for dialysis, aquariums, hydroponic applications, and homebrewing beer. Chloramine must be removed from water prior to use in kidney dialysis machines because it can cause hemolytic anemia if it enters the bloodstream. In hydroponic applications, chloramine stunts the growth of plants.
When a chemical or biological process that changes the chemistry of chloramines is used, it falls under reductive dechlorination. Other techniques use physical—not chemical—methods for removing chloramines.
Ultraviolet light
The use of ultraviolet light for chlorine or chloramine removal is an established technology that has been widely accepted in pharmaceutical, beverage, and dialysis applications. UV is also used for disinfection at aquatic facilities.
Ascorbic acid and sodium ascorbate
Ascorbic acid (vitamin C) and sodium ascorbate completely neutralize both chlorine and chloramine, but degrade in a day or two, which makes them usable only for short-term applications. The SFPUC determined that 1000 mg of vitamin C in tablet form, crushed and mixed into bath water, completely removes chloramine from a medium-size bathtub without significantly depressing pH.
Activated carbon
Activated carbon has been used for chloramine removal long before catalytic carbon, a form of activated carbon, became available; standard activated carbon requires a very long contact time, which means a large volume of carbon is needed. For thorough removal, up to four times the contact time of catalytic carbon may be required.
Most dialysis units now depend on granular activated carbon (GAC) filters, two of which should be placed in series so that chloramine breakthrough can be detected after the first one, before the second one fails. Additionally, sodium metabisulfite injection may be used in certain circumstances.
Campden tablets
Home brewers use reducing agents such as sodium metabisulfite or potassium metabisulfite (both sold commercially as Campden tablets) to remove chloramine from brewing water for fermented beverages. However, residual sodium can cause off-flavors in beer, so potassium metabisulfite is preferred.
Sodium thiosulfate
Sodium thiosulfate is used to dechlorinate tapwater for aquariums or treat effluent from wastewater treatments prior to release into rivers. The reduction reaction is analogous to the iodine reduction reaction. Treatment of tapwater requires between 0.1 and 0.3 grams of pentahydrated (crystalline) sodium thiosulfate per 10 L of water. Many animals are sensitive to chloramine, and it must be removed from water given to many animals in zoos.
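As a worked example of the dosing figures above, here is a small Python sketch that scales the quoted 0.1 to 0.3 g per 10 L range to an arbitrary volume; the function name and the choice of a mid-range default are illustrative assumptions, not official dosing guidance.

```python
def thiosulfate_dose_grams(volume_litres, grams_per_10L=0.2):
    """Scale the quoted dose of pentahydrated sodium thiosulfate.

    The article quotes 0.1-0.3 g per 10 L of tapwater; 0.2 g per 10 L
    is an assumed mid-range default, not an official recommendation.
    """
    if not 0.1 <= grams_per_10L <= 0.3:
        raise ValueError("dose outside the quoted 0.1-0.3 g per 10 L range")
    return volume_litres / 10.0 * grams_per_10L

# Example: a 200 L aquarium at the mid-range dose.
print(f"{thiosulfate_dose_grams(200):.1f} g")  # -> 4.0 g
```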
Other methods
Chloramine, like chlorine, can be removed by boiling and aging. However, the time required to remove chloramine is much longer than that for chlorine. The time required to remove half of the chloramine (its half-life) from 10 gallons of water by boiling is 26.6 minutes, whereas the half-life of free chlorine in boiling 10 gallons of water is only 1.8 minutes. Aging may take weeks to remove chloramines, whereas chlorine disappears in a few days.
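The half-lives quoted above imply roughly exponential removal. The following illustrative Python sketch computes the fraction of disinfectant remaining after a given boiling time under that assumption:

```python
def fraction_remaining(minutes, half_life_minutes):
    # Exponential decay: each half-life halves the remaining amount.
    return 0.5 ** (minutes / half_life_minutes)

# Using the half-lives quoted above for boiling 10 gallons of water:
for name, t_half in (("chloramine", 26.6), ("free chlorine", 1.8)):
    left = fraction_remaining(60, t_half)
    print(f"{name}: {left:.1%} left after 60 minutes of boiling")
```

Under this model, an hour of boiling still leaves about a fifth of the chloramine but essentially none of the free chlorine, consistent with the large difference in half-lives stated above.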
References
External links
Chloramines in Drinking Water at EPA
Citizens Concerned About Chloramine (CCAC)
Water treatment
Chlorine | Chloramination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,460 | [
"Water treatment",
"Environmental engineering",
"Water technology",
"Water pollution"
] |
14,167,920 | https://en.wikipedia.org/wiki/GABRB3 | Gamma-aminobutyric acid receptor subunit beta-3 is a protein that in humans is encoded by the GABRB3 gene. It is located within the 15q12 region in the human genome and spans 250kb. This gene includes 10 exons within its coding region. Due to alternative splicing, the gene codes for many protein isoforms, all being subunits in the GABAA receptor, a ligand-gated ion channel. The beta-3 subunit is expressed at different levels within the cerebral cortex, hippocampus, cerebellum, thalamus, olivary body and piriform cortex of the brain at different points of development and maturity. GABRB3 deficiencies are implicated in many human neurodevelopmental disorders and syndromes such as Angelman syndrome, Prader-Willi syndrome, nonsyndromic orofacial clefts, epilepsy and autism. The effects of methaqualone and etomidate are mediated through GABRB3 positive allosteric modulation.
Gene
The GABRB3 gene is located on the long arm of chromosome 15, within the q12 region in the human genome. It is located in a gene cluster, with two other genes, GABRG3 and GABRA5. GABRB3 was the first gene to be mapped to this particular region. It spans approximately 250kb and includes 10 exons within its coding region, as well as two additional alternative first exons that encode for signaling peptides. Alternatively spliced transcript variants encoding isoforms with distinct signal peptides have been described. This gene is located within an imprinting region that spans the 15q11-13 region. Its sequence is considerably longer than the two other genes found within its gene cluster due to a large 150kb intron it carries. A pattern is observed in GABRB3 gene replication, in humans the maternal allele is replicated later than the paternal allele. The reasoning and implications of this pattern are unknown.
When comparing the human beta-3 subunit's genetic sequence with other vertebrate beta-3 subunit sequences, there is a high level of genetic conservation. In mice the Gabrb3 gene is located on chromosome 7 of its genome in a similar gene cluster style with some of the other subunits of the GABAA receptor.
Function
GABRB3 encodes a member of the ligand-gated ion channel family. The encoded protein is one of at least 13 distinct subunits of a multisubunit chloride channel that serves as the receptor for gamma-aminobutyric acid, the major inhibitory neurotransmitter of the nervous system. The two other genes in the gene cluster both encode for related subunits of the family. During development, when the GABRB3 subunit functions optimally, its role in the GABAA receptor allows for proliferation, migration, and differentiation of precursor cells that lead to the proper development of the brain. GABAA receptor function is inhibited by zinc ions. The ions bind allosterically to the receptor, a mechanism that is critically dependent on the receptor subunit composition.
De novo heterozygous missense mutations within a highly conserved region of the GABRB3 gene can decrease the peak current amplitudes of neurons or alter the kinetic properties of the channel. This results in the loss of the inhibitory properties of the receptor.
The mouse beta-3 subunit has a very similar function to the human version of the subunit.
Structure
The crystal structure of a human β3 homopentamer was published in 2014. The study of the crystal structure of the human β3 homopentamer revealed unique qualities that are only observed in eukaryotic cysteine-loop receptors. The characterization of the GABAA receptor and subunits helps with the mechanistic determination of mutations within the subunits and what direct effect the mutations may have on the protein and its interactions.
Expression
The expression of GABRB3 is not constant among all cells or at all stages of development. The distribution of expression of the GABAA receptor subunits (GABRB3 included) during development indicates that GABA may function as a neurotrophic factor, impacting neural differentiation, growth, and circuit organization. The expression of the beta-3 subunit reaches peak at different times in different locations of the brain, during development. The highest expression of Gabrb3 in mice, within the cerebral cortex and hippocampus are reached prenatally, while they are reached postnatally in the cerebellar cortex. After the highest peak of expression, Gabrb3 expression is down-regulated substantially in the thalamus and inferior olivary body of the mouse. By adulthood, the level of expression in the cerebral cortex and hippocampus drops below developmental expression levels, but the expression in the cerebellum does not change postnatally. The highest levels of Gabrb3 expression in the mature mouse brain occur in the Purkinje and granule cells of the cerebellum, the hippocampus, and the piriform cortex.
In humans, the beta-3 subunit, as well as the subunits of its two neighbouring genes (GABRG3 and GABRA5), are bi-allelically expressed within the cerebral cortex, indicating that the gene is not subjected to imprinting within those cells.
Imprinting Patterns
Due to the location of GABRB3 in the 15q11-13 imprinting region found in humans, this gene is subject to imprinting depending on the location and the cells developmental state. Imprinting is not present in the mouse brain, having an equal expression from maternal and paternal alleles.
Regulation
Phosphorylation of the GABAA receptor by cAMP-dependent protein kinase (PKA) has a regulatory effect that depends on the beta subunit involved. The mechanism by which the kinase is targeted towards the beta-3 subunit is unknown. AKAP79/150 binds directly to the GABRB3 subunit, which is critical for its own phosphorylation, mediated by PKA.
Gabrb3 shows significantly reduced expression postnatally, when mice are deficient in MECP2. When the MECP2 gene is knocked out, the expression of Gabrb3 is reduced, suggesting a relationship of positive regulation between the two genes.
Clinical significance
Mutations in this gene may be associated with the pathogenesis of Angelman syndrome, nonsyndromic orofacial clefts, epilepsy and autism. The GABRB3 gene has been associated with savant skills accompanying such disorders.
In mice, the knockout mutation of Gabrb3 causes severe neonatal mortality with a cleft palate phenotype; survivors experience hyperactivity, lack of coordination and epileptic seizures. These mice also exhibit changes to the vestibular system within the ear, resulting in poor swimming skills and difficulty walking on grid floors, and they are found to run in circles erratically.
Angelman syndrome
Deletion of the GABRB3 gene results in Angelman syndrome in humans, depending on the parental origin of the deletion. Deletion of the paternal allele of GABRB3 has no known implications with this syndrome, while deletion of the maternal GABRB3 allele results in development of the syndrome.
Nonsyndromic Orofacial Clefting
There is a strong association between GABRB3 expression levels and proper palate development. A disturbance in GABRB3 expression can be linked to the malformation of nonsyndromic cleft lip with or without cleft palate. Cleft lip and palate have also been observed in children who have inverted duplications encompassing the GABRB3 locus. Knockout of the beta-3 subunit in mice results in clefting of the secondary palate. Normal facial characteristics can be restored through the insertion of a Gabrb3 transgene into the mouse genome, making the Gabrb3 gene primarily responsible for cleft palate formation.
Autism Spectrum Disorder
Duplications of the Prader-Willi/Angelman syndrome region, also known as the imprinting region (15q11-13) that encompasses the GABRB3 gene are present in some patients diagnosed with Autism. These patients exhibit classic symptoms that are associated with the disorder. Duplications of the 15q11-13 region displayed in autistic patients are almost always of maternal origin (not paternal) and account for 1–2% of diagnosed autism disorder cases. This gene is also a candidate for autism because of the physiological response that benzodiazepine has on the GABA-A receptor, when used to treat seizures and anxiety disorders.
The Gabrb3 gene deficient mouse has been proposed as a model of autism spectrum disorder. These mice exhibit similar phenotypic symptoms such as non-selective attention, deficits in a variety of exploratory parameters, sociability, social novelty, nesting and lower rearing frequency as can be equated to characteristics found in patients diagnosed with autism spectrum disorder. When studying Gabrb3 deficient mice, significant hypoplasia of the cerebellar vermis was observed.
There is an unknown association between autism and the 155CA-2 locus, located within an intron in GABRB3.
Epilepsy/Childhood absence epilepsy
Defects in GABA transmission has often been implicated in epilepsy within animal models and human syndromes. Patients that are diagnosed with Angelman syndrome and have a deletion of the GABRB3 gene exhibit absence seizures. Reduced expression of the beta-3 subunit is a potential contributor to childhood absence epilepsy.
See also
GABAA receptor
Heritability of autism
References
Further reading
External links
Ion channels
Genetics of autism | GABRB3 | [
"Chemistry"
] | 2,042 | [
"Neurochemistry",
"Ion channels"
] |
14,168,085 | https://en.wikipedia.org/wiki/DNA%20ligase%204 | DNA ligase 4 also DNA ligase IV, is an enzyme that in humans is encoded by the LIG4 gene.
Function
DNA ligase 4 is an ATP-dependent DNA ligase that joins double-strand breaks during the non-homologous end joining pathway of double-strand break repair. It is also essential for V(D)J recombination. Lig4 forms a complex with XRCC4, and further interacts with the DNA-dependent protein kinase (DNA-PK) and XLF/Cernunnos, which are also required for NHEJ. The crystal structure of the Lig4/XRCC4 complex has been resolved. Defects in this gene are the cause of LIG4 syndrome. The yeast homolog of Lig4 is Dnl4.
LIG4 syndrome
In humans, deficiency of DNA ligase 4 results in a clinical condition known as LIG4 syndrome. This syndrome is characterized by cellular radiation sensitivity, growth retardation, developmental delay, microcephaly, facial dysmorphisms, increased disposition to leukemia, variable degrees of immunodeficiency and reduced number of blood cells.
Haematopoietic stem cell aging
Accumulation of DNA damage leading to stem cell exhaustion is regarded as an important aspect of aging. Deficiency of lig4 in pluripotent stem cells impairs Non-homologous end joining (NHEJ) and results in accumulation of DNA double-strand breaks and enhanced apoptosis. Lig4 deficiency in the mouse causes a progressive loss of haematopoietic stem cells and bone marrow cellularity during aging. The sensitivity of haematopoietic stem cells to lig4 deficiency suggests that lig4-mediated NHEJ is a key determinant of the ability of stem cells to maintain themselves against physiological stress over time.
Interactions
LIG4 has been shown to interact with XRCC4 via its BRCT domain. This interaction stabilizes LIG4 protein in cells; cells that are deficient for XRCC4, such as XR-1 cells, have reduced levels of LIG4.
Mechanism
LIG4 is an ATP-dependent DNA ligase. LIG4 uses ATP to adenylate itself and then transfers the AMP group to the 5' phosphate of one DNA end. Nucleophilic attack by the 3' hydroxyl group of a second DNA end and release of AMP yield the ligation product. Adenylation of LIG4 is stimulated by XRCC4 and XLF.
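Schematically, the three steps described above can be written as follows (E denotes the ligase and App-DNA the adenylated 5' end; this shorthand is a common convention, not notation from the source):

```latex
\begin{aligned}
\text{(1) self-adenylation:} \quad & \mathrm{E + ATP \longrightarrow E\text{-}AMP + PP_i} \\
\text{(2) AMP transfer:} \quad & \mathrm{E\text{-}AMP + {}^{5'}P\text{-}DNA \longrightarrow E + App\text{-}DNA} \\
\text{(3) nick sealing:} \quad & \mathrm{DNA\text{-}{}^{3'}OH + App\text{-}DNA \longrightarrow DNA{-}DNA + AMP}
\end{aligned}
```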
References
Further reading
DNA repair | DNA ligase 4 | [
"Biology"
] | 530 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
14,168,226 | https://en.wikipedia.org/wiki/List%20of%20gravity%20hills | This is a list of gravity hills and magnetic hills around the world.
A gravity hill is a place where a slight downhill slope appears to be an uphill slope due to the layout of the surrounding land, creating the optical illusion that water flows uphill or that a car left out of gear will roll uphill. Many of these sites have no specific name and are often called just "Gravity Hill", "Magnetic Hill", "Magic Road" or something similar.
Argentina
Buenos Aires Province: El camino misterioso, just outside Tandil. coordinates:
Chubut: On ruta provincial 12, connecting Gualjaina to Esquel. coordinates:
Jujuy: Locality Las Lajitas, on ruta provincial 56, between La Mendieta and Carahunco. coordinates:
Armenia
Aragatsotn Province: On the road to Lake Kari, on Mount Aragats.coordinates:
Australia
Bowen Mountain, in Bowen Mountain, New South Wales: Bowen Mountain Road, shortly after turnoff from Grose Vale Road. Known as Magnetic Mountain. Hill starts at intersection of Bowen Mountain Road and Westbury Roadcoordinates:
Forrestfield, Western Australia: Holmes Road. There is a right-hand bend about 300 metres north of Whistlepipe Ct, then a gradual left-hand bend. The location is between these two bends.coordinates:
Moonbi, New South Wales, near Tamworth: "Gravity Hill" near the Moonbi lookoutcoordinates:
Orroroo, South Australia: Magnetic Hill, on the road from Black Rock to Pekinacoordinates:
Woodend, Hanging Rock, Victoria: Straws Lanecoordinates:
Brisbane, Queensland: Spook Hill, on Mount View Road.coordinates:
Azerbaijan
Goygol District: Between Ganja and Goygol Lake.
Barbados
Gravity hill at Morgan Lewis, Saint Andrew.coordinates:
Belize
"Magnetic Hill" on the Hummingbird Highway.coordinates:
Brazil
Mato Grosso: On the MT-251 road between Cuiabá and Chapada Dos Guimarães.coordinates:
Belo Horizonte, Minas Gerais: Rua Professor Otávio Coelho de Magalhães (official name), popularly known as Rua do Amendoim (Portuguese for "Peanut Street"). coordinates:
São Thomé das Letras, Minas Gerais: Ladeira do Amendoim. coordinates:
Paraíba: On the BR-110, near Teixeira.coordinates:
Exu, Pernambuco: "Ladeira da Gameleira" on the BR-122 federal highway segment connecting Exu to Crato.coordinates:
Rio de Janeiro: Locality of Belvedere, near Petrópolis.coordinates:
Rio Grande do Norte: On the BR-226 federal highway segment between Jucurutu and Florâniacoordinates:
Santa Catarina: On the SC-110, between Bom Retiro and Urubici. coordinates:
Canada
Abbotsford, British Columbia: McKee Road just before Ledgeview Golf Course
Whonnock, Maple Ridge, British Columbia: Just south of 100th Avenue on 256th Street.coordinates:
Vernon, British Columbia: 5390 Dixon Dam Road.coordinates:
Benito, Manitoba: "Magnet Hill". Heading west after leaving P.R. No. 487 towards the Thunder Hill Ski Area, an apparent dip in the road provides the illusion.coordinates:
Moncton, New Brunswick: Magnetic Hill. One of the most well-known gravity hills worldwide.coordinates:
Bridgetown, Nova Scotia: on Hampton Mountain Road south of Valleyview Provincial Park
Burlington, Ontario: King Road, just north of Bayview Park. Stop car at "No Motorized Vehicles" sign, face south.coordinates:
Caledon, Ontario: Escarpment Sideroad just off Highway 10, at Escarpment’s intersection with the street leading to Devil’s Pulpit Golf Course.coordinates:
Dacre, Ontario: near the intersection of Highway 41, near intersection with Highway 132, known as Magnetic Hillcoordinates:
Sparta, Ontario: "Magnetic hill", on Centennial Road. coordinates:
Oshawa, Ontario: Ritson Road N (north of Raglan Road).coordinates:
Chartierville, Quebec: Magnetic Hill.coordinates:
Notre-Dame-Auxiliatrice-de-Buckland, Quebec: Route Saint-Louis (Côte magnétique de Buckland)coordinates:
Chile
Arica y Parinacota Region, Arica Province: Zona magnética, on ruta 11 (the Arica-Putre road). coordinates:
Easter Island (Rapa Nui, Isla de Pascua), Valparaíso Region: Punto magnético, near Anakena beach. coordinates:
Valparaíso Region: Arbol magnético or Algarrobo magnético, on the road connecting the town of Jahuel to the Termas de Jahuel Spa. The location is marked by a large Prosopis tree growing beside the road. coordinates:
China
Liaoning: The Strange Slope (or Magic Slope): an 80 metre (90 yard) long slope in Guaipo Resort, about 30 km (20 miles) to the north-east of the city of Shenyang. coordinates:
Liaoning: a slope near Liujia Wopeng Village, Huludao City.
Costa Rica
Alajuela Province: La cuesta magnética, on the Bijagua-Upala road. coordinates:
Cyprus
Paphos: Just after Droushia exit to Polis main road.coordinates:
Czech Republic
Close to the village of Kačerov near Zdobnice.coordinates:
On the road leading from Moravská Třebová to the village of Hřebeč, at the point where it branches off from route 35. coordinates:
Denmark
Bornholm: Magnetbakken. coordinates:
Dominican Republic
Polo: El Polo Magnético ("The Magnetic Pole")coordinates:
El Salvador
Sonsonate Department: On the CA 8W (Ruta de las flores), in Salcoatitan.coordinates:
France
Curiosité de Lauriole, Hérault, on a small road linking the D56 to the hamlet "Les Fournes", between Minerve and Siran, through Fauzan.coordinates:
Route magique des Noës, Loire department, Auvergne-Rhône-Alpes Region: Starting from Renaison go in the direction of Les Noës, continue for about 1 km (1000 yards) to the hamlet "Les Forges", and then turn left.coordinates:
La montée qui descend, Côte-d'Or Department, Bourgogne-Franche-Comté Region, near the village of Savigny-lès-Beaune.coordinates:
Rhône department, Auvergne-Rhône-Alpes Region, on the D43 road between La Poyebade and Odenas.coordinates:
Georgia
"Magic hill" in Tbilisi.coordinates:
Germany
On the L3053 between Butzbach and .coordinates:
Essen-Werden, North Rhine-Westphalia: On the three-way intersection of Klemensborn and Pastoratsberg roads, on the road part leading to the DJH Youth Hostel, starting just across the road from the public bus stop. coordinates:
Greece
Karya-Leptokarya road near Leivithra, mount Olympus, Pieria, Central Macedonia.coordinates:
Proti village (altitude 310 metres; 1020') - Analipseos ("Ascension") Monastery (altitude 930 metres; 3050') road, mount Pangaion, Serres, Central Macedonia. The spot where the effect starts is marked by signs painted on the road surface. This is a most convincing site: for a distance of about 40 metres, the uphill-going road changes inclination to become very slightly downhill, before it becomes uphill again. Within this stretch of the road, the car will move slowly (which adds to the spookiness of the situation) on its own, and then it will come to a halt. What makes the site more striking than others is that the general direction of the spontaneous movement is clearly towards the higher ground, that is, towards the ascending slope of the mountain and the monastery.coordinates:
Just after the exit of a small tunnel, near Veroia, Imathia regional unit, Central Macedonia. Caution is advised, as, due to possible traffic exiting the tunnel, this is a somewhat risky spot to stop the car at, in order to experience the effect. coordinates:
Penteli-Agios Petros-Nea Makri road, mount Penteli, North Athens.coordinates:
On the Kalamata-Areopoli provincial road, Messinia regional unit, Peloponnisos.coordinates:
Island of Kythira. On the road to the monastery of Myrtidiotissa. Before the descent.
Guatemala
Sololá Department: Paso Misterioso, on ruta nacional 11, just outside the town of Santa Cruz Quixayá, in the direction of San Lucas Tolimán.coordinates:
Honduras
Yoro Department: On the CA-13 highway, just outside the town of Toyós.coordinates:
India
Chhattisgarh state:
Surguja district - Ulta Pani magnetic hill: also called Bisar paani magnetic hill, is located at Mainpat hill station 70 km (40 miles) south of Ambikapur city.coordinates:
Kabirdham district - Dewanpatpar magnetic hill: is 40 km (25 miles) north of Pandariya on State Highway SH5 in Kabirdham district (formerly Kawardha district).coordinates:
Gujarat state:
Amreli district - Tulsishyam Anti Gravity Hill: 400 metres north of Tulsi Shyam Temple and 75 km (50 miles) south of Amreli city. coordinates:
Kutch district - Kalo Dungar magnetic hill: has a gravity hill optical illusion 5.2 km (3¼ miles) west of the Kutch Dattaterya Temple and 33 km (20 miles) northwest of Kutch city.
Ladakh union territory:
Leh district - Leh-Manali Magnetic Hill: is located 7.5 km (4¾ miles) southwest of Nimmoo on Leh on Manali-Leh highway.coordinates:
Maharashtra state:
Mumbai - Magnetic L&T flyover: it is an optical illusion on the Jogeshwari–Vikhroli Link Road (JVLR).coordinates:
Indonesia
Kelud mountain, Kediri Regency, East Java.coordinates:
Limpakuwus, Banyumas Regency, Central Java.coordinates:
Aceh Besar Regency, Sumatra.coordinates:
Iraq
Koy Sanjaq: On the drive toward Hotel Koya Palace.coordinates:
Ireland
County Louth: Cooley Peninsula, Jenkinstown, east of Dundalk, known locally as Magic Hill.coordinates:
County Sligo: Ballintrillickcoordinates:
County Tipperary: Slievenamon, overlooking Carrick-on-Suir and near Clonmel
County Waterford: Comeragh Mountains, on the road to the Mahon Fallscoordinates:
Isle of Man
Between Ronague and the Round Table, called Magnetic Hill.coordinates:
Israel
Near Amuka, Northern District.coordinates:
Jabel Mukaber, Jerusalem: The Enchanted Road.
Italy
Campania: Stradina Magica: a very small and narrow side road in Sala Consilina, Salerno province.coordinates:
Rome (Lazio): between Ariccia and Rocca di Papa on the strada regionale 218, called "Ariccia's downhill".coordinates:
Piedmont (Piemonte): Salita di Roccabruna, on the three-way intersection of strada provinciale 122 with the road leading to Sant' Anna. The road to the left appears as downhill although it is uphill.coordinates:
Trentino: The "mirage" slope in Montagnaga, again on a three-way intersection.coordinates:
Abruzzo: Village of Rosciolo dei Marsi near Avezzano, on the road leading to the church of Santa Maria in Valle Porclaneta.coordinates:
Apulia (Puglia): On a small side road beside strada statale 172, between Taranto and Martina Franca.coordinates:
Apulia (Puglia): On strada provinciale 39 between Corato and Poggiorsini.coordinates:
Sicily (Sicilia): Near the town of Santa Maria di Licodia, north of Paternò.coordinates:
Tuscany (Toscana): Between Filattiera and Caprio.coordinates:
Umbria: On the San Gemini Nord exit from strada statale 3 bis.coordinates:
Japan
Shikoku: Yashima Drive Way on Mt. Yashima, Takamatsu.coordinates:
A three-way intersection near Tōwa, Iwate Prefecture, where both roads are downhill but the right one appears as uphill.coordinates:
A three-way intersection near the town of Minamitane, Tanegashima island, Kagoshima prefecture.coordinates:
"Ghost slope" in Kume-Jima, Okinawa Prefecture.coordinates:
"Ghost slope" in Okagaki, Fukuoka Prefecture.
Jordan
Οn the road between Mount Nebo and the Dead Sea.coordinates:
Kenya
Machakos County: Kituluni Hill, also known as Kyamwilu, 12 kilometres (7 miles) from Machakos town along the Machakos-Kangundo road.coordinates:
Lebanon
Road near Hamat.coordinates:
Libya
Jafara district, near the town of Gharyan.
Lithuania
Auxiliary road to Kruonis Pumped Storage Plantcoordinates:
Malaysia
Sabah: Kimanis–Keningau Highway, about 28 km (17 miles) from Keningau.coordinates:
Mexico
Baja California: On an interchange on the Tijuana-Ensenada scenic highway.coordinates:
Chihuahua: On CHIH 42, just outside Santa Eulalia. coordinates:
Colima: Zona mágica, between Comala and Suchitlán. coordinates:
León, Guanajuato. On Cima del Sol Street, just when one enters from Campestre Boulevard.coordinates:
Puebla: Punto Marconi, between Metepec and Atlimeyaya. coordinates:
Veracruz: Just outside Acultzingo, on the road leading to Tehuacan.coordinates:
Veracruz: On the Santana-Los Atlixcos-Topilito road. coordinates:
Moldova
Orhei: an area near the M2 highway.coordinates:
Oman
Salalah: Anti-Gravity Pointcoordinates:
Panama
Volcán, Chiriquí road connecting Volcán with Cerro Puntacoordinates:
Philippines
Ternate, Cavite, Luzon island: On the Nasugbu-Ternate highway.coordinates:
Los Baños, Laguna, Luzon island: Mount Makiling, Jamboree road.coordinates:
Negros Island: On the Bacolod-San Carlos road, between Murcia and Don Salvador Benedicto.coordinates:
Poland
Karpacz, a small town in the Sudetes mountains in south-western Poland, a section of Strażacka Street; the illusion is referred to as a "gravitational anomaly" (Polish: anomalia grawitacji) and is a local tourist attraction.coordinates:
Żar Mountain (Pol. Góra Żar, also called Magiczna Góra) - a small mountain in Little Beskids in southern Poland; there is a road segment circling the mountain where "cars roll uphill"
The Czarodziejska Górka is a 200-meter stretch of the road between Strączno and Rutwica near Wałcz in north-west Poland where objects, including cars, roll uphill.coordinates:
Izersky antigravity point, near Świeradów-Zdrój, south-western Poland, marked by a stone like the one in Karpacz.coordinates:
Portugal
Braga: Three-way intersection behind Bom Jesus do Monte, where both roads are downhill but the left one (the side road) appears as uphill.coordinates:
Viseu: On the CM1225 road ascending to Serra da Arada.coordinates:
Romania
County of Maramureș: the road between Budești and Cavnic.coordinates:
Saudi Arabia
Wadi al-Jinn (Valley of Jinns) in Madinah, north-east of Masjid al-Nabawi. Its actual name is Wadi Al Baidah; the name Wadi Al Jinn (Valley of Jinns) was given by local tour operators and tourists.coordinates:
Serbia
In the village of Ivanje on Radan Mountain near Kuršumlija.coordinates:
Slovakia
On the road from Lipovce to Lačnov.coordinates:
South Africa
Kwazulu-Natal: On the R74 provincial route between Greytown and Weenen.coordinates:
Somerset West, City of Cape Town, Western Cape: Spook Hill, on Parel Vallei Road.coordinates:
Rensburg, Heidelberg, Gauteng: Die Spookbrug, on Plein Street.coordinates:
South Korea
Jeju Island: Dokebi Road ("Mysterious Road")coordinates:
Spain
Aragón, Province of Zaragoza: Cuesta mágica del Moncayo, near San Martín de la Virgen del Moncayo.coordinates:
Andalusia, Province of Málaga: Cuesta de Ronda, on the A-369 road near Ronda.coordinates:
Valencian Community, Province of Alicante: Cuesta mágica de Crevillente.coordinates:
Sweden
Idre: Trollvägen, Nipfjälletcoordinates:
Thailand
"Magic Hill" on the Mae Sot-Tak road (Route 12, a part of the Asian Highway 1).coordinates:
Trinidad and Tobago
The magnetic road: On the North Coast Road near Maracas Bay.coordinates:
Turkey
Bursa: On the Mudanya-Bursa highway (D575).coordinates:
Bursa: On a small road in the Uludağ ski resort.coordinates:
Erzurum: On Gizemli yol.coordinates:
Kırklareli: Demirköy Yolu, Yenice, Pınarhisar.coordinates:
United Kingdom
England
Buckinghamshire: Dancers End Lane, Aston Clinton.coordinates:
Essex: Hangman's Hill, High Beech, Epping Forest.coordinates:
West Sussex: Rogate, on the A272 road to the west of the village.
Northern Ireland
County Down: The Magic Hill. Mourne Mountains, on the B27 head southeast towards the Spelga Damcoordinates:
Scotland
South Ayrshire: Electric Brae, on the A719 between Dunure and Croy Braecoordinates:
Wales
Powys: Llangattock (Crickhowell), Brecon Beacons, approximately 3 miles (4.8 km) west of Llangattock.coordinates:
United States
Alabama
Gravity Hill Lane at Oak Grove (Talladega County).coordinates:
Henry's Hill near Mount Hope (Lawrence County).coordinates:
Burnt Mountain Gravity Hill
Scenic spot, Jasper, GA Pickens County, Georgia
Alaska
Anchorage: Upper Huffman Road, Hillside, Anchorage Borough.coordinates:
Arkansas
Dyer, near Alma.coordinates:
Helena: Sulphur Springs Road.coordinates:
California
Altadena, 1054 E. Loma Alta Dr.coordinates:
Moreno Valley, on Nason Street (while driving south, immediately after Elder St).coordinates:
Ocotillo, on the off ramp at the exit from I-8 westbound to Mountain Springs road. coordinates:
Rohnert Park (in Sonoma County): From U.S. Route 101 (US 101) freeway, take Rohnert Park Expressway east to Petaluma Hill Road, Petaluma Hill Road south to Roberts Road, Roberts Road east to Lichau Road. At an iron gate with the words "Gracias San Antonio" is the start of the gravity hill section (on the western slope of Sonoma Mountain near the Fairfield Osborn Preserve).coordinates:
Santa Cruz: 465 Mystery Spot Road off Branciforte Drive near State Route 17 (SR 17) within the redwood forest. A tourist attraction called "Mystery Spot" is operated at the site.
San Francisco: In Golden Gate Park, on JFK drive, just West of Transverse Drive, where the water from Rainbow Falls flows towards Lloyd Lake. Coordinates:
Sylmar: On Kagel Canyon Rd.coordinates:
Whittier: Site at cemetery in Whittier, on Workman Mill Road, near Rio Hondo College
Connecticut
Sterling: On Main Street, just before its intersection with Snake Meadow Hill Road.coordinates:
Florida
Lake Wales: Spook Hill, US 27 between Orlando and Tampacoordinates:
Georgia
Cumming: Booger Hill a.k.a. Booger Mountain
Fort Gaines: Spook Hill, north of towncoordinates:
Indiana
Mooresville: Gravity Hill.coordinates:
Kentucky
Covington: in Devou Park.coordinates:
Princeton: west of Princeton, off US 62, lies a gravity hill on Kentucky Route 2618 (KY 2618). Vehicles stop in the tunnel that the Western Kentucky Parkway bridge creates, and the vehicle will roll all the way to the end of the road at the stop sign.coordinates:
Maryland
Burkittsville: On Gapland road (W Main Street).coordinates:
Massachusetts
Greenfield: Shelburne Road facing east towards Greenfield immediately after the Massachusetts Route 2 (Route 2) overpass.coordinates:
Michigan
Blaine Township (near Arcadia): Putney Road south of its intersection with Joyfield Road. Cars appear to roll toward the church at the intersection. Local legend claims the church is pulling sinners towards its doors for redemption.coordinates:
Rose City: at the end of Reasner Road, past Heath Roadcoordinates:
Farmington Hills: in Oakwood Cemetery, if you enter by the west gate and stop by a knotted tree, then put your car in neutral, it will appear to roll back uphill.
Minnesota
330th St and 660th Ave., Watkins (Forest City Township)
Mississippi
Burnsville
Missouri
Freeman: southwest of town near the Kansas state line at the intersection of Missouri Route D (Route D) and East 299th Street. Just outside Louisburg, Kansas.coordinates:
Montana
Columbia Falls: The Montana Vortex
New Jersey
Franklin Lakes: Ewing Avenue exit off Route 208 South
Jackson: New Prospect Roadcoordinates:
Titusville: On Pleasant Valley Road.coordinates:
New York
Yates County: Spook Hill, on Newell Road.coordinates:
Cattaraugus County: On Promised Land Road, Portville.coordinates:
Rockland County: On Spook Rock Road, at its intersection with US 202. Very convincing. coordinates:
North Carolina
Boone: Mystery Hill
Richfield: Gravity Hill on Richfield Rd. Local legends suggest paranormal causes for phenomenon.coordinates:
Scotland County: Gravity Hill
North Dakota
Near Sentinel Butte.
Ohio
Kirtland Hills: King Memorial Road coordinates:
Oklahoma
Bartlesville: Moose Lodge Road near the railway crossing.coordinates:
Springer: Near Ardmore, on Pitt Road.coordinates:
Oregon
Gold Hill: the Oregon Vortex
Pennsylvania
Near Lewisberry, York County, on Pleasantview Road (also rendered Pleasant View Road) at its intersection with Pennsylvania Route 177 (PA 177). This is an extremely convincing site.coordinates:
North Park, Pittsburgh: Intersection of McKinney Road and Kummer Road.coordinates:
New Paris: Gravity Hill Road (two gravity hills in this area).coordinates:
Near Uniontown: Laurel Caverns
South Dakota
Rapid City: Cosmos Mystery Area, south of Rapid City on US 16, near Mount Rushmore
Texas
El Paso: Thunderbird Drive facing south.coordinates:
San Antonio: Just east of the San Antonio Missions National Historical Park at the railway crossing at Villamain Road and Shane Road.coordinates:
Utah
Salt Lake City: A few blocks northeast of the Capitol building in Salt Lake City. A small road (Bonneville Boulevard) loops around a park called Memory Grove.coordinates:
Virginia
Danville: Berry Hill Road (US 311/SR 863) and Oak Hill Road (SR 862)coordinates:
Washington
Prosser: Near a small farm on North Crosby Road, northeast of Prosser.coordinates:
Wisconsin
Shullsburg: Judgement Street, just after Rennick Road.coordinates:
Stockbridge: Joe Road, south of Stockbridge west off Wisconsin Highway 55 (WIS 55).coordinates:
Wyoming
Casper: Garden Creek Road, near the entrance to Rotary Park, when going toward Casper Mountain Road.coordinates:
Uruguay
Maldonado Department: Cumbres de la Ballena.
Uzbekistan
Near Boysun.coordinates:
References
Optical illusions | List of gravity hills | [
"Physics"
] | 5,099 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
14,169,590 | https://en.wikipedia.org/wiki/Super-LumiNova | Super-LumiNova is a brand name under which strontium aluminate–based, non-radioactive, nontoxic photoluminescent (afterglow) pigments are marketed for illuminating markings on watch dials, hands, bezels, etc. in the dark. When activated with suitable dopants (europium and dysprosium), strontium aluminate acts as a photoluminescent phosphor with a long persistence of phosphorescence. This technology offers up to ten times the brightness of earlier zinc sulfide–based materials.
These types of phosphorescent pigments, often called lume, operate like a rechargeable light battery. After sufficient activation by sunlight, fluorescent, LED, UV (blacklight), incandescent and other light sources, they glow in the dark for hours. Electrons within the pigment are "excited" by ultraviolet light exposure (the excitation wavelengths for strontium aluminate range from 200 to 450 nm) to a higher energy state and, after the excitation source is removed, fall back to their normal energy state, releasing the stored energy as visible light over a period of time. Although the glow fades over time, sufficiently thickly applied larger markings remain visible to dark-adapted human eyes for the whole night. This activation by ultraviolet light exposure and the subsequent light emission can be repeated again and again.
History
Nemoto & Co., Ltd. – a global manufacturer of phosphorescent pigments and other specialized phosphors – was founded by Kenzo Nemoto in December 1941 as a luminous paint processing company and has supplied and developed luminous paint to the watch and clock and aviation instruments industry since.
Super-LumiNova is based on LumiNova branded pigments, invented in 1993 by the Nemoto staff members Yoshihiko Murayama, Nobuyoshi Takeuchi, Yasumitsu Aoki and Takashi Matsuzawa as a safe replacement for radium-based luminous paints. The invention was patented in 1994 by Nemoto & Co., Ltd. and licensed to other manufacturers and watch brands.
In 1998 Nemoto & Co. established a joint venture with RC Tritec AG, called LumiNova AG Switzerland, to manufacture 100 percent Swiss-made afterglow pigments branded as Super-LumiNova. After that, the production of radioactive luminous compounds by RC Tritec AG was completely stopped. According to RC Tritec AG, the Swiss watch brands all use their Super-LumiNova pigments.
Color variations and grades
Over time, RC Tritec AG developed other afterglow color variations than the original Nemoto & Co. C3 green and higher grades of afterglow pigments.
Any Super-LumiNova emission color other than C3 is achieved by adding colorants that absorb light and hence limit the amount of light the afterglow pigment can absorb and emit. After the C3 variant (emission at 515 nm), which glows green and appears pale yellow-green in daylight, the BGW9 variant (emission at 485 nm, close to the turquoise wavelength), which glows blue-green and appears white in daylight, is the second most effective variant in terms of pure afterglow brightness. Different colors can, however, be chosen to optimize perceived light emission, dictated by the variance of the human eye's luminous efficiency function. Maximal light emission around wavelengths of 555 nm (green) is important for optimal photopic vision, which uses the eye's cone cells for observation in, or just coming from, well-lit conditions. Maximal light emission around wavelengths of 498 nm (cyan) is important for optimal scotopic vision, which uses the eye's rod cells for observation in low-light conditions. Besides technical and physiological reasons, esthetic or other considerations can also influence Super-LumiNova color choices.
Super-LumiNova is offered in three grade levels: Standard, A and X1. The initial brightness of these grades does not vary significantly, but the decay of light intensity over time is significantly reduced for the A and X1 grades. This means the X1 grade takes the longest to become too dim to be useful to the human eye. Not all Super-LumiNova color variations are available in all three grades.
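The grade differences described above can be pictured with a toy decay model. The Python sketch below compares hypothetical decay curves with identical initial brightness but different decay rates; the functional form and every constant are invented for illustration and are not manufacturer data.

```python
# Toy afterglow model: identical initial brightness, grade-dependent decay.
# A power-law-like decay is a common rough description of persistent
# phosphorescence; the exponents below are assumed, illustrative values.

GRADE_DECAY = {"Standard": 1.10, "A": 0.95, "X1": 0.80}  # assumed exponents

def relative_brightness(minutes, grade):
    # Normalised so every grade starts at 1.0 at t = 1 minute.
    return minutes ** -GRADE_DECAY[grade]

for grade in GRADE_DECAY:
    b = relative_brightness(480, grade)  # after 8 hours in the dark
    print(f"{grade:>8}: {b:.4f} of the 1-minute brightness left")
```

A smaller exponent decays more slowly, so the hypothetical "X1" curve stays brightest through the night even though all three curves start at the same level, mirroring the qualitative behaviour described above.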
Stability
Because no chemical change occurs during a charge-discharge cycle, the pigments theoretically retain their afterglow properties indefinitely. A reduction in light intensity occurs only very slowly, almost imperceptibly. This reduction increases with the degree of coloring of the pigments: intensely colored types lose their intensity more quickly than neutral ones. High temperatures of up to several hundred degrees Celsius are not a problem. The only thing that must be avoided is prolonged contact with water or high humidity, as this creates a hydroxide layer that negatively affects the light emission intensity.
Uses
Besides being used in timepieces by industry and hobbyists, Super-LumiNova is also marketed for application on:
Instruments: scales, dials, markings, indicators, etc.
Scales: engravings, silkscreen-printing
Aviation instruments and markings
Jewelry
Safety- and emergency panels, signs, markings
Aiming posts
Various other parts
Application methods
Super-LumiNova granulated pigments are applied either by manual application, screen printing or pad printing. RC Tritec AG recommends a limited application thickness in one or multiple layers; beyond that, the ultraviolet light has difficulty effectively reaching and activating the bottom of the deposited pigment, diminishing the returns of additional application thickness. The pigments and binders are produced separately, as there is no single binder that is optimal for all applications. This leads RC Tritec AG to offer many solvent- and non-solvent-based binder systems to maximally concentrate the granulated pigments in the mixture for application on various surfaces.
Alternatively, RC Tritec AG offers Lumicast pieces, which are highly concentrated luminous Super-LumiNova 3D-castings. According to RC Tritec AG these ceramic parts can be made in any customer desired shape and result in a higher light emission brightness when compared to the common application methods. Lumicast pieces can be glued or form fitted on various surfaces.
Alternative for afterglow pigments
By the late 1960s, radium was phased out and replaced with safer alternatives.
Tritium was used on the original Panerai Luminor and Radiomir dive watches, and on almost all Swiss watches from 1960 to 1998, when it was banned. Tritium-based substances ceased to be used by Omega SA in 1997.
In the 21st century, one radioluminescent alternative to afterglow pigments, requiring radiation protection, is being produced and used for watches and other applications. These are tritium-based devices called "gaseous tritium light sources" (GTLS). GTLS are made using sturdy (often glass) containers internally coated with a phosphor layer and filled with tritium gas before the containers are permanently sealed. They have the advantage of being self-powered and producing a consistent luminosity that does not gradually fade during the night. However, GTLS contain radioactive tritium gas, which has a half-life of slightly over 12.3 years. Additionally, phosphor degradation will cause the brightness of a tritium container to drop by more than radioactive decay alone would cause during that period. The more tritium that is initially inserted in the container, the brighter it is to begin with, and the longer its useful life. This means the intensity of the tritium-powered light source will slowly fade, generally becoming too dim to be useful for dark-adapted human eyes after 20 to 30 years.
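The 12.3-year half-life quoted above makes the fading easy to quantify. Here is a minimal Python sketch of the radioactive-decay component alone; phosphor degradation, which the text notes adds further loss, is deliberately ignored.

```python
TRITIUM_HALF_LIFE_YEARS = 12.3

def tritium_fraction(years):
    # Radioactive decay only; phosphor degradation would lower this further.
    return 0.5 ** (years / TRITIUM_HALF_LIFE_YEARS)

for years in (12.3, 20.0, 30.0):
    frac = tritium_fraction(years)
    print(f"after {years:4.1f} years: {frac:.0%} of initial tritium activity")
```

The decay component alone leaves roughly a third of the activity after 20 years and under a fifth after 30, consistent with the stated 20-to-30-year useful life once phosphor degradation is added on top.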
See also
Lumibrite — different named strontium aluminate–based phosphorescent pigments
Radium dial
References
External links
Technical Features Superluminova - Archived from the original
Luminosity in watches
Production of strontium aluminate
All About Super-LumiNova
Luminescence
Horology | Super-LumiNova | [
"Physics",
"Chemistry"
] | 1,619 | [
"Luminescence",
"Molecular physics",
"Physical quantities",
"Horology",
"Time",
"Spacetime"
] |
14,170,139 | https://en.wikipedia.org/wiki/P2RY2 | P2Y purinoceptor 2 is a protein that in humans is encoded by the P2RY2 gene.
The product of this gene, P2Y2 belongs to the family of G-protein coupled receptors. This family has several receptor subtypes with different pharmacological selectivity, which overlaps in some cases, for various adenosine and uridine nucleotides. This receptor is responsive to both adenosine and uridine nucleotides. It may participate in control of the cell cycle of endometrial carcinoma cells. Three transcript variants encoding the same protein have been identified for this gene.
See also
P2Y receptor
Denufosol, a P2Y2 agonist
References
Further reading
External links
G protein-coupled receptors | P2RY2 | [
"Chemistry"
] | 165 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,170,618 | https://en.wikipedia.org/wiki/Adenosine%20A3%20receptor |
The adenosine A3 receptor, also known as ADORA3, is an adenosine receptor, but also denotes the human gene encoding it.
Function
Adenosine A3 receptors are G protein-coupled receptors that couple to Gi/Gq and are involved in a variety of intracellular signaling pathways and physiological functions. It mediates a sustained cardioprotective function during cardiac ischemia, it is involved in the inhibition of neutrophil degranulation in neutrophil-mediated tissue injury, it has been implicated in both neuroprotective and neurodegenerative effects, and it may also mediate both cell proliferation and cell death.
Recent publications demonstrate that adenosine A3 receptor antagonists (e.g., SSR161421) could have therapeutic potential in bronchial asthma (17,18).
Gene
Multiple transcript variants encoding different isoforms have been found for this gene.
Therapeutic implications
An adenosine A3 receptor agonist (CF-101) is in clinical trials for the treatment of rheumatoid arthritis.
In a mouse model of infarction the A3 selective agonist CP-532,903 protected against myocardial ischemia and reperfusion injury.
Selective Ligands
A number of selective A3 ligands are available.
Agonists/Positive Allosteric Modulators
2-(1-Hexynyl)-N-methyladenosine
CF-101 (IB-MECA)
CF-102
2-Cl-IB-MECA
CP-532,903
Inosine
LUF-6000
MRS-3558
AST-004
Antagonists/Negative Allosteric Modulators
KF-26777
MRS-545
MRS-1191
MRS-1220
MRS-1334
MRS-1523
MRS-3777
MRE-3005-F20
MRE-3008-F20
PSB-11
OT-7999
VUF-5574
SSR161421
ISAM-DM10
Inverse Agonists
PSB-10
References
Further reading
External links
Adenosine receptors | Adenosine A3 receptor | [
"Chemistry"
] | 454 | [
"Adenosine receptors",
"Signal transduction"
] |
14,171,503 | https://en.wikipedia.org/wiki/NFYB | Nuclear transcription factor Y subunit beta is a protein that in humans is encoded by the NFYB gene.
Function
The protein encoded by this gene is one subunit of a trimeric complex, forming a highly conserved transcription factor that binds with high specificity to CCAAT motifs in the promoter regions of a variety of genes. This gene product, subunit B, forms a tight dimer with the C subunit, a prerequisite for subunit A association. The resulting trimer binds to DNA with high specificity and affinity. Subunits B and C each contain a histone-like motif. The histone-like nature of these subunits is supported by two types of evidence: protein sequence alignments and experiments with mutants.
Interactions
NFYB has been shown to interact with:
CEBPZ,
CNTN2,
Myc, and
TBP
References
Further reading
External links
Transcription factors | NFYB | [
"Chemistry",
"Biology"
] | 181 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,173,544 | https://en.wikipedia.org/wiki/Seproxetine | Seproxetine, also known as (S)-norfluoxetine, is a selective serotonin reuptake inhibitor (SSRI). It is the S enantiomer of norfluoxetine, the main active metabolite of the widely used antidepressant fluoxetine; it is nearly four times more selective than fluoxetine for stimulating neurosteroid synthesis relative to serotonin reuptake inhibition. It is formed through the demethylation (removal of a methyl group) of fluoxetine. Seproxetine is an inhibitor of the serotonin and dopamine transporters as well as of the 5-HT2A and 5-HT2C receptors. It was being investigated by Eli Lilly and Company as an antidepressant; however, it inhibited the KvLQT1 protein, which is responsible for the management of the QT interval, the time it takes for the heart to contract and recover. Because of this inhibition the QT interval was prolonged, which could lead to significant cardiac complications, and development of the medication was discontinued. Tests of its efficacy found that it was equivalent to fluoxetine, but sixteen times more potent than the R enantiomer of norfluoxetine.
References
Drugs developed by Eli Lilly and Company
Human drug metabolites
Trifluoromethyl compounds
Phenol ethers
Selective serotonin reuptake inhibitors
Abandoned drugs | Seproxetine | [
"Chemistry"
] | 316 | [
"Chemicals in medicine",
"Drug safety",
"Human drug metabolites",
"Abandoned drugs"
] |
14,173,546 | https://en.wikipedia.org/wiki/Oncogenic%20retroviridae%20protein | Oncogenic retroviridae proteins are retroviral proteins that have the ability to transform cells. They can induce sarcomas, leukaemias, lymphomas, and mammary carcinomas. These include the gag-onc fusion protein, rex, tax, v-fms, ras, v-myc, v-src, v-akt, v-cbl, v-crk, v-maf, v-abl, v-erbA, v-erbB, v-fos, v-mos, v-myb, v-raf, v-rel, and v-sis. The "v" prefix indicates viral genes which once originated as similarly named genes of the host species, but have since been altered through independent evolution as retroviral components. Not all retroviral proteins are oncogenic. The phrase was introduced as a MeSH term in 1990, under which over 6000 primary scientific publications are indexed.
See also
Oncogenic
References
Proteins | Oncogenic retroviridae protein | [
"Chemistry"
] | 216 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
11,574,361 | https://en.wikipedia.org/wiki/Computable%20model%20theory | Computable model theory is a branch of model theory which deals with questions of computability as they apply to model-theoretical structures.
Computable model theory introduces the ideas of computable and decidable models and theories and one of the basic problems is discovering whether or not computable or decidable models fulfilling certain model-theoretic conditions can be shown to exist.
Computable model theory was developed almost simultaneously by mathematicians in the West, primarily located in the United States and Australia, and Soviet Russia during the middle of the 20th century. Because of the Cold War there was little communication between these two groups and so a number of important results were discovered independently.
See also
Vaught conjecture
References
Constructivism (mathematics)
Model theory | Computable model theory | [
"Mathematics"
] | 155 | [
"Mathematical logic stubs",
"Mathematical logic",
"Constructivism (mathematics)",
"Model theory"
] |
11,578,174 | https://en.wikipedia.org/wiki/Air%20time%20%28rides%29 | In the context of amusement rides, air time, or airtime, refers to the time during which riders of a roller coaster or other ride experience either weightlessness or negative G-forces. The negative g-forces that a rider experiences are what create the sensation of floating out of the seat. With roller coasters, air time is usually achieved when the train travels over a hill at speed. A rider will feel different sensations depending on whether the ride delivers "ejector" or "floater" airtime.
In 2001 the Guinness World Records recorded Superman: Escape from Krypton, located at Six Flags Magic Mountain, Valencia, California, one of the fastest roller coasters in the world, where riders experienced a then-record 6.5 seconds of 'airtime' or negative G-force. Hypercoasters, such as Magnum XL-200 at Cedar Point, Behemoth at Canada's Wonderland, Superman the Ride at Six Flags New England, Shambhala at PortAventura Park and Goliath at Six Flags Over Georgia, along with many wooden roller coasters, such as Balder at Liseberg, The Voyage at Holiday World in Santa Claus, Indiana, and El Toro at Six Flags Great Adventure in Jackson, New Jersey, are rides known for having a particularly high total air time. Upon opening in 2018 at Cedar Point in Sandusky, Ohio, Steel Vengeance, the world's tallest and fastest hybrid coaster, set the record for the most airtime on a roller coaster at 27.2 seconds.
Physics
Air time is a result of the effects of the inertia of the train and the riders: as the train goes over a hill transitioning from an ascent into a descent guided by the rails, the inertia of the relatively loosely-attached riders causes them to momentarily continue upwards, resulting in the riders being lifted out of their seats. The duration of air time on a particular hill is dependent on the velocity of the train, gravity, and the radius of the track's transition from ascent to descent. Zero-G (where the net vertical G-force is 0) is achieved when the downward acceleration of the train is equal to that due to gravity; where the downward acceleration is greater, negative Gs arise.
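To make the zero-G condition concrete, here is a small Python sketch computing the vertical G-force felt at the crest of a hill; treating the crest as a circular arc of radius r is the usual textbook simplification, and the sample numbers are purely illustrative.

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def vertical_g_at_crest(speed_mps, radius_m):
    """Net vertical G-force at the top of a circular hill crest.

    1.0 = normal weight; 0.0 = weightless (zero-G); negative values mean
    the track pulls the train over the crest faster than free fall, so
    loosely restrained riders lift out of their seats (ejector airtime).
    """
    return 1.0 - speed_mps ** 2 / (G * radius_m)

radius = 20.0  # m, assumed crest radius
v_zero_g = math.sqrt(G * radius)  # speed giving exactly 0 G at the crest
print(f"zero-G crest speed for r = {radius} m: {v_zero_g:.1f} m/s")
for v in (10.0, v_zero_g, 20.0):
    print(f"v = {v:5.1f} m/s -> {vertical_g_at_crest(v, radius):+.2f} G")
```

The formula shows the dependence described above: for a fixed crest radius, slower trains give positive Gs, the speed sqrt(g*r) gives exactly zero-G, and anything faster produces negative Gs.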
The zero-gravity roll is a roll specifically designed to create the effect of weightlessness and thereby produce air time.
Air time is generally understood to fall under two categories: "floater" air time and "ejector" air time. Floater air time provides passengers with the sensation of gently floating upwards, which can be described as near perfect weightlessness. Ejector is more violent and sudden, producing a sharp moment of negative g-forces lifting riders up off their seats. Roller coasters built by the manufacturing company Rocky Mountain Construction are famous for providing ejector air time.
As well as rollercoasters, drop towers can provide the feeling of weightlessness. For example, in the case of The Twilight Zone Tower of Terror at Disney's Hollywood Studios, Tokyo DisneySea, and Disneyland Paris, the elevator drops riders faster than gravity normally would, causing them to rise off of their seats by several inches whilst being held down by only a seat belt, creating the sensation of zero-G. Most drop towers, however, have shoulder bars, preventing riders from rising significantly from their seats, even where negative Gs are present.
The motion-simulator ride Mission: SPACE at EPCOT also includes the sensation of weightlessness after takeoff, just as one enters space.
References
Roller coaster elements
Weightlessness
Acceleration | Air time (rides) | [
"Physics",
"Mathematics",
"Technology"
] | 727 | [
"Physical quantities",
"Acceleration",
"Roller coaster elements",
"Quantity",
"Wikipedia categories named after physical quantities",
"Components"
] |
11,578,785 | https://en.wikipedia.org/wiki/Jordan%E2%80%93Chevalley%20decomposition | In mathematics, specifically linear algebra, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator in a unique way as the sum of two other linear operators which are simpler to understand. Specifically, one part is potentially diagonalisable and the other is nilpotent. The two parts are polynomials in the operator, which makes them behave nicely in algebraic manipulations.
The decomposition has a short description when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than are needed for the existence of a Jordan normal form. Hence the Jordan–Chevalley decomposition can be seen as a generalisation of the Jordan normal form, which is also reflected in several proofs of it.
It is closely related to the Wedderburn principal theorem about associative algebras, which also leads to several analogues in Lie algebras. Analogues of the Jordan–Chevalley decomposition also exist for elements of linear algebraic groups and Lie groups via a multiplicative reformulation. The decomposition is an important tool in the study of all of these objects, and was developed for this purpose.
In many texts, the potentially diagonalisable part is also characterised as the semisimple part.
Introduction
A basic question in linear algebra is whether an operator on a finite-dimensional vector space can be diagonalised. Whether this is possible is closely related to the eigenvalues of the operator. In several contexts, one may be dealing with many operators which are not diagonalisable. Even over an algebraically closed field, a diagonalisation may not exist. In this context, the Jordan normal form achieves the best possible result akin to a diagonalisation. For linear operators over a field which is not algebraically closed, there may be no eigenvector at all. This latter point is not the main concern dealt with by the Jordan–Chevalley decomposition. To avoid this problem, instead potentially diagonalisable operators are considered, which are those that admit a diagonalisation over some field (or equivalently over the algebraic closure of the field under consideration).
The operators which are "the furthest away" from being diagonalisable are nilpotent operators. An operator N (or more generally an element of a ring) is said to be nilpotent when there is some positive integer m such that N^m = 0. In several contexts in abstract algebra, the presence of nilpotent elements makes a ring much more complicated to work with. To some extent, this is also the case for linear operators. The Jordan–Chevalley decomposition "separates out" the nilpotent part of an operator which causes it to be not potentially diagonalisable. So when it exists, the complications introduced by nilpotent operators and their interaction with other operators can be understood using the Jordan–Chevalley decomposition.
Historically, the Jordan–Chevalley decomposition was motivated by the applications to the theory of Lie algebras and linear algebraic groups, as described in sections below.
Decomposition of a linear operator
Let k be a field, V a finite-dimensional vector space over k, and T a linear operator on V (equivalently, a matrix with entries from k). If the minimal polynomial of T splits over k (for example if k is algebraically closed), then T has a Jordan normal form J. If D is the diagonal part of J, let N = J − D be the remaining part. Then J = D + N is a decomposition where D is diagonalisable and N is nilpotent. This restatement of the normal form as an additive decomposition not only makes the numerical computation more stable, but can be generalised to cases where the minimal polynomial of T does not split.
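For instance, a single 2 × 2 Jordan block with eigenvalue λ splits as
J = (λ 1 / 0 λ) = (λ 0 / 0 λ) + (0 1 / 0 0) = D + N,
where D is diagonal, N² = 0, and D and N commute (here the rows of each matrix are separated by a slash).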
If the minimal polynomial of T splits into distinct linear factors, then T is diagonalisable. Therefore, if the minimal polynomial of T is at least separable, then T is potentially diagonalisable. The Jordan–Chevalley decomposition is concerned with the more general case where the minimal polynomial of T is a product of separable polynomials.
Let T be any linear operator on the finite-dimensional vector space V over the field k. A Jordan–Chevalley decomposition of T is an expression of it as a sum
T = S + N,
where S is potentially diagonalisable, N is nilpotent, and S N = N S. Such a decomposition exists if and only if the minimal polynomial of T is a product of separable polynomials, and in that case S and N are uniquely determined and are polynomials in T.
Several proofs are discussed in (). Two arguments are also described below.
If k is a perfect field, then every polynomial is a product of separable polynomials (since every polynomial is a product of its irreducible factors, and these are separable over a perfect field). So in this case, the Jordan–Chevalley decomposition always exists. Moreover, over a perfect field, a polynomial is separable if and only if it is square-free. Therefore an operator is potentially diagonalisable if and only if its minimal polynomial is square-free. In general (over any field), the minimal polynomial of a linear operator is square-free if and only if the operator is semisimple. (In particular, the sum of two commuting semisimple operators is always semisimple over a perfect field. The same statement is not true over general fields.) The property of being semisimple is more relevant than being potentially diagonalisable in most contexts where the Jordan–Chevalley decomposition is applied, such as for Lie algebras. For these reasons, many texts restrict to the case of perfect fields.
Proof of uniqueness and necessity
That S and N are polynomials in T implies in particular that they commute with any operator that commutes with T. This observation underlies the uniqueness proof.
Let T = S + N be a Jordan–Chevalley decomposition in which S and (hence also) N are polynomials in T. Let T = S′ + N′ be any Jordan–Chevalley decomposition. Then S − S′ = N′ − N, and S′ and N′ both commute with T, hence with S and N since these are polynomials in T. The sum of commuting nilpotent operators is again nilpotent, and the sum of commuting potentially diagonalisable operators is again potentially diagonalisable (because they are simultaneously diagonalisable over the algebraic closure of k). Since the only operator which is both potentially diagonalisable and nilpotent is the zero operator, it follows that S = S′ and N = N′.
To show that the condition that T have a minimal polynomial which is a product of separable polynomials is necessary, suppose that T = S + N is some Jordan–Chevalley decomposition. Letting f be the separable minimal polynomial of S, one can check using the binomial theorem that f(T) = f(S + N) can be written as N r, where r is some polynomial in S and N (using that S and N commute and that f(S) = 0). Moreover, for some m, N^m = 0. Thus f(T)^m = 0 and so the minimal polynomial of T must divide f^m. As f^m is a product of separable polynomials (namely of m copies of f), so is the minimal polynomial.
Concrete example for non-existence
If the ground field is not perfect, then a Jordan–Chevalley decomposition may not exist, as it is possible that the minimal polynomial is not a product of separable polynomials. The simplest such example is the following. Let p be a prime number, let k be an imperfect field of characteristic p (e.g. the rational function field F_p(t)) and choose a ∈ k that is not a p-th power. Let V = k[X]/((X^p − a)²), let x be the image of X in the quotient and let T be the k-linear operator given by multiplication by x on V. Note that the minimal polynomial of T is precisely (X^p − a)², which is inseparable and a square. By the necessity of the condition for the Jordan–Chevalley decomposition (as shown in the last section), this operator does not have a Jordan–Chevalley decomposition. It can be instructive to see concretely why there is at least no decomposition into a square-free and a nilpotent part.
Note that T has as its invariant k-linear subspaces precisely the ideals of V viewed as a ring, which correspond to the ideals of k[X] containing (X^p − a)². Since X^p − a is irreducible in k[X], the ideals of V are 0, (x^p − a)V and V. Suppose T = S + N for commuting k-linear operators S and N that are respectively semisimple (just over k, which is weaker than semisimplicity over an algebraic closure of k and also weaker than being potentially diagonalisable) and nilpotent. Since S and N commute, they each commute with T = S + N and hence each acts k[x]-linearly on V. Therefore S and N are each given by multiplication by respective members s and n of V, with s + n = x. Since N is nilpotent, n is nilpotent in V, therefore the image of n is 0 in V/(x^p − a)V, for V/(x^p − a)V ≅ k[X]/(X^p − a) is a field. Hence, n = (x^p − a)h(x) for some polynomial h, and we see that n² = 0. Since k is of characteristic p, we have x^p = (s + n)^p = s^p + n^p. On the other hand, since n² = 0 and p ≥ 2, we have n^p = 0, therefore x^p = s^p in V. Since h(x) − h(s) is divisible by x − s = n and (x^p − a)n = 0, we have (x^p − a)h(x) = (s^p − a)h(s). Combining these results we get x = s + n = s + (s^p − a)h(s). This shows that s generates V as a k-algebra and thus the S-stable k-linear subspaces of V are ideals of V, i.e. they are 0, (x^p − a)V and V. We see that (x^p − a)V is an S-invariant subspace of V which has no complement S-invariant subspace, contrary to the assumption that S is semisimple. Thus, there is no decomposition of T as a sum of commuting k-linear operators that are respectively semisimple and nilpotent.
If instead the same construction is performed with the polynomial X^p − a in place of (X^p − a)², the resulting operator T still does not admit a Jordan–Chevalley decomposition by the main theorem, since X^p − a is not separable. However, T is semisimple. The trivial decomposition T = T + 0 hence expresses T as a sum of a semisimple and a nilpotent operator, both of which are polynomials in T.
Elementary proof of existence
This construction is similar to Hensel's lemma in that it uses an algebraic analogue of Taylor's theorem to find an element with a certain algebraic property via a variant of Newton's method. In this form, it is taken from ().
Let T have minimal polynomial p and assume this is a product of separable polynomials. This condition is equivalent to demanding that there is some separable polynomial f such that f(T) is nilpotent, i.e. f(T)^m = 0 for some m. Since f is separable, it is coprime to its derivative f′; so by the Bézout lemma, there are polynomials a and b such that a f + b f′ = 1. This can be used to define a recursion x_{k+1} = x_k − f(x_k) b(x_k), starting with x_0 = T. Letting A be the algebra of operators which are polynomials in T, it can be checked by induction that for all k:
x_k lies in A, because in each step, a polynomial is applied,
f(x_k) lies in f(T)^{2^k} A, because f(x_{k+1}) = f(x_k) − f′(x_k) f(x_k) b(x_k) + f(x_k)² c_k for some c_k in A (by the algebraic version of Taylor's theorem). By definition of x_{k+1} as well as of a and b, this simplifies to f(x_{k+1}) = f(x_k)² (a(x_k) + c_k), which indeed lies in f(T)^{2^{k+1}} A by induction hypothesis,
x_k − T lies in f(T) A, because x_{k+1} − T = −f(x_k) b(x_k) + (x_k − T) and both terms are in f(T) A, the first by the preceding point and the second by induction hypothesis.
Thus, as soon as 2^k ≥ m, f(x_k) = 0 by the second point, since f(T)^m = 0 and hence f(T)^{2^k} A = 0, so the minimal polynomial of x_k will divide f and hence be separable. Moreover, x_k will be a polynomial in T by the first point and T − x_k will be nilpotent by the third point (in fact, (T − x_k)^m = 0). Therefore, T = x_k + (T − x_k) is then the Jordan–Chevalley decomposition of T. Q.E.D.
This proof, besides being completely elementary, has the advantage that it is algorithmic: by the Cayley–Hamilton theorem, p can be taken to be the characteristic polynomial of T, and in many contexts, f can be determined from p (over a perfect field, for instance, f can be taken to be the square-free part of p). Then b can be determined using the extended Euclidean algorithm. The iteration of applying the polynomial X − f(X)b(X) to the matrix can then be performed until either x_{k+1} = x_k (because then all later values will be equal) or 2^k exceeds the dimension of the vector space on which T is defined (where k is the number of iteration steps performed, as above).
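The following is a minimal runnable sketch of this algorithm (an illustration, not taken from the text), using the SymPy computer-algebra library and assuming a matrix with exact rational entries, so that the ground field is perfect and the square-free part of the characteristic polynomial is separable:

import sympy as sp

x = sp.symbols('x')

def mat_poly(poly, M):
    # Evaluate the polynomial poly (in the symbol x) at the square matrix M,
    # using Horner's scheme on the coefficient list.
    R = sp.zeros(M.rows, M.rows)
    for c in sp.Poly(poly, x).all_coeffs():
        R = R * M + c * sp.eye(M.rows)
    return R

def jordan_chevalley(T):
    # Returns (S, N) with T = S + N, S potentially diagonalisable, N nilpotent,
    # both polynomials in T; assumes exact rational entries.
    p = T.charpoly(x).as_expr()             # Cayley-Hamilton: p(T) = 0
    f = sp.quo(p, sp.gcd(p, p.diff(x)), x)  # square-free part of p; separable over Q
    b = sp.invert(f.diff(x), f, x)          # b * f' = 1 modulo f (Bezout)
    S = T
    while not mat_poly(f, S).is_zero_matrix:  # Newton-like step; quadratic convergence
        S = S - mat_poly(f, S) * mat_poly(b, S)
    return S, T - S

# Example: a 2 x 2 Jordan block with eigenvalue 3.
T = sp.Matrix([[3, 1], [0, 3]])
S, N = jordan_chevalley(T)
print(S)  # Matrix([[3, 0], [0, 3]])
print(N)  # Matrix([[0, 1], [0, 0]])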
Proof of existence via Galois theory
This proof, or variants of it, is commonly used to establish the Jordan–Chevalley decomposition. It has the advantage that it is very direct and describes quite precisely how close one can get to a Jordan–Chevalley decomposition: if K is the splitting field of the minimal polynomial of T and G is the group of automorphisms of K that fix the base field k, then the set K^G of elements of K that are fixed by all elements of G is a field with inclusions k ⊆ K^G ⊆ K (see Galois correspondence). Below it is argued that T admits a Jordan–Chevalley decomposition over K^G, but not over any smaller field. This argument does not use Galois theory. However, Galois theory is required to deduce from this the condition for the existence of the Jordan–Chevalley decomposition given above.
Above it was observed that if T has a Jordan normal form (i.e. if the minimal polynomial of T splits), then it has a Jordan–Chevalley decomposition. In this case, one can also see directly that S (and hence also N) is a polynomial in T. Indeed, it suffices to check this for the decomposition of the Jordan matrix J = D + N. This is a technical argument, but does not require any tricks beyond the Chinese remainder theorem.
In the Jordan normal form, we have written J = J_1 ⊕ … ⊕ J_r where r is the number of Jordan blocks and J_i is one Jordan block. Now let p be the characteristic polynomial of T. Because p splits, it can be written as p(X) = ∏_{i=1}^{s} (X − λ_i)^{d_i}, where s is the number of distinct eigenvalues, λ_1, …, λ_s are the distinct eigenvalues, and d_i is the sum of the sizes of the Jordan blocks with eigenvalue λ_i, so ∑_i d_i is the dimension of V. Now, the Chinese remainder theorem applied to the polynomial ring k[X] gives a polynomial q satisfying the conditions
q ≡ λ_i mod (X − λ_i)^{d_i} (for all i), and q ≡ 0 mod X.
(There is a redundancy in the conditions if some λ_i is zero, but that is not an issue; just remove q ≡ 0 mod X from the conditions.) The condition q ≡ λ_i mod (X − λ_i)^{d_i}, when spelled out, means that q(X) − λ_i = (X − λ_i)^{d_i} h_i(X) for some polynomial h_i. Since (T − λ_i)^{d_i} is the zero map on the generalised eigenspace for λ_i, q(T) and S agree on each generalised eigenspace; i.e., q(T) = S. Also then N = T − S = r(T) with r(X) = X − q(X). The condition q ≡ 0 mod X ensures that q and r have no constant terms. This completes the proof of the theorem in case the minimal polynomial of T splits.
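As a small worked illustration (with a hypothetical single Jordan block, not an example from the text): suppose T is one 2 × 2 Jordan block with eigenvalue λ ≠ 0, so p(X) = (X − λ)². Writing q(X) = λ + c(X − λ)² and imposing q(0) = 0 yields c = −1/λ, i.e. q(X) = λ − (X − λ)²/λ. Since (T − λ)² = 0, indeed q(T) = λ·1 = S, and r(X) = X − q(X) gives r(T) = T − λ·1 = N; neither q nor r has a constant term.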
This fact can be used to deduce the Jordan–Chevalley decomposition in the general case. Let K be the splitting field of the minimal polynomial of T, so that T does admit a Jordan normal form over K. Then, by the argument just given, T has a Jordan–Chevalley decomposition T = S + N where S = q(T) for a polynomial q with coefficients from K, S is diagonalisable (over K) and N is nilpotent.
Let σ be a field automorphism of K which fixes k. Then, applying σ to the entries of the matrices,
T = σ(T) = σ(S) + σ(N).
Here σ(S) is a polynomial in σ(T) = T, and so is σ(N). Thus, σ(S) and σ(N) commute. Also, σ(S) is potentially diagonalisable and σ(N) is nilpotent. Thus, by the uniqueness of the Jordan–Chevalley decomposition (over K), σ(S) = S and σ(N) = N. Therefore, by definition, S and N are endomorphisms (represented by matrices) over the fixed field K^G. Finally, since {1, T, T², …} contains a k-basis that spans the space containing S, by the same argument, we also see that q has coefficients in K^G. Q.E.D.
If the minimal polynomial of T is a product of separable polynomials, then the field extension K/k is Galois, meaning that K^G = k.
Relations to the theory of algebras
Separable algebras
The Jordan–Chevalley decomposition is very closely related to the Wedderburn principal theorem in the following formulation: if A is a finite-dimensional associative algebra over the field k with Jacobson radical J and the quotient A/J is separable, then A contains a separable subalgebra B such that A = B ⊕ J as vector spaces.
Usually, the term "separable" in this theorem refers to the general concept of a separable algebra and the theorem might then be established as a corollary of a more general high-powered result. However, if it is instead interpreted in the more basic sense that every element has a separable minimal polynomial, then this statement is essentially equivalent to the Jordan–Chevalley decomposition as described above. This gives a different way to view the decomposition, and some texts take this route for establishing it.
To see how the Jordan–Chevalley decomposition follows from the Wedderburn principal theorem, let V be a finite-dimensional vector space over the field k, T an endomorphism with a minimal polynomial p which is a product of separable polynomials, and A = k[T] the subalgebra generated by T. Note that A is a commutative Artinian ring, so the Jacobson radical J is also the nilradical of A. Moreover, A/J is separable, because there is a separable polynomial f such that p divides f^m for some m. Therefore f(T)^m = 0, so f(T) is nilpotent and lies in J; hence the minimal polynomial of the image of T in A/J divides f, meaning that it must be separable as well (since a divisor of a separable polynomial is separable). There is then the vector-space decomposition A = B ⊕ J with B separable. In particular, the endomorphism T can be written as T = S + N where S ∈ B and N ∈ J. Moreover, both elements are, like any element of A, polynomials in T.
Conversely, the Wedderburn principal theorem in the formulation above is a consequence of the Jordan–Chevalley decomposition. If A has a separable subalgebra B such that A = B ⊕ J, then A/J ≅ B is separable. Conversely, if A/J is separable, then any element of A is a sum of a separable and a nilpotent element. As shown above in #Proof of uniqueness and necessity, this implies that the minimal polynomial of any element of A will be a product of separable polynomials. Let a ∈ A be arbitrary, define the operator T_a : A → A, x ↦ ax, and note that this has the same minimal polynomial as a. So it admits a Jordan–Chevalley decomposition, where both parts are polynomials in T_a, hence of the form T_b, T_c for some b, c ∈ A which have separable and nilpotent minimal polynomials, respectively. Moreover, this decomposition is unique. Thus if B is the subalgebra of all separable elements (that this is a subalgebra can be seen by recalling that b ∈ A is separable if and only if T_b is potentially diagonalisable), then A = B ⊕ J (because J is the ideal of nilpotent elements). The algebra A/J ≅ B is separable and semisimple by assumption.
Over perfect fields, this result simplifies. Indeed, A/J is then always separable in the sense of minimal polynomials: if a ∈ A, then the minimal polynomial p of a is a product of separable polynomials, so there is a separable polynomial f such that p divides f^m for some m. Thus f(a)^m = 0. So in A/J, the minimal polynomial of the image of a divides f and is hence separable. The crucial point in the theorem is then not that A/J is separable (because that condition is vacuous), but that it is semisimple, meaning its radical is trivial.
The same statement is true for Lie algebras, but only in characteristic zero. This is the content of Levi’s theorem. (Note that the notions of semisimple in both results do indeed correspond, because in both cases this is equivalent to being the sum of simple subalgebras or having trivial radical, at least in the finite-dimensional case.)
Preservation under representations
The crucial point in the proof of the Wedderburn principal theorem above is that an element a ∈ A corresponds to a linear operator T_a with the same properties. In the theory of Lie algebras, this corresponds to the adjoint representation ad of a Lie algebra g. The adjoint operator ad(x) of an element x has a Jordan–Chevalley decomposition ad(x) = ad(x)_s + ad(x)_n. Just as in the associative case, this corresponds to a decomposition x = s + n of x itself, but polynomials are not available as a tool. One context in which this does make sense is the restricted case where g is contained in the Lie algebra End(V) of the endomorphisms of a finite-dimensional vector space V over the perfect field k. Indeed, any semisimple Lie algebra can be realised in this way.
If x = s + n is the Jordan decomposition of x ∈ g ⊆ End(V), then ad(x) = ad(s) + ad(n) is the Jordan decomposition of the adjoint endomorphism ad(x) on the vector space End(V). Indeed, first, ad(s) and ad(n) commute, since [ad(s), ad(n)] = ad([s, n]) = 0. Second, in general, for each endomorphism y, we have:
If y^m = 0, then ad(y)^{2m} = 0, since ad(y) is the difference of the left and right multiplications by y, which commute and are each nilpotent.
If y is semisimple, then ad(y) is semisimple, since semisimple is equivalent to potentially diagonalisable over a perfect field (if y is diagonal over the basis e_1, …, e_d with eigenvalues λ_1, …, λ_d, then ad(y) is diagonal over the basis consisting of the maps e_{ij} with e_{ij}(e_j) = e_i and e_{ij}(e_l) = 0 for l ≠ j, on which ad(y) acts with eigenvalue λ_i − λ_j).
Hence, by uniqueness, ad(x)_s = ad(s) and ad(x)_n = ad(n).
The adjoint representation is a very natural and general representation of any Lie algebra. The argument above illustrates (and indeed proves) a general principle which generalises this: if π : g → End(V) is any finite-dimensional representation of a semisimple finite-dimensional Lie algebra g over a perfect field, then π preserves the Jordan decomposition in the following sense: if x = s + n, then π(x)_s = π(s) and π(x)_n = π(n).
Nilpotency criterion
The Jordan decomposition can be used to characterize nilpotency of an endomorphism. Let k be an algebraically closed field of characteristic zero, E = End_Q(k) the endomorphism ring of k over the rational numbers and V a finite-dimensional vector space over k. Given an endomorphism x, let x = s + n be the Jordan decomposition. Then s is diagonalizable; i.e., V = ⊕_i V_i where each V_i is the eigenspace for the eigenvalue λ_i with multiplicity m_i. Then for any φ ∈ E let φ(s) be the endomorphism such that φ(s) : V_i → V_i is the multiplication by φ(λ_i). Chevalley calls φ(s) the replica of s given by φ. (For example, if k = C, then the complex conjugate of an endomorphism is an example of a replica.) Now, the nilpotency criterion states: x is nilpotent (i.e., x = n) if and only if tr(x φ(s)) = 0 for every φ ∈ E.
Proof: Assume tr(x φ(s)) = 0 for every φ ∈ E (the other direction being straightforward). First, since n φ(s) is nilpotent (n commutes with s and hence preserves each V_i, on which φ(s) is a scalar),
0 = tr(x φ(s)) = tr(s φ(s)) = ∑_i m_i λ_i φ(λ_i).
If φ is the complex conjugation, this implies λ_i = 0 for every i. Otherwise, take φ to be a Q-linear functional f : k → Q followed by the inclusion Q → k. Applying f to the above equation, one gets:
∑_i m_i f(λ_i)² = 0
and, since the f(λ_i) are all rational numbers, f(λ_i) = 0 for every i. Varying the linear functionals f then implies λ_i = 0 for every i.
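As a quick sanity check (a hypothetical example, not from the text): for x = diag(1, 0) on V = k², the semisimple part is s = x, and taking φ to be the identity gives tr(x φ(s)) = 1 ≠ 0, consistent with x not being nilpotent; for the nilpotent x with x(e_1) = 0 and x(e_2) = e_1, the semisimple part is s = 0, every replica φ(s) is 0, and the criterion is trivially satisfied.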
A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if g ⊆ gl(V) is a Lie subalgebra over a field k of characteristic zero such that tr(xy) = 0 for each x ∈ [g, g] and y ∈ g, then g is solvable.
Proof: Without loss of generality, assume k is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show that for each x ∈ [g, g], x is a nilpotent endomorphism of V. Write x = ∑_i [x_i, y_i] with x_i, y_i ∈ g. Then, by the nilpotency criterion above, we need to show:
tr(x φ(s)) is zero for every φ ∈ E, where x = s + n is the Jordan decomposition of x. Let z = φ(s). Note we have: tr(x z) = ∑_i tr([x_i, y_i] z) = ∑_i tr(x_i [y_i, z]) and, since ad(s) is the semisimple part of the Jordan decomposition of ad(x), it follows that ad(s) is a polynomial without constant term in ad(x); hence, since ad(x)(g) ⊆ [g, g], also ad(s)(g) ⊆ [g, g], and the same is true with z in place of s. That is, [z, g] ⊆ [g, g], which implies the claim given the assumption.
Real semisimple Lie algebras
In the formulation of Chevalley and Mostow, the additive decomposition states that an element X in a real semisimple Lie algebra g with Iwasawa decomposition g = k ⊕ a ⊕ n can be written as the sum of three commuting elements of the Lie algebra X = S + D + N, with S, D and N conjugate to elements in k, a and n respectively. In general the terms in the Iwasawa decomposition do not commute.
Multiplicative decomposition
If T is an invertible linear operator, it may be more convenient to use a multiplicative Jordan–Chevalley decomposition. This expresses T as a product
T = S · U,
where S is potentially diagonalisable, and U − 1 is nilpotent (one also says that U is unipotent).
The multiplicative version of the decomposition follows from the additive one T = S + N since, as S is invertible (because the sum of an invertible operator and a commuting nilpotent operator is invertible),
T = S(1 + S^{-1}N),
and 1 + S^{-1}N is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)
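Continuing the hypothetical SymPy sketch from the elementary existence proof above (same jordan_chevalley helper), the multiplicative parts can be read off from the additive ones whenever T is invertible:

S, N = jordan_chevalley(T)          # additive decomposition T = S + N
U = sp.eye(T.rows) + S.inv() * N    # unipotent factor, so that T = S * U
print((S * U - T).is_zero_matrix)   # True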
The multiplicative version is closely related to decompositions encountered in a linear algebraic group. For this it is again useful to assume that the underlying field is perfect because then the Jordan–Chevalley decomposition exists for all matrices.
Linear algebraic groups
Let G be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding G ↪ GL_n. Now, to each element g ∈ G, by the multiplicative Jordan decomposition, there is a pair of a semisimple element g_s and a unipotent element g_u, a priori in GL_n, such that g = g_s g_u. But, as it turns out, the elements g_s and g_u can be shown to be in G (i.e., they satisfy the defining equations of G) and to be independent of the embedding into GL_n; i.e., the decomposition is intrinsic.
When G is abelian, G is then the direct product of the closed subgroup of the semisimple elements in G and that of the unipotent elements.
Real semisimple Lie groups
The multiplicative decomposition states that if g is an element of the corresponding connected semisimple Lie group G with corresponding Iwasawa decomposition G = KAN, then g can be written as the product of three commuting elements g = sdu with s, d and u conjugate to elements of K, A and N respectively. In general the terms in the Iwasawa decomposition g = kan do not commute.
References
Linear algebra
Lie algebras
Algebraic groups
Matrix decompositions | Jordan–Chevalley decomposition | [
"Mathematics"
] | 4,897 | [
"Linear algebra",
"Algebra"
] |