| id (int64, 39–79M) | url (string, 32–168 chars) | text (string, 7–145k chars) | source (string, 2–105 chars) | categories (list, 1–6 items) | token_count (int64, 3–32.2k) | subcategories (list, 0–27 items) |
|---|---|---|---|---|---|---|
41,579,730 | https://en.wikipedia.org/wiki/Acifran | Acifran is a niacin receptor agonist.
References
Receptor agonists
Carboxylic acids
Enones | Acifran | [
"Chemistry"
] | 26 | [
"Receptor agonists",
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Neurochemistry",
"Organic compound stubs",
"Organic chemistry stubs"
] |
41,579,742 | https://en.wikipedia.org/wiki/Aconiazide | Aconiazide is an anti-tuberculosis medication. It is a prodrug of isoniazid that was developed and studied for its lower toxicity, but as of 2021 it did not appear to be marketed anywhere in the world.
References
Prodrugs
Carboxylic acids
Hydrazides
4-Pyridyl compounds | Aconiazide | [
"Chemistry"
] | 69 | [
"Chemicals in medicine",
"Carboxylic acids",
"Functional groups",
"Prodrugs"
] |
41,579,756 | https://en.wikipedia.org/wiki/Actaplanin | Actaplanin is a complex of broad-spectrum antibiotics made by Actinoplanes bacteria. Research carried out by a group at Eli Lilly and Co. in 1984 identified several actaplanins using high-performance liquid chromatography. Actaplanins A, B1, B2, B3, C1 and G were shown to be composed of the same peptide core, an amino sugar, and varying amounts of glucose, mannose, and rhamnose.
See also
Ristocetin (contains the same amino sugar as in actaplanin)
References
Antibiotics
Glycopeptide antibiotics
Heterocyclic compounds with 7 or more rings | Actaplanin | [
"Chemistry",
"Biology"
] | 136 | [
"Biotechnology products",
"Glycopeptides",
"Antibiotics",
"Glycopeptide antibiotics",
"Biocides",
"Organic chemistry stubs"
] |
34,853,876 | https://en.wikipedia.org/wiki/Late-life%20mortality%20deceleration | In gerontology, late-life mortality deceleration is the disputed theory that hazard rate increases at a decreasing rate in late life rather than increasing exponentially as in the Gompertz law.
Late-life mortality deceleration is a well-established phenomenon in insects, which often spend much of their lives in a constant hazard rate region, but it is much more controversial in mammals. Rodent studies have reached varying conclusions, with some finding short-term periods of mortality deceleration in mice and others not. Baboon studies show no mortality deceleration.
An analogous deceleration occurs in the failure rate of manufactured products; this analogy is elaborated in the reliability theory of aging and longevity.
Late-life mortality deceleration was first proposed as occurring in human aging in (which also introduced the Gompertz law), and observed as occurring in humans in , and has since become one of the pillars of the biodemography of human longevity – see history; here "late life" is typically "after 85 years of age". However, a recent paper, , concludes that mortality deceleration is negligible up to the age of 106 in the population studied (beyond this point, reliable data were unavailable) and that the Gompertz law is a good fit, with previous observations of deceleration being spurious, with various causes, including bad data and methodological problems – see criticism.
According to a 2018 paper, statistical errors are the main cause of apparent mortality deceleration in humans.
Phenomena
Three related terms are used in this context:
Late-life mortality deceleration
Hazard rate increasing at a decreasing rate (rather than increasing log-linearly as in the Gompertz law).
More strongly, hazard rate eventually stops increasing (or rather, asymptotes towards a limit), and then proceeds at a constant rate (or rather, approaches a constant rate), yielding (slightly sub-) exponential decay, as in radioactive decay.
This is used synonymously with "mortality leveling-off", or rather to refer to the region where hazard rate is approximately constant.
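For reference, these terms can be stated against the Gompertz baseline. The display below uses generic parameters a and b and age x (illustrative notation, not taken from the article):

\mu(x) = a e^{bx}, \qquad a, b > 0 \quad \text{(Gompertz law: the log-hazard is linear in age)}

Deceleration then means that d/dx ln \mu(x) falls below b at advanced ages; a full plateau is the limiting case in which \mu(x) approaches a constant.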
History
A brief historical review is given in ; a detailed survey is given in .
Late-life mortality deceleration was first proposed as occurring in human aging, in , which also introduced the Gompertz law. It was observed and quantified in , and reproduced in many later studies. Greenwood and Irwin wrote:
"the increase of mortality rate with age advances at a slackening rate, that nearly all, perhaps all, methods of graduation of the type of Gompertz's formula over-state senile mortality"
"the possibility that with advancing age the rate of mortality asymptotes to a finite value"
"the limiting values of qx [one-year probability of death] are 0.439 for women and 0.544 for men"
Following these studies, late-life mortality deceleration became one of the pillars of the theory of biodemography of human longevity, and models have incorporated it. It has been criticized at times, and recently has been very seriously criticized; see below.
Criticism
Statistical studies of extreme longevity are difficult for a number of reasons. Firstly, because few people live to very old ages, a very large population is required for such studies, ideally all born and living in similar conditions (same country, same birth year). In small countries, a single birth-year cohort is insufficiently numerous for statistics, and thus multiple years are often used. Secondly, because of the great ages involved, accurate records of persons living over 100 years require records dating from the late 19th or early 20th century, when such record-keeping was often not high-quality; further, there is a tendency to exaggerate one's age, which distorts data. Thirdly, granularity is an issue – ideally exact day of birth and death would be used; using only year of birth and death introduces granularity, which adds bias (as discussed below).
examined single birth-year cohorts from the United States Death Master File, using the method of extinct generations, and found that the effect disappeared if various distorting factors were removed. Specifically, they conclude that mortality deceleration is negligible up to the age of 106 in the population studied (beyond this point, reliable data were unavailable) and that the Gompertz law is a good fit, with previous observations of deceleration being spurious, with various causes, discussed below.
Why was mortality deceleration observed?
Given that mortality deceleration in humans had been observed in various studies, but disappeared on the careful analysis (of single-year cohorts in the US) in , it is natural to ask what causes this discrepancy – why was mortality deceleration observed?
 propose several causes; notably, in each instance where such a factor is corrected or diminished, the fit with the Gompertz law becomes better.
Data quality:
Age exaggeration
There is at times a tendency for old people to exaggerate their ages; this reduces apparent mortality.
Technical:
Use of probability of death (over a discrete interval of time, usually one year), rather than the hazard rate (force of mortality; instantaneous rate)
Because the one-year probability of death is bounded above by 1, at sufficiently old ages it necessarily grows more slowly than the hazard rate and appears to level off, even if the underlying hazard keeps rising.
Hazard rates can be reconstructed from probabilities of death using the Sacher formula, if necessary.
Use of biased estimators of theoretical hazard rate
Since the Gompertz law is log-linear, working on a semi-log scale and taking the central difference quotient of the logarithm yields an unbiased and maximum likelihood estimator (assuming intervals and the change in the hazard rate are small), but other methods, such as the "actuarial estimate", yield bias (especially because an exponential function is strongly curved on a linear scale). A worked form of the unbiased estimator is sketched after this list.
Methodology:
Mixing together several birth cohorts with different mortality levels
Using cross-sectional instead of cohort data
Crude assumptions, such as uniformity of death over a time interval
For example, if hazard rate is high, significantly more deaths should be expected in the first month of the period than the last month (since fewer people survive to the end); assuming that deaths occur evenly throughout the period or in the middle of the period overstates length of life.
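A minimal sketch of the unbiased estimator mentioned under "Use of biased estimators" above, in assumed notation (l(x) denotes the number of survivors to exact age x and \Delta the tabulation interval; neither symbol is taken from the article):

\hat{\mu}(x) \approx \frac{\ln l(x-\Delta) - \ln l(x+\Delta)}{2\Delta}

This is the central difference quotient of -\ln l(x), so on a semi-log plot it reads off the local slope and hence estimates the hazard \mu(x) = -\,d\ln l(x)/dx, avoiding the downward bias that interval-based rates acquire when the hazard grows exponentially.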
Causes
Several causes are proposed for late-life mortality deceleration:
Population heterogeneity (first proposed in ) is by far the most common explanation of mortality deceleration. Even if the individual hazard rate follows a Gompertz law, if the population is heterogeneous, the population hazard rate may exhibit late-life deceleration.
Redundancy exhaustion – in the reliability theory of aging, an organism's redundancy (reserves) are exhausted at extremely old ages, so every random hit results in death. In more detailed accounts, this only holds in relatively simple organisms – in more complex organisms with various subsystems, deceleration is less present or entirely absent.
Less risky behavior and more sheltered environment for older people.
Various evolutionary explanations.
Modeling
Late-life mortality deceleration can be modeled via modifications of the Gompertz law, using various logistic models.
Relevance
The rates of late-life mortality are important for pensions. For example, the mortality rates in late life (after age 85) are of particular interest for the baby boom generation, which will reach this age starting in 2030, and for pension funding calculations.
Late-life mortality rates are of basic importance for understanding aging, both for organisms generally and for humans specifically.
Citations
References
Actuarial science
Senescence | Late-life mortality deceleration | [
"Chemistry",
"Mathematics",
"Biology"
] | 1,598 | [
"Applied mathematics",
"Senescence",
"Actuarial science",
"Cellular processes",
"Metabolism"
] |
34,854,733 | https://en.wikipedia.org/wiki/Phase%20conjugation | Phase conjugation is a physical transformation of a wave field where the resulting field has a reversed propagation direction but keeps its amplitudes and phases.
Description
It is distinguished from Time Reversal Signal Processing by the fact that phase conjugation uses holographic or parametric pumping, whereas time reversal records and re-emits the signal using transducers.
Holographic pumping makes the incident wave interact with a pump wave of the same frequency and records its amplitude-phase distribution. Then, a second pump wave reads the recorded signal and produces the conjugate wave. All those waves have the same frequency.
In parametric pumping, the parameters of the medium are modulated by the pump wave at double frequency. The interaction of this perturbation with the incident wave will produce the conjugate wave.
Both techniques allow an amplification of the conjugate wave compared to the incident wave.
As in time reversal, the wave re-emitted by a phase conjugation mirror automatically compensates for phase distortions and refocuses itself on its initial source, which can be a moving object.
Propagation of a time reversal replica demonstrates a remarkable property of phase-conjugated wave fields.
Phase conjugation of a wave field implies the inversion of both the linear momentum and the angular momentum of the light.
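For a monochromatic scalar wave this can be made concrete; the notation below (complex amplitude \psi and angular frequency \omega) is assumed for illustration rather than taken from the article:

E(\mathbf{r},t) = \operatorname{Re}\{\psi(\mathbf{r}) e^{-i\omega t}\} \;\longmapsto\; E_c(\mathbf{r},t) = \operatorname{Re}\{\psi^{*}(\mathbf{r}) e^{-i\omega t}\} = E(\mathbf{r},-t)

Conjugating the spatial part reverses the sign of every local wavevector, and hence the linear and angular momentum carried by the field, while leaving the amplitude pattern |\psi| unchanged; this is why the conjugate wave retraces the incident wave back through the distorting medium.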
Phase conjugation methods exist in two main domains:
Acoustic phase conjugation
Optical phase conjugation
See also
Time Reversal Signal Processing
References
External links
Wave mechanics | Phase conjugation | [
"Physics"
] | 295 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
34,856,488 | https://en.wikipedia.org/wiki/Boundary%20friction | Boundary friction occurs when a surface is at least partially wet, but not so lubricated that there is no direct friction between two surfaces.
The Effect
When two consistent, unlubricated surfaces slide against each other, there is a specific, predictable amount of friction that occurs. This amount increases as velocity does, but only up to a certain point. That increase generally follows what is known as a Stribeck curve, after Richard Stribeck. On the other hand, if the two surfaces are completely lubricated, there is no direct friction or rubbing at all. In real life, though, there is often a situation where the surfaces are not completely dry, but also not so lubricated that they do not touch.
This "boundary friction" produces various effects, like an increase in lubrication through the generation of shearing forces, or an oscillation effect during motion, as the friction increases and decreases.
For example, one can experience vibration when trying to brake on a partially damp road, or a cold glass that is slowly condensing moisture can be lifted until it spontaneously slides across the surface it is resting on.
References
Friction | Boundary friction | [
"Physics",
"Chemistry"
] | 235 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Classical mechanics stubs",
"Classical mechanics",
"Surface science"
] |
34,858,148 | https://en.wikipedia.org/wiki/ChIP-exo | ChIP-exo is a chromatin immunoprecipitation based method for mapping the locations at which a protein of interest (transcription factor) binds to the genome. It is a modification of the ChIP-seq protocol, improving the resolution of binding sites from hundreds of base pairs to almost one base pair. It employs the use of exonucleases to degrade strands of the protein-bound DNA in the 5'-3' direction to within a small number of nucleotides of the protein binding site. The nucleotides of the exonuclease-treated ends are determined using some combination of DNA sequencing, microarrays, and PCR. These sequences are then mapped to the genome to identify the locations on the genome at which the protein binds.
Theory
Chromatin immunoprecipitation (ChIP) techniques have been in use since 1984 to detect protein-DNA interactions. There have been many variations on ChIP to improve the quality of results. One such improvement, ChIP-on-chip (ChIP-chip), combines ChIP with microarray technology. This technique has limited sensitivity and specificity, especially in vivo where microarrays are constrained by thousands of proteins present in the nuclear compartment, resulting in a high rate of false positives. Next came ChIP-sequencing (ChIP-seq), which combines ChIP with high-throughput sequencing. However, the heterogeneous nature of sheared DNA fragments maps binding sites to within ±300 base pairs, limiting specificity. Secondly, contaminating DNA presents a grave problem since so few genetic loci are cross-linked to the protein of interest, making any non-specific genomic DNA a significant source of background noise.
To address these problems, Rhee and Pugh revised the classic nuclease protection assay to develop ChIP-exo. This new ChIP technique relies on a lambda exonuclease that degrades only, and all, unbound double-stranded DNA in the 5′-3′ direction. Briefly, a protein of interest (engineering one with an epitope tag can be useful for immunoprecipitation) is crosslinked in vivo to its natural binding locations across a genome using formaldehyde.
Cells are then collected, broken open, and the chromatin sheared and solubilized by sonication. An antibody is then used to immunoprecipitate the protein of interest, along with the crosslinked DNA. DNA PCR adaptors are then ligated to the ends, which serve as a priming point for second strand DNA synthesis after the exonuclease digestion. Lambda exonuclease then digests double DNA strands from the 5′ end until digestion is blocked at the border of the protein-DNA covalent interaction. Most contaminating DNA is degraded by the addition of a second single-strand specific exonuclease. After the cross-linking is reversed, the primers to the PCR adaptors are extended to form double stranded DNA, and a second adaptor is ligated to 5′ ends to demarcate the precise location of exonuclease digestion cessation. The library is then amplified by PCR, and the products are identified by high throughput sequencing. This method allows for resolution of up to a single base pair for any protein binding site within any genome, which is a much higher resolution than either ChIP-chip or ChIP-seq.
Advantages
ChIP-exo has been shown to give up to single base pair resolution in identifying protein binding locations. This is in contrast to ChIP-seq, which can locate a protein's binding site only to within ±300 base pairs.
Contamination of non-protein-bound DNA fragments can result in a high rate of false positives and negatives in ChIP experiments. The addition of exonucleases to the process not only improves resolution of binding-site calling, but removes contaminating DNA from the solution before sequencing.
Proteins that are inefficiently bound to a nucleotide fragment are more likely to be detected by ChIP-exo. This has allowed, for example, the recognition of more CTCF transcription factor binding sites than previously discovered.
Due to the higher resolution and reduced background, less depth of sequencing coverage is needed when using ChIP-exo.
Limitations
If a protein-DNA complex has multiple locations of cross-linking within a single binding event, then it can appear as though there are multiple distinct binding events. This likely results from these proteins being denatured and cross-linking at one of the available binding sites within the same event. The exonuclease would then stop at one of the bound sites, depending on which site the protein is cross-linked to.
As with any ChIP-based method, a suitable antibody for the protein of interest needs to be available in order to use this technique.
Applications
Rhee and Pugh introduce ChIP-exo by performing analyses on a small collection of transcription factors: Reb1, Gal4, Phd1, Rap1 in yeast and CTCF in human. Reb1 sites were often found in clusters and these clusters had ~10-fold higher occupancy than expected. Secondary sites in clusters were found ~40 bp from a primary binding site. Binding motifs of Gal4 showed a strong preference for three of the four nucleotides, suggesting a negative interaction between Gal4 and the excluded nucleotide. Phd1 recognizes three different motifs which explains previous reports of the ambiguity of Phd1's binding motif. Rap1 was found to recognize four motifs.
Ribosomal protein genes bound by this protein had a tendency to use a particular motif with a stronger consensus sequence. Other genes often used clusters of weaker consensus motifs, possibly to achieve a similar occupancy. Binding motifs of CTCF employed four "modules". Half of the bound CTCF sites used modules 1 and 2, while the rest used some combination of the four. It is believed that CTCF uses its zinc fingers to recognize different combinations of these modules.
Rhee and Pugh analyzed pre-initiation complex (PIC) structure and organization in Saccharomyces genomes. Using ChIP-exo, they were able to, among other discoveries, precisely identify TATA-like features in promoters reported to be TATA-less.
See also
Chromatin immunoprecipitation
ChIP-seq
ChIP-on-chip
Protein-DNA interaction
References
External links
DNA-protein interactions in high definition
Resolving transcription factor binding
High-resolution chromatin immunoprecipitation
Important Gene-Regulation Proteins Pinpointed by New Method
CexoR: An R/Bioconductor Package to Uncover High-resolution Protein-DNA Interactions in ChIP-exo Replicates
Peconic Genomics
Protein methods
Molecular biology
Molecular biology techniques
Genomics
DNA sequencing
Bioinformatics | ChIP-exo | [
"Chemistry",
"Engineering",
"Biology"
] | 1,429 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics",
"Molecular biology techniques",
"DNA sequencing",
"Molecular biology",
"Biochemistry"
] |
34,861,007 | https://en.wikipedia.org/wiki/Human%20Proteome%20Project | The Human Proteome Project (HPP) is a collaborative effort coordinated by the Human Proteome Organization. Its stated goal is to experimentally observe all of the proteins produced by the sequences translated from the human genome.
History
The Human Proteome Organization has served as a coordinating body for many long-running proteomics research projects associated with specific human tissues of clinical interest, such as blood plasma, liver, brain and urine. It has also been responsible for projects associated with specific technology and standards necessary for the large scale study of proteins.
The structure and goals of a larger project that would parallel the Human Genome Project have been debated in the scientific literature. The result of this debate, and of a series of meetings at the World Congresses of the Human Proteome Organization in 2009, 2010 and 2011, has been the decision to define the Human Proteome Project as being composed of two sub-projects, C-HPP and B/D-HPP. The C-HPP will be organized into 25 groups, one per human chromosome. The B/D-HPP will be organized into groups by the biological and disease relevance of proteins.
Projects and groups
The current set of working groups are listed below, in order of the chromosome to be studied.
Computational resources
Data reduction, analysis and validation of MS/MS based proteomics results is being provided by Eric Deutsch at the Institute for Systems Biology, Seattle, USA (PeptideAtlas). Data handling associated with antibody methods is being coordinated by Kalle von Feilitzen, Stockholm, Sweden (Human Protein Atlas). Overall integration and reporting informatics are the responsibility of Lydie Lane at SIB, Geneva, Switzerland (NeXtProt). All data generated as part of HPP contributions are deposited to one of the ProteomeXchange repositories.
Current status
Updates on the Human Proteome Project are regularly published, e.g. in the Journal of Proteome Research (2014). Metrics for the level of confidence associated with protein observations have been published, as has a "MissingProteinPedia".
A comparison of nine major annotation portals gave a spread of human protein counts from 18,891 to 21,819 (as of 2017). The 2021 metrics of the HPP show that protein expression has now been credibly detected for 92.8% of the predicted proteins coded in the human genome.
See also
BioPlex
Human Protein Atlas - Protein databases
NeXtProt
PeptideAtlas
Human Proteome Folding Project
References
External links
HPP project page (www.hupo.org)
HPP web site (www.thehpp.org)
Chromosome-centric HPP web site (www.c-hpp.org)
BD HPP web site (www.hupo.org/B/D-HPP)
Proteomics
Human genome projects
Protein databases | Human Proteome Project | [
"Biology"
] | 595 | [
"Human genome projects",
"Genome projects"
] |
34,864,314 | https://en.wikipedia.org/wiki/Roderich%20Moessner | Roderich Moessner is a theoretical physicist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany. His research interests are in condensed matter and materials physics, especially concerning new and topological forms of order, as well as the study of classical and quantum many-body dynamics in and out of equilibrium.
Life and career
Moessner studied physics at Oxford University, where he was a student of Neil Tanner at Hertford College. At Oxford, he also received his doctorate in theoretical physics under the supervision of John Chalker. After three years as a postdoc at Princeton University between 1998 and 2001, he joined the Centre National de la Recherche Scientifique in France, where he did research at the Laboratoire de Physique Théorique at the École normale supérieure, Paris, until 2006. After a faculty appointment at Somerville College and in Theoretical Physics at Oxford University, he joined the Max Planck Institute for the Physics of Complex Systems in Dresden as director of the condensed matter division and Scientific Member of the Max Planck Society. Since 2008, he has also been an honorary professor at TU Dresden.
Research and publications
Moessner's research interests range widely in theoretical condensed matter physics. With Claudio Castelnovo and Shivaji L. Sondhi, Roderich Moessner is known for the theoretical proposal of realizing magnetic monopoles as emergent quasiparticles within a condensed matter system known as spin ice. Other notable results include the theoretical prediction of charge-density wave phases in quantum Hall physics; the identification and theory of a classical spin liquid on the pyrochlore lattice (both with J. T. Chalker); the theoretical discovery of the resonating valence bond liquid phase in the triangular lattice quantum dimer model (with S. L. Sondhi); and the proposal of a new type of spatiotemporal order, the π-spin glass, now known as the discrete time crystal (with V. Khemani, A. Lazarides and S. L. Sondhi), with experimental follow-up work on Google's Sycamore quantum computing platform. He has engaged extensively in experimental collaborations, e.g., on the dynamics of quantum spin liquids or the observation of magnetic monopoles in the material Dy2Ti2O7.
An overview of Roderich Moessner's research articles has been published on his webpage. Most are freely available in preprint form on the arXiv.
Furthermore, together with Joel E. Moore of the University of California, Berkeley, Moessner has published a book on "Topological Phases of Matter", a textbook for use by advanced undergraduates, graduate students, and active researchers. He has also co-edited the lecture notes on topological condensed matter physics of the 2014 Les Houches summer school.
Scholarships, prizes, and distinctions
Honorary Fellow, Hertford College, Oxford, 2019
Physical Review E 25th Anniversary Milestone article of 2014 (declared in 2018).
Gottfried Wilhelm Leibniz Prize 2013 of the German Research Foundation (DFG), jointly with Achim Rosch, for their contributions to the physics of strongly interacting quantum systems
European Physical Society Condensed Matter Division Europhysics Prize 2012 shared with his theory collaborators Shivaji L. Sondhi, Claudio Castelnovo, as well as experimentalists Steven T. Bramwell, Santiago Grigera and Alan Tennant for the prediction and experimental observation of magnetic monopoles in spin ice
Fellow of the American Physical Society
Domus Senior Scholar, Merton College, Oxford University
Scott Prize of Oxford University, 1994, for the best final examination in physics at the University of Oxford.
Scholar of the German National Scholarship Foundation (Studienstiftung)
Community service
Member of editorial board, Physik Journal
Member of the executive board of the German Physical Society (DPG)
Member of the Board council of the German Physical Society
Divisional Associate Editor of the Physical Review Letters
Board member of cluster of excellence ct.qmat
Co-spokesperson of Helmholtz Virtual Institute "New states of matter and their excitations"
Popular culture
Magnetic monopoles in spin ice featured in an episode of The Big Bang Theory not long after the theoretical proposal, while time crystals appeared in an episode of Star Trek: Discovery.
References
External links
CV of Roderich Moessner, MPI-PKS website
Les Houches School of Physics homepage
Home | ct.qmat
Welcome to the Max Planck Institute for the Physics of Complex Systems
Living people
Condensed matter physicists
Year of birth missing (living people)
Alumni of the University of Oxford
Academics of the University of Oxford
Max Planck Institute directors
Princeton University people
French National Centre for Scientific Research scientists
Fellows of the American Physical Society | Roderich Moessner | [
"Physics",
"Materials_science"
] | 960 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
34,865,455 | https://en.wikipedia.org/wiki/Coxeter%20complex | In mathematics, the Coxeter complex, named after H. S. M. Coxeter, is a geometrical structure (a simplicial complex) associated to a Coxeter group. Coxeter complexes are the basic objects that allow the construction of buildings; they form the apartments of a building.
Construction
The canonical linear representation
The first ingredient in the construction of the Coxeter complex associated to a Coxeter system (W, S) is a certain representation of W, called the canonical representation of W.
Let (W, S) be a Coxeter system with Coxeter matrix M = (m(s, t)). The canonical representation is given by a vector space V with a basis of formal symbols (e_s) for s in S, which is equipped with the symmetric bilinear form B(e_s, e_t) = −cos(π/m(s, t)). In particular, B(e_s, e_s) = 1. The action of W on V is then given by s(v) = v − 2B(e_s, v) e_s.
This representation has several foundational properties in the theory of Coxeter groups; for instance, B is positive definite if and only if W is finite. It is a faithful representation of W.
Chambers and the Tits cone
This representation describes W as a reflection group, with the caveat that B might not be positive definite. It becomes important then to distinguish the representation V from its dual V*. The vectors e_s lie in V and have corresponding dual vectors e_s∨ in V*, defined using the natural pairing (written with angled brackets) between V* and V.
Now W acts on the dual space V*, and the action is given by (w·f)(v) = f(w^{-1} v)
for f in V* and any v in V. Then s acts as a reflection in the hyperplane H_s = { f in V* : f(e_s) = 0 }. One has the fundamental chamber C = { f in V* : f(e_s) > 0 for all s in S }; its faces are the so-called walls, the hyperplanes H_s. The other chambers can be obtained from C by translation: they are the w(C) for w in W.
The Tits cone is X, the union of the translates w(closure of C) over all w in W. This need not be the whole of V*. Of major importance is the fact that X is convex. The closure of C is a fundamental domain for the action of W on X.
The Coxeter complex
The Coxeter complex Σ(W, S) of W with respect to V* is
Σ(W, S) = (X \ {0}) / R+, where R+ is the multiplicative group of positive reals.
Examples
Finite dihedral groups
The dihedral groups Dih_n (of order 2n) are Coxeter groups, of corresponding type I2(n). These have the presentation ⟨ s, t | s^2 = t^2 = (st)^n = 1 ⟩.
The canonical linear representation of I2(n) is the usual reflection representation of the dihedral group, acting on a regular n-gon in the plane (so V is two-dimensional in this case). For instance, in the case n = 3 we get the Coxeter group of type A2, acting on an equilateral triangle in the plane. Each reflection s has an associated hyperplane H_s in the dual vector space (which can be canonically identified with the vector space itself using the bilinear form B, which is an inner product in this case as remarked above); these are the walls. They cut out 2n chambers.
The Coxeter complex is then the corresponding 2n-gon. This is a simplicial complex of dimension 1, and it can be colored by cotype.
The infinite dihedral group
Another motivating example is the infinite dihedral group Dih∞. This can be seen as the group of symmetries of the real line that preserves the set of points with integer coordinates; it is generated by the reflections of the line in two suitable points. This group has the Coxeter presentation ⟨ s, t | s^2 = t^2 = 1 ⟩.
In this case, it is no longer possible to identify V with its dual space V*, as the bilinear form B is degenerate. It is then better to work solely with V*, which is where the hyperplanes are defined. This then gives the following picture:
In this case, the Tits cone is not the whole plane, but only the upper half plane. Taking the quotient by the positive reals then yields another copy of the real line, with marked points at the integers. This is the Coxeter complex of the infinite dihedral group.
Alternative construction of the Coxeter complex
Another description of the Coxeter complex uses standard cosets of the Coxeter group W. A standard coset is a coset of the form wW_T, where W_T = ⟨T⟩ for some proper subset T of S.
The Coxeter complex is then the poset of standard cosets, ordered by reverse inclusion. This has a canonical structure of a simplicial complex, as do all posets that satisfy:
Any two elements have a greatest lower bound.
The poset of elements less than or equal to any given element is isomorphic to the poset of subsets of {1, 2, ..., n} for some integer n.
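As a small worked example (not taken from the article): let (W, S) be the Coxeter system of type A2, so W = ⟨ s, t | s^2 = t^2 = (st)^3 = 1 ⟩ has order 6 and S = {s, t}. The proper subsets of S are ∅, {s} and {t}; the standard cosets w⟨s⟩ and w⟨t⟩ give 3 + 3 = 6 vertices, and the six singleton cosets w⟨∅⟩ = {w} give 6 edges. Ordered by reverse inclusion, these assemble into a hexagon, matching the 2n-gon description of the dihedral case above (here n = 3).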
Properties
The Coxeter complex associated to (W, S) has dimension |S| − 1. It is homeomorphic to an (|S| − 1)-sphere if W is finite and is contractible if W is infinite.
Every apartment of a spherical Tits building is a Coxeter complex.
See also
Buildings
Weyl group
Root system
References
Sources
Peter Abramenko and Kenneth S. Brown, Buildings, Theory and Applications. Springer, 2008.
Group theory
Algebraic combinatorics
Geometric group theory
Mathematical structures | Coxeter complex | [
"Physics",
"Mathematics"
] | 920 | [
"Mathematical structures",
"Geometric group theory",
"Group actions",
"Mathematical objects",
"Combinatorics",
"Group theory",
"Fields of abstract algebra",
"Algebraic combinatorics",
"Symmetry"
] |
26,086,853 | https://en.wikipedia.org/wiki/P-adic%20Hodge%20theory | In mathematics, p-adic Hodge theory is a theory that provides a way to classify and study p-adic Galois representations of characteristic 0 local fields with residual characteristic p (such as Qp). The theory has its beginnings in Jean-Pierre Serre and John Tate's study of Tate modules of abelian varieties and the notion of Hodge–Tate representation. Hodge–Tate representations are related to certain decompositions of p-adic cohomology theories analogous to the Hodge decomposition, hence the name p-adic Hodge theory. Further developments were inspired by properties of p-adic Galois representations arising from the étale cohomology of varieties. Jean-Marc Fontaine introduced many of the basic concepts of the field.
General classification of p-adic representations
Let K be a local field with residue field k of characteristic p. In this article, a p-adic representation of K (or of GK, the absolute Galois group of K) will be a continuous representation ρ : GK → GL(V), where V is a finite-dimensional vector space over Qp. The collection of all p-adic representations of K forms an abelian category denoted Rep_{Qp}(K) in this article. p-adic Hodge theory provides subcollections of p-adic representations based on how nice they are, and also provides faithful functors to categories of linear algebraic objects that are easier to study. The basic classification is as follows:
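In notation standard in the subject (the subscripts below match the category names listed in the next sentence), this chain is usually written

\mathrm{Rep}_{\mathrm{cris}}(K) \subsetneq \mathrm{Rep}_{\mathrm{st}}(K) \subsetneq \mathrm{Rep}_{\mathrm{dR}}(K) \subsetneq \mathrm{Rep}_{\mathrm{HT}}(K) \subsetneq \mathrm{Rep}_{\mathbf{Q}_p}(K),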
where each collection is a full subcategory properly contained in the next. In order, these are the categories of crystalline representations, semistable representations, de Rham representations, Hodge–Tate representations, and all p-adic representations. In addition, two other categories of representations can be introduced, the potentially crystalline representations Rep_pcris(K) and the potentially semistable representations Rep_pst(K). The latter strictly contains the former, which in turn generally strictly contains Rep_cris(K); additionally, Rep_pst(K) generally strictly contains Rep_st(K), and is contained in Rep_dR(K) (with equality when the residue field of K is finite, a statement called the p-adic monodromy theorem).
Period rings and comparison isomorphisms in arithmetic geometry
The general strategy of p-adic Hodge theory, introduced by Fontaine, is to construct certain so-called period rings such as BdR, Bst, Bcris, and BHT which have both an action by GK and some linear algebraic structure and to consider so-called Dieudonné modules
(where B is a period ring, and V is a p-adic representation) which no longer have a GK-action, but are endowed with linear algebraic structures inherited from the ring B. In particular, they are vector spaces over the fixed field BGK (the subring of GK-invariants of B). This construction fits into the formalism of B-admissible representations introduced by Fontaine. For a period ring like the aforementioned ones B∗ (for ∗ = HT, dR, st, cris), the category of p-adic representations Rep∗(K) mentioned above is the category of B∗-admissible ones, i.e. those p-adic representations V for which
or, equivalently, the comparison morphism
is an isomorphism.
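Spelled out in the usual symbols (a sketch; D_B for the Dieudonné module and \alpha_V for the comparison map are standard notation assumed here, not quoted from the article):

D_B(V) = (B \otimes_{\mathbf{Q}_p} V)^{G_K}, \qquad \dim_{B^{G_K}} D_B(V) = \dim_{\mathbf{Q}_p} V \quad (B\text{-admissibility}),

\alpha_V : B \otimes_{B^{G_K}} D_B(V) \longrightarrow B \otimes_{\mathbf{Q}_p} V \quad \text{(the comparison morphism, required to be an isomorphism).}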
This formalism (and the name period ring) grew out of a few results and conjectures regarding comparison isomorphisms in arithmetic and complex geometry:
If X is a proper smooth scheme over C, there is a classical comparison isomorphism between the algebraic de Rham cohomology of X over C and the singular cohomology of X(C)
This isomorphism can be obtained by considering a pairing obtained by integrating differential forms in the algebraic de Rham cohomology over cycles in the singular cohomology. The result of such an integration is called a period and is generally a complex number. This explains why the singular cohomology must be tensored to C, and from this point of view, C can be said to contain all the periods necessary to compare algebraic de Rham cohomology with singular cohomology, and could hence be called a period ring in this situation.
In the mid sixties, Tate conjectured that a similar isomorphism should hold for proper smooth schemes X over K between algebraic de Rham cohomology and p-adic étale cohomology (the Hodge–Tate conjecture, also called CHT). Specifically, let CK be the completion of an algebraic closure of K, let CK(i) denote CK where the action of GK is via g·z = χ(g)ig·z (where χ is the p-adic cyclotomic character, and i is an integer), and let . Then there is a functorial isomorphism
of graded vector spaces with GK-action (the de Rham cohomology is equipped with the Hodge filtration, and is its associated graded). This conjecture was proved by Gerd Faltings in the late eighties after partial results by several other mathematicians (including Tate himself).
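In the notation just set up, the Hodge–Tate comparison for a smooth proper scheme X over K is usually displayed as follows (standard form, with n the cohomological degree; a sketch rather than a quotation of the original statement):

H^n_{\mathrm{\acute{e}t}}(X_{\bar{K}}, \mathbf{Q}_p) \otimes_{\mathbf{Q}_p} C_K \cong \bigoplus_i H^{n-i}(X, \Omega^i_{X/K}) \otimes_K C_K(-i),

compatibly with the G_K-action on both sides.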
For an abelian variety X with good reduction over a p-adic field K, Alexander Grothendieck reformulated a theorem of Tate's to say that the crystalline cohomology H1(X/W(k)) ⊗ Qp of the special fiber (with the Frobenius endomorphism on this group and the Hodge filtration on this group tensored with K) and the p-adic étale cohomology H1(X,Qp) (with the action of the Galois group of K) contained the same information. Both are equivalent to the p-divisible group associated to X, up to isogeny. Grothendieck conjectured that there should be a way to go directly from p-adic étale cohomology to crystalline cohomology (and back), for all varieties with good reduction over p-adic fields. This suggested relation became known as the mysterious functor.
To improve the Hodge–Tate conjecture to one involving the de Rham cohomology (not just its associated graded), Fontaine constructed a filtered ring BdR whose associated graded is BHT and conjectured the following (called CdR) for any smooth proper scheme X over K
as filtered vector spaces with GK-action. In this way, BdR could be said to contain all (p-adic) periods required to compare algebraic de Rham cohomology with p-adic étale cohomology, just as the complex numbers above were used with the comparison with singular cohomology. This is where BdR obtains its name of ring of p-adic periods.
Similarly, to formulate a conjecture explaining Grothendieck's mysterious functor, Fontaine introduced a ring Bcris with GK-action, a "Frobenius" φ, and a filtration after extending scalars from K0 to K. He conjectured the following (called Ccris) for any smooth proper scheme X over K with good reduction
as vector spaces with φ-action, GK-action, and filtration after extending scalars to K (here is given its structure as a K0-vector space with φ-action given by its comparison with crystalline cohomology). Both the CdR and the Ccris conjectures were proved by Faltings.
Upon comparing these two conjectures with the notion of B∗-admissible representations above, it is seen that if X is a proper smooth scheme over K (with good reduction) and V is the p-adic Galois representation obtained as its ith p-adic étale cohomology group, then
In other words, the Dieudonné modules should be thought of as giving the other cohomologies related to V.
In the late eighties, Fontaine and Uwe Jannsen formulated another comparison isomorphism conjecture, Cst, this time allowing X to have semi-stable reduction. Fontaine constructed a ring Bst with GK-action, a "Frobenius" φ, a filtration after extending scalars from K0 to K (and fixing an extension of the p-adic logarithm), and a "monodromy operator" N. When X has semi-stable reduction, the de Rham cohomology can be equipped with the φ-action and a monodromy operator by its comparison with the log-crystalline cohomology first introduced by Osamu Hyodo. The conjecture then states that
as vector spaces with φ-action, GK-action, filtration after extending scalars to K, and monodromy operator N. This conjecture was proved in the late nineties by Takeshi Tsuji.
Notes
See also
Hodge theory
Arakelov theory
Hodge-Arakelov theory
p-adic Teichmüller theory
References
Primary sources
Secondary sources
Algebraic number theory
Galois theory
Representation theory of groups
Hodge theory
Arithmetic geometry | P-adic Hodge theory | [
"Mathematics",
"Engineering"
] | 1,793 | [
"Tensors",
"Hodge theory",
"Arithmetic geometry",
"Differential forms",
"Algebraic number theory",
"Number theory"
] |
26,090,437 | https://en.wikipedia.org/wiki/C9H11N5O3 | {{DISPLAYTITLE:C9H11N5O3}}
The molecular formula C9H11N5O3 (molar mass: 237.21 g/mol, exact mass: 237.0862 u) may refer to:
Biopterin
Dyspropterin
Sepiapterin
Molecular formulas | C9H11N5O3 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,955,101 | https://en.wikipedia.org/wiki/Bloch%E2%80%93Gr%C3%BCneisen%20temperature | For typical three-dimensional metals, the temperature-dependence of the electrical resistivity ρ(T) due to the scattering of electrons by acoustic phonons changes from a high-temperature regime in which ρ ∝ T to a low-temperature regime in which ρ ∝ T5 at a characteristic temperature known as the Debye temperature. For low density electron systems, however, the Fermi surface can be substantially smaller than the size of the Brillouin zone, and only a small fraction of acoustic phonons can scatter off electrons. This results in a new characteristic temperature known as the Bloch–Grüneisen temperature that is lower than the Debye temperature. The Bloch–Grüneisen temperature is defined as 2ħvskF/kB, where ħ is the Planck constant, vs is the velocity of sound, ħkF is the Fermi momentum, and kB is the Boltzmann constant.
When the temperature is lower than the Bloch–Grüneisen temperature, the most energetic thermal phonons have a typical momentum of kBT/vs which is smaller than ħkF, the momentum of the conducting electrons at the Fermi surface. This means that the electrons will only scatter in small angles when they absorb or emit a phonon. In contrast when the temperature is higher than the Bloch–Grüneisen temperature, there are thermal phonons of all momenta and in this case electrons will also experience large angle scattering events when they absorb or emit a phonon. In many cases, the Bloch–Grüneisen temperature is approximately equal to the Debye temperature (usually written ), which is used in modeling specific heat capacity. However, in particular circumstances these temperatures can be quite different.
The theory was initially put forward by Felix Bloch and Eduard Grüneisen. The Bloch–Grüneisen temperature has been observed experimentally in a two-dimensional electron gas and in graphene.
Mathematically, the Bloch–Grüneisen model produces a resistivity given by:
ρ(T) = ρ(0) + A (T/Θ_R)^n ∫_0^{Θ_R/T} x^n / ((e^x − 1)(1 − e^{−x})) dx .
Here, ρ(0) is the residual resistivity, A is a material-dependent prefactor, and Θ_R is a characteristic temperature (typically matching well with the Debye temperature). Under Bloch's original assumptions for simple metals, n = 5. For T ≪ Θ_R, this can be approximated as a T^5 dependence. In contrast, the so-called Bloch–Wilson limit, where n = 3, works better for s-d inter-band scattering, such as with transition metals. The second limit gives ρ ∝ T^3 at low temperatures. In practice, which model is more applicable depends on the particular material.
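A minimal numerical sketch of the formula above (the function name, the prefactor A, and the temperature values are illustrative, not from the article); it reproduces both limits, ρ ∝ T^n for T ≪ Θ_R and ρ ∝ T for T ≳ Θ_R:

```python
import numpy as np
from scipy.integrate import quad

def bloch_gruneisen(T, theta_R, n=5, A=1.0):
    """Phonon-limited resistivity rho(T) - rho(0), arbitrary units."""
    integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    integral, _ = quad(integrand, 0.0, theta_R / T)
    return A * (T / theta_R)**n * integral

theta = 300.0  # illustrative characteristic temperature (kelvin)
for T in (3.0, 6.0, 300.0, 600.0):
    print(f"T = {T:5.1f} K   rho = {bloch_gruneisen(T, theta):.3e}")
# Doubling T from 3 K to 6 K raises rho by ~2**5 = 32 (T^5 regime);
# doubling it from 300 K to 600 K raises rho by roughly 2 (linear-in-T regime).
```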
References
Scattering
Mesoscopic physics
Nanoelectronics
Electrical resistance and conductance
Temperature | Bloch–Grüneisen temperature | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 529 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Temperature",
"Physical quantities",
"Nuclear physics",
"SI base quantities",
"Intensive quantities",
"Quantity",
"Quantum mechanics",
"Scattering",
"Condensed matter physics",
"Thermodynamics",
"Particle physics",
"Nanoelectronics... |
35,962,051 | https://en.wikipedia.org/wiki/Ryder%20Scott | Ryder Scott Company is a petroleum consulting firm based in Houston, Texas, United States. The firm independently estimates oil and gas reserves, future production profiles and cashflow economics, including discounted net present values. It assesses oil reserves and evaluates oil and gas properties.
History
The company was founded in Bradford, Pennsylvania on July 1, 1937, by Harry M. Ryder, a prominent petroleum engineer who died in 1954, and David Scott, Jr.
In 1967, Ryder Scott acquired Robert W. Harrison & Co., moved to Houston, and transitioned from waterflood design to evaluation engineering, Ryder Scott's core business.
Software
Reservoir Solutions Freeware
SOS Downloads
Well Collator
See also
GaffneyCline
References
External links
Companies based in Houston
Petroleum industry
Consulting firms of the United States
Energy consultancies | Ryder Scott | [
"Chemistry"
] | 166 | [
"Petroleum industry",
"Petroleum",
"Chemical process engineering"
] |
35,962,306 | https://en.wikipedia.org/wiki/Pekka%20Pyykk%C3%B6 | Veli Pekka Pyykkö (born 12 October 1941) is a Finnish academic. He was professor of Chemistry at the University of Helsinki. From 2009–2012, he was the chairman of the International Academy of Quantum Molecular Science. He is known for his extension to the periodic table of elements, known as the Pyykkö model.
Pyykkö has also studied the relativistic effects present in heavy atoms and their effects in NMR.
Pyykkö model
Beyond the 118 elements now known, Pekka Pyykkö predicts that the orbital shells will fill up in this order:
8s,
5g,
the first two spaces of 8p,
6f,
7d,
9s,
the first two spaces of 9p,
the rest of 8p.
He also suggests that period 8 be split into three parts:
8a, containing 8s,
8b, containing the first two elements of 8p,
8c, containing 7d and the rest of 8p.
The compact version:
Pekka Pyykkö correctly predicted the existence of chemical bonds between gold and the noble gas xenon, which is usually inert; this bond is known to occur in the cationic complex tetraxenonogold(II) ([AuXe4]2+). He also correctly predicted the existence of gold–carbon triple bonds.
References
1941 births
Living people
People involved with the periodic table
Academic staff of the University of Helsinki
Finnish chemists
Schrödinger Medal recipients
Computational chemists | Pekka Pyykkö | [
"Chemistry"
] | 308 | [
"Periodic table",
"People involved with the periodic table",
"Computational chemists",
"Computational chemistry",
"Theoretical chemists"
] |
27,965,042 | https://en.wikipedia.org/wiki/Biologics%20license%20application | A biologics license application (BLA) is defined by the U.S. Food and Drug Administration (FDA) as follows:
The biologics license application is a request for permission to introduce, or deliver for introduction, a biologic product into interstate commerce (21 CFR 601.2). The BLA is regulated under 21 CFR 600 – 680. A BLA is submitted by any legal person or entity who is engaged in manufacture or an applicant for a license who takes responsibility for compliance with product and establishment standards. Form 356h specifies the requirements for a BLA. This includes:
Applicant information
Product/manufacturing information
Pre-clinical studies
Clinical studies
Labeling
Some biological products are regulated by the Center for Drug Evaluation and Research (CDER) while others are regulated by the Center for Biologics Evaluation and Research (CBER).
A BLA is submitted after the investigational new drug (IND) phase, once the clinical investigations are completed. If the Form 356h is missing information, the FDA will reply within 74 days.
A BLA asserts that the product is "safe, pure, and potent", the manufacturing facilities are inspectable, and each package of the product bears the license number. Statutory standards for BLA approval are largely the same as those for New Drug Application approval. According to , FDA interprets "potency" to include effectiveness of the biologic.
After approval, annual reports, reports on adverse events, manufacturing changes, and labeling changes must be submitted.
See also
New drug application
Investigational new drug
References
Food and Drug Administration
Intellectual property law
Drug development
American medical research
Drug safety
Experimental drugs
United States federal health legislation
Biotechnology products | Biologics license application | [
"Chemistry",
"Biology"
] | 340 | [
"Biotechnology products",
"Drug safety"
] |
27,967,673 | https://en.wikipedia.org/wiki/Nuclear%20resonance%20fluorescence | Nuclear resonance fluorescence (NRF) is a nuclear process in which a nucleus absorbs and emits high-energy photons called gamma rays. NRF interactions typically take place above 1 MeV, and most NRF experiments target heavy nuclei such as uranium and thorium.
This process is used for scanning cargo for contraband. It is far more effective than x-rays alone, because x-rays can only reveal the shape of the item in question. With nuclear resonance fluorescence it is possible to determine what the material actually is and thus distinguish between salt and cocaine without even opening the container. (from National Geographic Magazine, February 2018, article: They Are Watching Us, by Robert Draper)
Mode of interaction
NRF reactions are the result of nuclear absorption and subsequent emission of high-energy photons (gamma rays). As a gamma ray strikes the nucleus, the nucleus becomes excited (that is, the nuclear system as a quantum mechanical ensemble is put into a state with a higher energy). Much like electronic excitation, the nucleus will decay toward its ground state, releasing a high-energy photon at a number of possible, discrete energies. Thus, NRF can be quantified using spectroscopy. Nuclei can be identified by the distinct pattern of NRF emission peaks, although NRF analysis is much less straightforward than typical electronic emissions.
As the energy of incident photons increases, the average spacing between nuclear energy levels decreases. For sufficiently energetic nuclei (i.e. incident photons of over ~1 MeV), the mean spacing between energy levels may be lower than the mean width of each NRF resonance. At this point, determinations of peak spacing cannot be analytical, and must rely on specialized applications of the statistical methods of signal processing.
There is a related phenomenon at the level of electron orbitals. A photon, generally in a lower energy range, can be absorbed by displacing an orbital electron, and then a new photon having the same energy is emitted in a random direction when the electron drops back down. See resonance fluorescence for a discussion of the theory and x-ray fluorescence for a discussion of its many applications.
References
Fluorescence
Nuclear physics | Nuclear resonance fluorescence | [
"Physics",
"Chemistry"
] | 445 | [
"Luminescence",
"Fluorescence",
"Nuclear physics"
] |
27,969,423 | https://en.wikipedia.org/wiki/Benzylic%20activation%20in%20tricarbonyl%28arene%29chromium%20complexes | Benzylic activation in tricarbonyl(arene)chromium complexes refers to the reactions at the benzylic position of aromatic rings complexed to chromium(0). Complexation of an aromatic ring to chromium stabilizes both anions and cations at the benzylic position and provides a steric blocking element for diastereoselective functionalization of the benzylic position. A large number of stereoselective methods for benzylic and homobenzylic functionalization have been developed based on this property.
Tricarbonyl(arene)chromium complexes
Tricarbonyl(arene)chromium complexes of the type (arene)Cr(CO)3 are readily prepared by heating a solution of chromium hexacarbonyl with arenes, especially electron-rich derivatives. The chromium(0) activates the side chain of the arene, facilitating dissociation of a benzylic proton or leaving group, or nucleophilic addition to the homobenzylic position of styrenes. Further transformations of the resulting conformationally restricted benzylic anion or cation involve the approach of reagents exo to the chromium fragment. Thus, benzylic functionalization reactions of planar chiral chromium arene complexes are highly diastereoselective. Additionally, the chromium tricarbonyl fragment can be used as a blocking element in addition reactions to ortho-substituted aromatic aldehydes and alkenes. An ortho substituent is necessary in these reactions to restrict conformations available to the aldehyde or alkene. Removal of the chromium fragment to afford the metal-free functionalized aromatic compound is possible photolytically or with an oxidant.
Planar chiral chromium complexes
Enantiopure, planar chiral chromium arene complexes can be synthesized using several strategies. Diastereoselective complexation of a chiral, non-racemic arene to chromium is one such strategy. In the following example, enantioselective Corey-Itsuno reduction sets up a diastereoselective ligand substitution reaction. After complexation, the alcohol is reduced with triethylsilane.
A second strategy involves enantioselective ortho-lithiation and in situ quenching with an electrophile. Isolation of the lithium arene and subsequent treatment with TMSCl led to lower enantioselectivities.
Site-selective conjugate addition to chiral aryl hydrazone complexes can also be used for the enantioselective formation of planar chiral chromium arenes. Hydride abstraction neutralizes the addition product, and treatment with acid cleaves the hydrazone.
Benzylic functionalization reactions
ortho-Substituted aryl aldehyde complexes undergo diastereoselective nucleophilic addition with organometallic reagents and other nucleophiles. The following equation illustrates a diastereoselective Morita-Baylis-Hillman reaction.
Pinacol coupling and the corresponding diamine coupling are possible in the presence of a one-electron reducing agent such as samarium(II) iodide.
Benzylic cations of chromium arene complexes are conformationally stable, and undergo only exo attack to afford SN1 products stereospecifically, with retention of configuration. Propargyl and oxonium cations undergo retentive substitution reactions, and even β carbocations react with a significant degree of retention.
Benzylic anions of chromium arene complexes exhibit similar reactivity to cations. They are also conformationally restricted and undergo substitution reactions with retention of stereochemistry at the benzylic carbon. In the example below, complexation of the pyridine nitrogen to lithium is essential for high stereoselectivity.
Nucleophilic addition to styrenes followed by quenching with an electrophile leads to cis products with essentially complete stereoselectivity.
Diastereoselective reduction of styrenes is possible with samarium(II) iodide. A distant alkene is untouched during this reaction, which provides the reduced alkylarene product in high yield.
Related reactions
Complexation of a haloarene to chromium increases its propensity to undergo oxidative addition. Suzuki cross coupling of a planar chiral chromium haloarene complex with an aryl boronic acid is thus a viable method for the synthesis of axially chiral biaryls. In the example below, the syn isomer is formed in preference to the anti isomer; when R2 is the formyl group, the selectivity reverses.
Tetralones complexed to chromium may be deprotonated without side reactions. Alkylation of the resulting enolate proceeds with complete diastereoselectivity to afford the exo product.
References
Organic reactions | Benzylic activation in tricarbonyl(arene)chromium complexes | [
"Chemistry"
] | 1,064 | [
"Organic reactions"
] |
27,970,912 | https://en.wikipedia.org/wiki/Matching%20preclusion | In graph theory, a branch of mathematics, the matching preclusion number of a graph G (denoted mp(G)) is the minimum number of edges whose deletion results in the elimination of all perfect matchings or near-perfect matchings (matchings that cover all but one vertex in a graph with an odd number of vertices). Matching preclusion measures the robustness of a graph as a communications network topology for distributed algorithms that require each node of the distributed system to be matched with a neighboring partner node.
In many graphs, mp(G) is equal to the minimum degree of any vertex in the graph, because deleting all edges incident to a single vertex prevents that vertex from being matched. This set of edges is called a trivial matching preclusion set. A variant definition, the conditional matching preclusion number, asks for the minimum number of edges the deletion of which results in a graph that has neither a perfect or near-perfect matching nor any isolated vertices.
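A brute-force sketch of this definition (the function names and the use of networkx are illustrative; the search is exponential and only sensible for very small graphs):

```python
import itertools
import networkx as nx

def has_required_matching(G):
    """Perfect matching (even order) or near-perfect matching (odd order)."""
    n = G.number_of_nodes()
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return 2 * len(matching) >= n - (n % 2)

def matching_preclusion_number(G):
    """mp(G): fewest edge deletions leaving no perfect/near-perfect matching."""
    if not has_required_matching(G):
        return 0
    edges = list(G.edges())
    for k in range(1, len(edges) + 1):
        for removed in itertools.combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(removed)
            if not has_required_matching(H):
                return k
    return len(edges)

# For the 4-cycle C4, deleting any one edge still leaves a perfect matching,
# but deleting both edges at a single vertex (a trivial matching preclusion
# set) destroys them all, so mp(C4) = 2 = minimum degree.
print(matching_preclusion_number(nx.cycle_graph(4)))  # 2
```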
It is NP-complete to test whether the matching preclusion number of a given graph is below a given threshold.
The strong matching preclusion number (or simply, SMP number) is a generalization of the matching preclusion number; the SMP number of a graph G, smp(G) is the minimum number of vertices and/or edges whose deletion results in a graph that has neither perfect matchings nor almost-perfect matchings.
Other numbers defined in a similar way by edge deletion in an undirected graph include the edge connectivity, the minimum number of edges to delete in order to disconnect the graph, and the cyclomatic number, the minimum number of edges to delete in order to eliminate all cycles.
References
Graph invariants
Matching (graph theory) | Matching preclusion | [
"Mathematics"
] | 374 | [
"Graph theory stubs",
"Graph theory",
"Graph invariants",
"Mathematical relations",
"Matching (graph theory)"
] |
27,972,374 | https://en.wikipedia.org/wiki/Architecture%20of%20Baku | The architecture of Baku is not characterized by any particular architectural style, having accumulated its buildings over a long period of time.
Baku itself contains a wide variety of styles, ranging from Masud Ibn Davud's 12th-century Maiden Tower to the educational institutions and buildings of the Russian Imperial era.
Late Modern and Postmodern architecture began to appear in the early-2000s. With the economic development, old buildings such as Atlant House have been razed to make way for new ones. Buildings with all glass shell appear around the city, with the most prominent examples being the SOCAR Tower and Flame Towers.
Several monuments pay homage to people and events in the city. The Martyrs' Lane provides views of the surrounding area whilst commemorating the victims of Black January and Nagorno-Karabakh conflict.
Islamic architecture
As Shi'a Islam is the dominant religion of Azerbaijan, many buildings in Baku feature Islamic architecture. Religious buildings typically carry Islamic calligraphy on their columns and other parts of the structure. In December 2000, the Old City of Baku, including the Palace of the Shirvanshahs and Maiden Tower, became the first location in Azerbaijan to be classified as a World Heritage Site by UNESCO.
Islamic buildings continued to be constructed in Baku during the Imperial period. In particular, the Ajdarbey Mosque in then outskirts of the city was built in 1912–1913.
Imperial Russian and the Azerbaijan Democratic Republic era
Urban development and construction
With the boom of the oil industry in Baku came an influx of both foreign western cash and ideas. Eclectic architecture fusing not only east and west, but several western styles as well became prevalent in the architecture found in the city outside the medieval walls. Local oil industrialists had the opportunity to travel, particularly to Europe, where they came back with ideas of the European architectural styles, and had both the desire and the capital to recreate them. Two industrialized districts would be created to the east of the original medieval city and Russian garrison, the denser and older Black City and the newer, sprawling White City.
The Black City was the first example of a planned industrial district in the Russian empire; it would be separated from the original residential and commercial zones by a two-kilometer buffer zone. A dense 80 sq meter block grid would be created, designated for flexible factory-based use. Contemporaries would comment on how dirty this district was, with the black oil smoke that filled the air giving the area its name.
As Baku grew industrially, the White City would be developed to house the growing industry. It was characterized by its lack of a block structure, instead opting to have larger blocks (about 500x300 meters on average), irregular in size, which would accommodate to the shape taken by the factories rather than the other way around. The White City would grow to mainly house only select new refineries, which were cleaner than those used in the Black City, and would also be home to some of the workers housing developments created by the owners of the factories and refineries.
What had started as an oil boom in Baku soon turned into a construction boom with the quick and massive influx of capital to the city. The city's population grew rapidly, at a rate faster than contemporary New York. The foreign population started to exceed that of the local Azeris, and with it came western influence in construction. Due to the intensity and rapidness of development, the city was developed both vertically and horizontally, with most new construction boasting large foundations meant to have more levels added with the next influx of capital. Most of the construction was made using the local limestone quarried near the city, and the first few layers of development tended to be of vaulted masonry, meant to be structurally strong enough to support additional stories on top later on. It was an architecture characteristic of an oil boomtown, one that was meant to be adapted and added to with the next boom. A side product of this rapid development, however, was a lack of regulation in city planning, something complained about by contemporaries. There was a lack of proper street planning, lighting, transportation systems, and sanitary arrangements.
In a second cycle of construction, oil industrialists who had made their fortunes in the 1870s and 80s would develop the area between the medieval walled city and the Black City in the 1890s and early 1900s, creating the metropolitan Baku that would be nicknamed the "Paris of the Caspian." They would model the area after the great European cities of the time, with wide canopied boulevards, a seaside esplanade, monumental civic buildings, and all the new technologies in communication and transportation. The oil barons competed with each other to donate the most lavish and monumental civic buildings, but the initial construction was spearheaded by Haji Zeynalabdin Taghiyev (1823?-1924), one of the most philanthropic of the industrialists. The first Azerbaijani National Theater was founded in 1873, as well as another theater built in 1882. Parks and educational centers such as vocational schools were given great importance during this time, including Baku's first school for Muslim girls in 1910, designed by Josef Goslavsky, who was then the Chief Architect of Baku. Soon more of the wealthy industrialists followed and competed in a philanthropic battle of donating towards the development of the city, such as Musa Naghiyev and Shamsi Asadullaev. Many of the hallmarks of a thriving cosmopolitan city were constructed during this time. The Baku City Duma was built from 1900-1904, also designed by Goslavsky in an Italianate renaissance style on the northern edge of the medieval walled city.
Construction of buildings in Baku remained largely using the limestone available locally, with other materials easily brought down the Volga and through the city port. Unlike their European and Russian counterparts, however, they were not covered in stucco because of the local climate. Instead, the limestone was intricately carved, and thus used in creating ornamentation of the facade.
Oil baron mansions
As well as competing between each other in philanthropic purposes, the oil industrialists of the 1880s, 90s, and early 1900s would compete with each other to build the most lavish mansions in the new residential quarters they created. They imported architects as well as style preferences from their travels to Europe, and sought to emulate the grand urban palaces they saw for themselves in Baku. These mansions would become emblematic of the distinct architectural style of pre-Soviet Baku, a fusion of east and western styles in the eclectic style which was popular in the period.
It started with the importing of purely Western styles, in some cases almost exact copies created from modified plans of a European palace. Such is the former residence of Murtuza Mukhtarov, built for his wife after she admired a French Gothic palace they visited. Mukhtarov obtained the plans, hired the Polish architect I. K. Ploshko to modify them, and had the residence built in 1911-1912. After the invasion by the Red Army it was converted into a "wedding palace," a purpose it still serves today.
The Taghiyev residence (1895-1902) is another example of the western style in the architecture, designed by the Polish architect Goslavsky in the Italianate renaissance style he was known for. It is known for its heavily decorated interior, with a gilded main gallery on the second floor. It was richly decorated with a mixture of Art Nouveau ornamentation and furniture. It was converted to the National Museum of History of Azerbaijan under the Soviets, and the limestone chiseled "T" for Taghiyev is still visible in the facade after a Soviet attempt to remove it. As the mixing of western styles with eastern elements continued, architects from Germany, Russia, and Poland would design not only variations of eclectic mixes between Gothic and revival styles, but also eccentric mixes such as a three-story mansion shaped like a dragon, a house in the shape of a house of cards, and another supposedly covered in gold leaf.
Gallery
Soviet period
The USSR Council of Ministers' resolutions "On measures to further industrialization, improving quality and reducing the cost of construction" and "The removal of excess in the design and construction", issued in the mid-1950s, helped to initiate mass housing in Baku.
The architectural image of the country's capital was enriched by a number of projects interesting in conception and highly significant in urban terms, such as the building of the historical Ismailiyya Palace, which nowadays is the office of the Presidium of the National Academy of Sciences of Azerbaijan, the Lenin Palace (now the Heydar Aliyev Palace), as well as the marine and railway stations.
Post-Soviet and present day
Baku’s new business districts have today shifted toward the Baku city center, with many high-tech buildings and postmodern architecture. Aside from buildings used for business and institutions, various new residential developments are currently underway, many of which consist of high-rise buildings with glass exteriors, surrounded by American-style residential communities.
List of architects in Baku
The architects of Baku have influenced the city's architecture throughout its development during the 19th and 20th centuries.
History
The names of numerous medieval architects of Baku are depicted on their buildings. One can mention the names of Masud ibn Davud, who designed Maiden Tower, his son Abdul-Majid Masud oglu, the author of the project of Sabayil Castle and Round Castle in Mardakan, Mahmud ibn Sa'd, who built Bibi-Heybat Mosque, Nardaran Fortress and Molla Ahmad Mosque in Baku's Old City, etc.
Due to the oil boom of the 19th century, Baku developed and grew rapidly. The large-scale construction of the city was directly tied to the increase of the city's population. Eventually, this brought numerous Armenian, Azerbaijani, German (Adolf Eichler and Nicolaus von der Nonne), Polish (Józef Gosławski and Józef Płoszko) and Russian architects to the city, who ultimately influenced the city's architectural profile. Many of these architects were educated in Russia and, in particular, in St. Petersburg, Russia's capital city of the time. These included a number of high-profile designers, such as Freidun Aghalyan, Zivar bey Ahmadbeyov, Nikolai Bayev, Mammad Hasan Hajinski, and Hovhannes Katchaznouni. From 1860 to 1868, Gasim bey Hajibababeyov was considered the chief architect of Baku.
Architects during the Soviet period include Mikayil Huseynov, Sadig Dadashov, Lev Ilyin, and Lev Rudnev. Other architects who worked in Baku include Hasan Majidov, who designed the building of the Museum Center; Talaat Khanlarov, the author of the Heydar Aliyev Sports and Exhibition Complex; and Anvar Qasimzade, who designed the building of the Oil and Gas Research and Design Institute (1956) and the Ulduz metro station.
Architects
Gallery
Current developments
As a developing city largely shaped by the economic oil boom, Baku has many construction projects underway that will change the city's skyline in the near future. Among these projects are the SOCAR Tower, the Crescent Development project, Baku White City, Baku National Stadium, the Full Moon Hotel, the Baku Hilton Hotel, and the Four Seasons Hotel. Much of the new development has come at the cost of existing Soviet-era structures. The destruction of this Soviet heritage has created controversy, such as the demolition of the Soviet-era 26 Commissars Memorial in 2009 to make way for a new car park.
In 2011, the Discovery Channel's Extreme Engineering program featured these construction projects in Baku.
Skyline
See also
Architecture of Azerbaijan
References
Baku
Baku | Architecture of Baku | [
"Engineering"
] | 2,405 | [
"Architecture by city",
"Architecture"
] |
27,972,796 | https://en.wikipedia.org/wiki/Paul%20Erd%C5%91s%20Prize | The Paul Erdős Prize (formerly Mathematical Prize) is given to Hungarian mathematicians not older than 40 by the Mathematics Department of the Hungarian Academy of Sciences. It was established and originally funded by Paul Erdős.
Awardees
See also
List of mathematics awards
Sources
The list on the homepage of the Hungarian academy
Paul Erdős
Mathematics awards
Hungarian awards
Awards established in 1973 | Paul Erdős Prize | [
"Technology"
] | 73 | [
"Science and technology awards",
"Mathematics awards"
] |
30,805,871 | https://en.wikipedia.org/wiki/Iofetamine%20%28123I%29 |
Iofetamine (iodine-123, 123I) (brand names Perfusamine, SPECTamine), or N-isopropyl-(123I)-p-iodoamphetamine (IMP), is a lipid-soluble amine and radiopharmaceutical drug used in cerebral blood perfusion imaging with single-photon emission computed tomography (SPECT). Labeled with the radioactive isotope iodine-123, it is approved for use in the United States as a diagnostic aid in the localization and evaluation of non-lacunar stroke and complex partial seizures, as well as in the early diagnosis of Alzheimer's disease.
An analogue of amphetamine, iofetamine has been shown to inhibit the reuptake of serotonin and norepinephrine as well as induce the release of these neurotransmitters and of dopamine with similar potencies to other amphetamines like d-amphetamine and p-chloroamphetamine. In addition, on account of its high lipophilicity, iofetamine rapidly penetrates the blood–brain barrier. Accordingly, though not known to have been reported in the medical literature, iofetamine might have stimulant or entactogen effects. However, it might also be neurotoxic to serotonergic and dopaminergic neurons, similarly to other para-halogenated amphetamines.
See also
p-Iodoamphetamine
N-Isopropylamphetamine
References
Substituted amphetamines
Serotonin-norepinephrine-dopamine releasing agents
4-Iodophenyl compounds
Radiopharmaceuticals
Stimulants
Isopropylamino compounds | Iofetamine (123I) | [
"Chemistry"
] | 392 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
30,806,891 | https://en.wikipedia.org/wiki/Selenium%20yeast | Selenium yeast is a feed additive for livestock, used to increase the selenium content in their fodder. It is a form of selenium currently approved for human consumption in the EU and Britain. Inorganic forms of selenium are used in feeds (namely sodium selenate and sodium selenite, which appear to work in roughly the same manner). Since these products can be patented, producers can demand premium prices. It is produced by fermenting Saccharomyces cerevisiae (baker's yeast) in a selenium-rich media.
There is considerable variability in products described as Se-yeast and the selenium compounds found within. Many manufacturers and products on the market are simply mixtures of largely inorganic selenium and some yeast. Selenium is found in different forms based upon the food in which it is found. For instance, the form found in mustard and garlic is different from the form found in wheat or corn. In some products, the added selenium is structurally substituted for sulfur in the amino acid methionine, thus forming an organic chemical called selenomethionine via the same pathways and enzymes. Owing to its similarity to sulfur-containing methionine, selenomethionine is mistaken for an amino acid by the yeast anabolism and incorporated in its proteins. It has been claimed that selenomethionine makes a better source of dietary selenium in animal nutrition, since it is an organic chemical compound sometimes found in some common crops such as wheat.
Animal feed additive
Large amounts of selenium are toxic; however, it is physiologically necessary for animals in extremely small amounts. Many other uncharacterized selenium-containing organic chemicals are also produced by a method similar to that of selenomethionine; some have recently been characterized but remain relatively unknown, such as S-seleno-methyl-glutathione and glutathione-S-selenoglutathione. Due to this, the European Union has questioned the safety and potential toxicity of this food supplement for humans, and it may not be used as an additive after 2002.
G. N. Schrauzer, who has written two papers about selenomethionine, claims it should be an essential amino acid, and that the product is completely safe. The European Food Safety Authority does allow the use of selenomethionine as a feed additive for animals. Because organic forms of selenium appear to be excreted from the body more slowly than inorganic forms, products enriched with organic selenium might detrimentally bioaccumulate in the body. Because selenium-enriched foods contain much more selenium than natural foods, selenium toxicity is a potential problem, and such foods must be treated with caution. The EU allows up to 300 micrograms of selenium per day, but one long-term study of selenium supplementation showed no evidence of toxicity at a dose as high as 800 micrograms per day.
An organic selenium-containing chemical found in selenium yeast has been shown to differ in bioavailability and metabolism compared with common inorganic forms of dietary selenium. Dietary supplementation using selenium yeast is ineffective in the production of antioxidants in bovine milk compared to inorganic selenium (sodium selenate). One study examined if increased selenium in the diet of mutant mice (via a selenium yeast product) caused a higher production of selenium-containing enzymes which have an antioxidant effect. The effect was modest.
Selenium supplementation in yeast form has been shown to increase pig selenium-containing antioxidant enzymes, broiler growth and meat quality, the shelf life of turkey and rooster semen, and possibly cattle fertility.
Selenium supplementation in animal feeds may be profitable for agribusinesses. It may be possible to market selenium-fortified foods to consumers as functional foods, such as selenium-enriched eggs, meat, or milk.
Sel-Plex®
A patented cultivar of yeast (Saccharomyces cerevisiae 'CNCM I-3060') marketed as Sel-Plex® has been approved for use in animal fodder:
U.S. Food and Drug Administration approval for use as a supplement to feed for chickens, turkeys, swine, goats, sheep, horses, dogs, bison, and beef and dairy cows.
Organic Materials Review Institute approval for use as a feed supplement for all animal species.
As of 2006, the European Food Safety Authority's Scientific Panel on Additives and Products or Substances used in Animal Feed allows the use of Sel-Plex® in animal fodder for poultry, swine, and bovines, as the selenium is not significantly bio-accumulated by the human consumer. Only a small amount should be used when blending animal feeds; 10x the authorized maximum selenium intake causes a drop in production. Appropriate measures to minimize inhalation exposure to the product should be taken.
Analytical chemistry
Total selenium in selenium yeast can be reliably determined using open acid digestion to extract selenium from the yeast matrix followed by flame atomic absorption spectrometry. Determination of the selenium species selenomethionine can be achieved via proteolytic digestion of selenium yeast followed by high-performance liquid chromatography with inductively coupled plasma mass spectrometry.
See also
Nutritional muscular dystrophy
References
Selenium
Biology and pharmacology of chemical elements
Yeasts
Food additives
Organoselenium compounds | Selenium yeast | [
"Chemistry",
"Biology"
] | 1,189 | [
"Pharmacology",
"Fungi",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Yeasts",
"Biochemistry"
] |
30,810,530 | https://en.wikipedia.org/wiki/Robert%20Baker%20%28scientist%29 | Robert Baker, FREng, FIMMM (1938–2004) was a British metallurgist and steelmaker.
Baker was born in Handsworth, Sheffield, England, attended Woodhouse Grammar School and graduated with an honours degree in metallurgy from the University of Sheffield in 1960. He stayed on to conduct research on the use of a stabilised zirconia solid electrolyte for the measurement of oxygen activity in molten steel, for which he was awarded a PhD in 1964.
Baker worked for British Steel Corporation for many years, and was appointed Director of Research and Development in 1986 following the retirement of Dr KJ Irvine. Together with colleagues at the company, he was granted patents relating to steel production.
Baker was elected a Fellow of the Royal Academy of Engineering and a Fellow of the Institute of Materials, Minerals and Mining (IMMM). The IMMM also awarded him the Sir Robert Hadfield medal with his colleague GD Spenceley in 1987, and the Bessemer Gold Medal in 1998 for services to the industry.
References
English metallurgists
1938 births
2004 deaths
People from Handsworth, South Yorkshire
Alumni of the University of Sheffield
Steelmaking
Engineers from Yorkshire
Bessemer Gold Medal
Fellows of the Institute of Materials, Minerals and Mining | Robert Baker (scientist) | [
"Chemistry"
] | 254 | [
"Metallurgical processes",
"Bessemer Gold Medal",
"Steelmaking",
"Chemical engineering awards"
] |
30,810,934 | https://en.wikipedia.org/wiki/Wave%20method | In fluid dynamics, the wave method (WM), or wave characteristic method (WCM), is a model describing unsteady flow of fluids in conduits (pipes).
Details of model
The wave method is based on the physically accurate concept that transient pipe flow occurs as a result of pressure waves generated and propagated from a disturbance in the pipe system (valve closure, pump trip, etc.) This method was developed and first described by Don J. Wood in 1966. A pressure wave, which represents a rapid pressure and associated flow change, travels at sonic velocity for the liquid pipe medium, and the wave is partially transmitted and reflected at all discontinuities in the pipe system (pipe junctions, pumps, open or closed ends, surge tanks, etc.) A pressure wave can also be modified by pipe wall resistance. This description is one that closely represents the actual mechanism of transient pipe flow.
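As a rough illustration of the transmission-and-reflection idea, the following minimal sketch uses the standard water-hammer junction relations (characteristic impedance B = a/(gA) for each pipe); it is not drawn from the cited references, and the function and variable names are hypothetical.

def junction_response(dH, impedances, arriving=0):
    # Head change transmitted into the pipes at a junction, and the wave
    # reflected back up the arriving pipe, for a head wave of size dH
    # arriving through pipe `arriving`.  Each entry of `impedances` is the
    # characteristic impedance B = a / (g * A) of one connected pipe.
    admittance_sum = sum(1.0 / B for B in impedances)
    transmitted = 2.0 * dH * (1.0 / impedances[arriving]) / admittance_sum
    reflected = transmitted - dH
    return transmitted, reflected

# Limiting cases reproduce the textbook behavior:
print(junction_response(10.0, [120.0]))                 # dead end: (20.0, 10.0), the wave doubles
print(junction_response(10.0, [120.0, 120.0]))          # identical pipe: (10.0, 0.0), no reflection
print(junction_response(10.0, [120.0, 120.0, 120.0]))   # branching: (~6.67, ~-3.33), the wave attenuates

In a full wave-method solver, tracking how such transmitted and reflected waves arrive at each node (with additional modification for pipe-wall friction) is what allows computations to be confined to the nodes.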
Advantages
The WM has the very significant advantage that computations need be made only at nodes in the piping system. Other techniques such as the method of characteristics (MOC) require calculations at equally spaced interior points in a pipeline. This requirement can easily increase the number of calculations by a factor of 10 or more. However, virtually identical solutions are obtained by the WM and the MOC.
See also
EPANET
References
External links
Journal of Applied Fluid Transients
Innovyze Surge Products
Fluid dynamics
Hydraulic engineering | Wave method | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 291 | [
"Hydrology",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
21,707,790 | https://en.wikipedia.org/wiki/Robovirus | A robovirus is a zoonotic virus that is transmitted by a rodent vector (i.e., rodent borne).
Roboviruses mainly belong to the virus families Arenaviridae and Hantaviridae. Like arbovirus (arthropod borne) and tibovirus (tick borne) the name refers to its method of transmission, known as its vector. This is distinguished from a clade, which groups around a common ancestor. Some scientists now refer to arbovirus and robovirus together with the term ArboRobo-virus.
Methods of transmission
Rodent-borne disease can be transmitted through different forms of contact such as rodent bites, scratches, urine, and saliva. Potential sites of contact with rodents include habitats such as barns, outbuildings, sheds, and dense urban areas. Rodent-borne diseases can reach humans through direct handling and contact, or indirectly when the pathogen is passed from rodents to ticks, mites, or fleas (arthropod-borne transmission).
Viral diseases transmitted by rodents
One example of a robovirus is hantavirus, which causes hantavirus pulmonary syndrome. Humans can contract hantavirus pulmonary syndrome through direct contact with rodent droppings, saliva, or urine infected with strains of the virus. These materials can also become airborne and transmit the virus when inhaled.
Lassa virus from the Arenaviridae family causes Lassa hemorrhagic fever and is also a robovirus, transmitted by the rodent Mastomys natalensis. The multimammate rat is able to excrete the virus in its urine and droppings. These rats are often found in the savannas and forests of Africa. When these rats scavenge and enter households, this provides an outlet for direct contact transmission with humans. It has also been found that airborne transmission can occur during cleaning activities such as sweeping. In some areas of Africa, the Mastomys rodent is caught and used as a source of food. This process can also lead to transmission and infection.
Viral diseases indirectly transmitted by rats
Colorado tick fever virus causes high fevers, chills, headache, fatigue and sometimes vomiting, skin rash, and abdominal pain. The virus is transmitted by the Rocky Mountain wood tick (Dermacentor andersoni). It is an arbovirus, but rodents serve as the reservoir. The tick is carried by five species of rodents: the least chipmunk (Eutamias minimus), Richardson's ground squirrel (Urocitellus richardsonii), deer mice (Peromyscus maniculatus), the golden-mantled ground squirrel (Callospermophilus lateralis), and the Uinta chipmunk (Neotamias umbrinus). The infected tick will be carried by its rodent host and infect another host (animal or human) as it feeds.
Factors affecting roboviruses
Rodent populations are affected by a number of diverse factors, including climatic conditions. Warmer winters and increased rainfall make it more likely for rodent populations to survive, thereby increasing the number of rodent reservoirs for disease. Increased rainfall accompanied by flooding can also increase human-to-rodent contact. Global climate change will affect the distribution and prevalence of roboviruses. Inadequate hygiene and sanitation, as seen in some European countries, also contribute to increased rodent populations and higher risks of rodent-borne disease transmission.
References
Viruses
Rodent-carried diseases | Robovirus | [
"Biology"
] | 710 | [
"Viruses",
"Tree of life (biology)",
"Microorganisms"
] |
21,708,896 | https://en.wikipedia.org/wiki/Still%20engine | The Still engine was a piston engine that simultaneously used both steam power from an external boiler, and internal combustion from gasoline or diesel, in the same unit. The waste heat from the cylinder and internal combustion exhaust was directed to the steam boiler, resulting in claimed fuel savings of up to 10%.
History
The inventor, William Joseph Still, patented his device in 1917 and on 26 May 1919 in London he and his collaborator Captain Francis Acland (1857–1943, a consulting engineer formerly of the Royal Artillery) announced it at a meeting, chaired by steam turbine inventor Charles Algernon Parsons, at the Royal Society of Arts. Acland described a continuous process by which a double-acting cylinder is powered on one side by internal combustion and on the other by steam from a boiler heated principally by the waste heat from the water jacket and exhaust gases. He explained how the reserve of energy represented by the steam pressure in the boiler provided for any occasional overload which would defeat a standard internal combustion engine of the same power. Independent heating of the boiler was occasionally used, to provide extra power for exceptional conditions, and in the first stage of operation to allow the engine to start itself from steam power alone, even against a load.
Still was not the first in this field; a similar system, whereby compressed air (instead of gearing) was to transfer the power from an internal combustion engine and steam recovered from its cooling system was to augment the compressed air, had been patented in 1903 by Captain Paul Lucas-Girardville (a French military aviator) and Louis Mékarski.
Development
Marine
In 1924 Scotts Shipbuilding and Engineering Company of Greenock, Scotland, put a diesel-fuelled marine version, the Scott-Still regenerative engine, into production, with the first pair of engines installed in the twin-screw M. V. Dolius, of the Blue Funnel Line. The trial was successful and in 1928 Blue Funnel commissioned a larger and faster ship, the Eurybates, with this propulsion system. However the requirement to carry marine engineering officers certified with both steam and motor qualifications, meaning extra crew members and wages, and the extra complexity with consequent higher maintenance costs, offset the fuel savings and conventional diesel engines were later installed in their place.
Railway
In 1926 Kitson and Company, locomotive builders of Leeds, England, produced a steam–diesel hybrid locomotive, the Kitson Still locomotive. This was loaned for trials to the London and North Eastern Railway and used successfully to haul heavy coal trains, but the difference in the cost of coal used by a conventional locomotive, against the fuel oil used by the hybrid, was not great. When Kitson's failed in 1934, a failure to which the development costs of the hybrid locomotive had contributed, the receivers sold the machine for scrap.
Decline
Developments of larger diesel engines in the 1930s, with improved methods of power transmission, meant that the principal advantages of the Still engine – the ability to provide for direct-drive starts from rest and additional power at times of temporary high load – were lost, and further development ended.
References
Steam engines
Diesel engines
Marine propulsion | Still engine | [
"Engineering"
] | 628 | [
"Marine propulsion",
"Marine engineering"
] |
21,710,054 | https://en.wikipedia.org/wiki/Polychloro%20phenoxy%20phenol | Polychloro phenoxy phenols (polychlorinated phenoxy phenols, PCPPs) are a group of organic polyhalogenated compounds. Among them include triclosan and predioxin which can degrade to produce certains types of dioxins and furans. Notably, however, the particular dioxin formed by degradation of triclosan, 2,8-DCDD, was found to be non-toxic in fish embryos.
References
Chloroarenes
Incineration
Phenols
Ethers | Polychloro phenoxy phenol | [
"Chemistry",
"Engineering"
] | 117 | [
"Combustion engineering",
"Incineration",
"Functional groups",
"Organic compounds",
"Ethers"
] |
21,715,204 | https://en.wikipedia.org/wiki/High%20water%20mark | A high water mark is a point that represents the maximum rise of a body of water over land. Such a mark is often the result of a flood, but high water marks may reflect an all-time high, an annual high (highest level to which water rose that year) or the high point for some other division of time. Knowledge of the high water mark for an area is useful in managing the development of that area, particularly in making preparations for flood surges. High water marks from floods have been measured for planning purposes since at least as far back as the civilizations of ancient Egypt. It is a common practice to create a physical marker indicating one or more of the highest water marks for an area, usually with a line at the level to which the water rose, and a notation of the date on which this high water mark was set. This may be a free-standing flood level sign or other marker, or it may be affixed to a building or other structure that was standing at the time of the flood that set the mark.
A high water mark is not necessarily an actual physical mark, but it is possible for water rising to a high point to leave a lasting physical impression such as floodwater staining. A landscape marking left by the high water mark of ordinary tidal action may be called a strandline and is typically composed of debris left by high tide. The area at the top of a beach where debris is deposited is an example of this phenomenon. Where there are tides, this line is formed by the highest position of the tide, and moves up and down the beach on a fortnightly cycle. The debris is chiefly composed of rotting seaweed, but can also include a large amount of litter, either from ships at sea or from sewage outflows.
Ecological significance
The strandline is an important habitat for a variety of animals. In parts of the United Kingdom, sandhoppers such as Talitrus saltator and the seaweed fly Coelopa frigida are abundant in the rotting seaweed, and these invertebrates provide food for shore birds such as the rock pipit, turnstone and pied wagtail, and mammals such as brown hares, foxes, voles and mice.
Legal significance
One kind of high water mark is the ordinary high water mark or average high water mark, the high water mark that can be expected to be produced by a body of water in non-flood conditions. The ordinary high water mark may have legal significance and is often being used to demarcate property boundaries. The ordinary high water mark has also been used for other legal demarcations. For example, a 1651 analysis of laws passed by the English Parliament notes that for persons granted the title Admiral of the English Seas, "the Admirals power extended even to the high water mark, and into the main streams".
In the United States, the high water mark is also significant because the United States Constitution gives Congress the authority to legislate for waterways, and the high water mark is used to determine the geographic extent of that authority. Federal regulations (33 CFR 328.3(e)) define the "ordinary high water mark" (OHWM) as "that line on the shore established by the fluctuations of water and indicated by physical characteristics such as a clear, natural line impressed on the bank, shelving, changes in the character of soil, destruction of terrestrial vegetation, the presence of litter and debris, or other appropriate means that consider the characteristics of the surrounding areas. For the purposes of Section 404 of the Clean Water Act, the OHWM defines the lateral limits of federal jurisdiction over non-tidal water bodies in the absence of adjacent wetlands. For the purposes of Sections 9 and 10 of the Rivers and Harbors Act of 1899, the OHWM defines the lateral limits of federal jurisdiction over traditional navigable waters of the US. The OHWM is used by the United States Army Corps of Engineers, the United States Environmental Protection Agency, and other federal agencies to determine the geographical extent of their regulatory programs. Likewise, many states use similar definitions of the OHWM for the purposes of their own regulatory programs.
In 2016, the Court of Appeals of Indiana ruled that land below the OHWM (as defined by common law) along Lake Michigan is held by the state in trust for public use.
See also
Chart datum
Mean high water
Measuring storm surge
Terrace (geology), benches left by lakes
Wash margin
References
External links
Surveying
Beaches
Hydrology
Lakes
Riparian zone
Rivers
Wetlands
Vertical position | High water mark | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 916 | [
"Vertical position",
"Hydrology",
"Physical quantities",
"Distance",
"Wetlands",
"Surveying",
"Civil engineering",
"Environmental engineering",
"Lakes",
"Riparian zone"
] |
43,069,026 | https://en.wikipedia.org/wiki/Particle%20Physics%20Project%20Prioritization%20Panel | The Particle Physics Project Prioritization Panel (P5) is a scientific advisory panel tasked with recommending plans for U.S. investment in particle physics research over the next ten years, on the basis of various funding scenarios. The P5 is a temporary subcommittee of the High Energy Physics Advisory Panel (HEPAP), which serves the Department of Energy's Office of Science and the National Science Foundation. In 2014, the panel was chaired by Steven Ritz of the University of California, Santa Cruz. In 2023, the panel was chaired by Hitoshi Murayama of the University of California, Berkeley.
2014 report
In 2013, HEPAP was asked to convene a panel (the P5) to evaluate research priorities in the context of anticipated developments in the field globally in the next 20 years. Recommendations were to be made on the basis of three funding scenarios for high-energy physics:
A constant funding level for the next three years followed by an annual 2% increase, relative to the FY2013 budget
A constant funding level for the next three years followed by an annual 3% increase, relative to the proposed FY2014 budget
An unconstrained budget
Science drivers
In May 2014, the first P5 report since 2008 was released. The 2014 report identified five "science drivers"—goals intended to inform funding priorities—drawn from a year-long discussion within the particle physics community. These science drivers are:
Use of the Higgs boson as a tool for further inquiry
Investigation of the physics of neutrino mass
Investigation of the physics of dark matter
Investigation of the physics of dark energy and cosmic inflation
Exploration of new particles, interactions, and physics principles
Recommendations
In pursuit of the five science drivers, the 2014 report identified three "high priority large category" projects meriting significant investment in the FY2014–2023 period, regardless of the broader funding situation: the High Luminosity Large Hadron Collider (a proposed upgrade to the Large Hadron Collider located at CERN in Europe); the International Linear Collider (a proposed electron-positron collider, likely hosted in Japan); and the Long Baseline Neutrino Facility (an expansion of the proposed Long Baseline Neutrino Experiment (that was renamed the Deep Underground Neutrino Experiment), to be constructed at Fermilab in Illinois and at the Homestake Mine in South Dakota).
In addition to these large projects, the report identified numerous smaller projects with potential for near-term return on investment, including the Mu2e experiment, second- and third-generation dark matter experiments, particle-physics components of the Large Synoptic Survey Telescope (LSST), cosmic microwave background experiments, and a number of small neutrino experiments.
The report made several recommendations for significant shifts in priority, namely:
An increase in the proportion of the high-energy physics budget devoted to construction of new facilities, from 15% to 20%-25%
An expansion in scope of the Long Baseline Neutrino Experiment to a major international collaboration, with redirection of resources from other R&D projects to the development of higher powered proton beams for the neutrino facility
Increased funding for second-generation dark matter detection experiments
Increased funding of cosmic microwave background (CMB) research
The panel stressed that the most conservative of the funding scenarios considered would endanger the ability of the U.S. to host a major particle physics project while maintaining the necessary supporting elements.
Impact and outcomes since 2014
A goal of the 2014 P5 exercise was to provide Congress with a science-justified roadmap for project funding. Five years later, in 2019, the Department of Energy Office of Science declared: "Congressional appropriations reflect strong support for P5. Language in appropriations reports have consistently recognized community’s efforts in creating and executing the P5 report strategy" and "P5 was wildly successful." From 2016 to 2020, the High Energy Physics (HEP) budget grew from less than $800 million to more than $1 billion.
However, members of the HEP community were concerned because the increased funding went primarily toward projects, while funding for core research and technology programs, which was also supported by P5, declined from $361 million to $316 million. In 2020, an assessment of progress of the P5-defined program produced by the High Energy Physics Advisory Panel (HEPAP) concluded: "While investments over the past 5 years have focused on project construction, it will be fundamentally important to balance the components of the HEP budget to continue successful execution of the P5 plan. Operations of the newly constructed experiments require full support to reap their scientific goals. The HEP research program also needs strong support to fully execute the plan, throughout the construction, operations, and data analysis phases of the experiments, and to lay a foundation for the future."
As of 2022, several of the "Large Projects" identified as priorities by the 2014 P5 had fallen considerably behind schedule or been affected by cost gaps, including:
The Deep Underground Neutrino Experiment has been descoped, with start-up delayed from 2027 to 2032.
The mu2e experiment was delayed from 2020 to 2026.
The PIP-II project start-up was delayed from 2020 to 2028.
The High Luminosity LHC contributions from Fermilab faced a $90M cost gap in 2021.
The International Linear Collider (ILC), proposed for construction in Japan, was "shelved".
Prelude to the 2023 report
Issues
The P5 process occurred in spring 2023 and was informed by the outcomes of the 2021 Snowmass Process finalized in summer 2022. The Snowmass 2021 study identified two existential threats to the field that P5 must address:
That the field has entered a "nightmare scenario" because no unexpected physics signatures have been observed by experiments at the highest energy accelerator, the Large Hadron Collider. As pointed out by many at the final Snowmass meeting, this gives little basis for the 2023 P5 to recommend new large projects.
That LBNF/DUNE (also called the Deep Underground Neutrino Experiment), the flagship project that came out of the 2014 P5, will be reevaluated due to spiraling costs and extended delays. The escalation has led to comparisons to the Superconducting Super Collider (SSC), a particle physics Megaproject that was cancelled mid-way through construction in 1993 due to cost over-run---a debacle with enormous personal and scientific costs to the particle physicists involved.
Along with these major issues, P5 also faces a field that is less unified than in 2014, as was emphasized by the title of the Scientific American report on Snowmass 2021 outcomes: "Physicists Struggle to Unite around Future Plans."
Some members of the field have expressed that the pressure to project a unified opinion is stifling debate, with one physicist telling a reporter from Physics Today: "There are big issues people didn’t discuss." Panel chair Hitoshi Murayama has expressed awareness of this problem, saying that "community buy-in is key" for the success of the P5 report.
Panel
The membership of the 2023 P5 was announced in December 2022, with Hitoshi Murayama of the University of California, Berkeley as head.
Similar to 2014, the 2023 P5 members are all particle and accelerator physicists; no members specialize in project management. This places the committee in a good position to evaluate responses to the "nightmare scenario." However, this makes it difficult for the members to assess whether the information on cost and schedule provided to the committee has a sound basis. That lack of expertise may explain how the 2014 P5 failed to foresee the LBNF/DUNE cost-and-schedule crisis, and will make it difficult for the 2023 P5 to head off an "SSC scenario."
Tasking
Regina Rameika from the Department of Energy Office of Science summarized the P5 charge in a presentation to the High Energy Physics Advisory Panel on Dec. 8, 2022. The charge asked P5 to:
Update the 2014 P5 strategic plan, making recommendations for actions within a ten-year time-frame while considering a twenty-year context.
Re-evaluate the 2014 "science drivers" and recommended scientific projects, as well as make the scientific case for new initiatives.
Maintain a balance between large projects and small experiments. P5 does not recommend specific small experiments but was asked to comment on the scientific focus for that portfolio. The emphasis on P5 direction to small experiments was new compared to the 2014 P5 charge.
Address synergies within US programs and with the worldwide program.
The priority of projects is being considered within two funding scenarios from the Department of Energy (DOE) and the National Science Foundation (NSF). The first, which was described by physicists as "grim", envisions a 2% increase per year of the high energy physics budgets for DOE and NSF. The second assumes full funding from the 2022 CHIPS and Science Act and a 3% increase per year to DOE and NSF HEP. P5 is asked to consider operating costs, including the rising cost of energy to run accelerators.
Input from community meetings and town halls
Throughout 2023, P5 received input from the community through meetings that included invited talks and requested talks in a "town hall" format. Four meetings were held at national laboratories. Two virtual town halls were also held. The topics of the meetings covered physics goals across the range of topics defined by the Snowmass Study, as well as the balance of university- and laboratory-based research, opportunities for early career scientists, and the need for public outreach.
Input from the International Benchmarking Report
In Autumn 2023, the P5 Panel received input from the HEPAP International Benchmarking Subpanel, headed by Fermilab scientists. This report is one in a series of evaluations of DOE-supported science in an international context. Differences between high energy physics and the rest of the physics community are apparent in the report. For example, the report notes that citations are a poor metric for measuring scientific impact. Two points made in the report are especially relevant to P5 considerations: 1) The US should prioritize being a "partner of choice" and 2) The US requires a range of project sizes and goals to maintain a healthy "scientific ecosystem".
The primary outcome of the benchmarking report was that "the U.S. is not always viewed as a reliable partner, largely due to unpredictable budgets and inadequate communication, and that shortcomings in domestic HEP programs are jeopardizing U.S. leadership." The report highlighted that the 1993 cancellation of the Superconducting Super Collider, the sudden 2008 termination of the B physics program at the Stanford Linear Accelerator Center, and the abrupt end of the Tevatron program at Fermilab, followed by the immediate dismantling of the accelerator, have caused the international community to lose confidence that the US will complete projects. Without addressing the DUNE project directly, this recommendation pointed to the potential negative impact on international cooperation if DUNE were abruptly curtailed by P5.
A second major recommendation of the benchmarking report focused on the need to maintain a program of projects at all scales, from small to large, chosen specifically to enhance areas in which US technology is lagging, such as accelerator physics. This echoed calls from the community expressed in the P5 Town Halls.
The 2023 report
In December 2023, the 2023 P5 report was released. The proposals contained therein were intended to help better understand some of the current concerns of particle physics, including challenges to the Standard Model, and involve studies primarily dealing with gravity, black holes, dark matter, dark energy, Higgs boson, muons, neutrinos, and more.
Science Drivers and Related Experimental Approaches
The 2023 P5 report identified three science drivers, each with two experimental approaches:
“Decipher the Quantum Realm” through “Elucidat[ing] the Mysteries of Neutrinos” and “Reveal[ing] the Secrets of the Higgs Boson.”
“Explore New Paradigms in Physics” through “Search[ing] for Direct Evidence of New Particles and Pursu[ing] Quantum Imprints of New Phenomena.”
“Illuminate the Hidden Universe” through “Determin[ing] the Nature of Dark Matter” and “Understand[ing] What Drives Cosmic Evolution.”
Specific Recommendations
The recommendations that followed the statement of goals reflected the recommendations heard during the Snowmass process and those of the International Benchmarking Panel, discussed above.
In particular, Recommendation 1 stated “As the highest priority independent of the budget scenarios, [funding agencies must] complete construction projects and support operations of ongoing experiments and research to enable maximum science.” This reflects concerns throughout the community of potential abrupt cancellations of ongoing particle physics projects, as flagged by the Benchmarking Panel.
The P5 report sought to control the narrative of the DUNE project, which has seen an explosion in cost between the 2014 and 2023 P5 reports and is now lagging behind the competing HyperKamiokande project that will turn on in 2027. P5 offered compromises on beam power for DUNE Phase I and reductions of the DUNE Phase II upgrades to keep the project funding on track to begin data-taking in 2031.
Despite the issues with DUNE, P5 recommended initiating work on a new megaproject called a muon collider. Accelerating and colliding muons for particle physics studies offers theoretical advantages over an electron-positron collider, but represents an untested and challenging new direction from a practical standpoint. The report states: “Although we do not know if a muon collider is ultimately feasible, the road toward it leads from current Fermilab strengths and capabilities to a series of proton beam improvements and neutrino beam facilities, each producing world-class science while performing critical R&D towards a muon collider. At the end of the path is an unparalleled global facility on US soil. This is our Muon Shot.” The cost of a 10 TeV muon collider was not estimated in the report.
The report offered a new emphasis on cosmology and astrophysics as a branch of particle physics. P5 placed the $800M CMB-S4 experiment at the top of the list of new projects. The report also emphasized the importance of the planned expansion of the IceCube neutrino detector in Antarctica, recommending funding for this new project in any budget scenario.
In a recommendation with an unusual level of specifics regarding its implementation, P5 introduced a new program entitled “Advancing Science and Technology through Agile Experiments" (ASTAE). This responds to calls by the community to support “small” experiments, which particle physics defines as costing less than $50M in total. Unlike other programs, this recommendation called for $35M/year to be invested in ASTAE. This recommendation again reflected the concerns identified by the International Benchmarking Panel.
Initial Community Support for the P5 Report
The American Physical Society, Fermi National Accelerator Laboratory and SLAC Laboratory organized endorsements by the community of the P5 report.
As of January 15, the number of endorsers was 2602 US scientists.
Among the endorsers, 37% were tenured faculty level or laboratory scientists, 9% were at the untenured faculty or laboratory scientist level, 16% were postdoctoral fellows, 20% were graduate students, and the remainder were other categories.
The geographic distribution of the endorsements heavily favored Illinois, home of Fermilab, and California, home of SLAC.
Outcomes
Only six months after the release of the 2023 P5 report, the first and sixth priority new projects, CMB-S4 and IceCube-Gen2, faced major setbacks from a call by NSF to immediately address the urgent need to update the South Pole Station infrastructure. In response, NSF halted the installation of new projects until the end of the 2020s. Lack of near-term access to infrastructure at the pole led NSF and DOE to cancel the joint-agency CMB-S4 project, despite strong protest from the P5 leadership and appeals from the 500-person, international team. The IceCube-Gen2 project, planned to begin installation in the late 2020s, may suffer delays due to the infrastructure renovations.
References
External links
Building for Discovery: Strategic Plan for U.S. Particle Physics in the Global Context: Report of the Particle Physics Project Prioritization Panel (P5)
Scientific funding advisory bodies
Physics organizations
Experimental particle physics
United States Department of Energy
National Science Foundation | Particle Physics Project Prioritization Panel | [
"Physics"
] | 3,465 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
31,773,395 | https://en.wikipedia.org/wiki/Phosphorus%20pentaiodide | Phosphorus pentaiodide is a hypothetical inorganic compound with formula . The existence of this compound has been claimed intermittently since the early 1900s. The claim is disputed: "The pentaiodide does not exist (except perhaps as , but certainly not as ...)".
Claims
Phosphorus pentaiodide was reported to be a brown-black crystalline solid melting at 41 °C, produced by the reaction of lithium iodide and phosphorus pentachloride in methyl iodide; however, this claim is disputed, and the reaction probably generated a mixture of phosphorus triiodide and iodine.
Although phosphorus pentaiodide has been claimed to exist in the form of [PI4]+I− (tetraiodophosphonium iodide), experimental and theoretical data refute this claim.
Derivatives
Unlike the elusive PI5, the [PI4]+ cation (tetraiodophosphonium cation) is widely known. This cation is known with the anions tetraiodoaluminate ([AlI4]−), hexafluoroarsenate ([AsF6]−), hexafluoroantimonate ([SbF6]−) and tetraiodogallate ([GaI4]−).
References
Hypothetical chemical compounds
Phosphorus(V) compounds
Phosphorus iodides | Phosphorus pentaiodide | [
"Chemistry"
] | 240 | [
"Inorganic compounds",
"Theoretical chemistry stubs",
"Hypotheses in chemistry",
"Inorganic compound stubs",
"Theoretical chemistry",
"Hypothetical chemical compounds"
] |
31,774,509 | https://en.wikipedia.org/wiki/Thiocarboxylic%20acid | In organic chemistry, thiocarboxylic acids or carbothioic acids are organosulfur compounds related to carboxylic acids by replacement of one of the oxygen atoms with a sulfur atom. Two tautomers are possible: a thione form () and a thiol form (). These are sometimes also referred to as "carbothioic O-acid" and "carbothioic S-acid" respectively. Of these the thiol form is most common (e.g. thioacetic acid).
Thiocarboxylic acids are rare in nature; however, the biosynthetic components for producing them appear to be widespread in bacteria. Examples include pyridine-2,6-dicarbothioic acid and thioquinolobactin.
Synthesis
Thiocarboxylic acids are typically prepared by salt metathesis from the acid chloride, as in the conversion of benzoyl chloride to thiobenzoic acid using potassium hydrosulfide, according to the idealized equation C6H5C(O)Cl + KSH → C6H5C(O)SH + KCl.
Covalent sulfides, such as P2S5, generally give poor yields unless catalyzed with triphenylstibine oxide.
2,6-Pyridinedicarbothioic acid is synthesized by treating the diacid dichloride with a solution of H2S in pyridine.
This reaction produces the orange pyridinium salt of pyridine-2,6-dicarbothioate. Treatment of this salt with sulfuric acid gives the colorless bis(thiocarboxylic acid), which can then be extracted with dichloromethane.
Reactions
At neutral pH, thiocarboxylic acids are fully ionized. Thiocarboxylic acids are about 100 times more acidic than the analogous carboxylic acids. Thiobenzoic acid has a pKa of 2.48 compared with 4.20 for benzoic acid, and thioacetic acid has a pKa near 3.4 compared with 4.72 for acetic acid. Alkylation of the corresponding thioate ion gives a thioester.
The conjugate base of thioacetic acid, thioacetate, is a reagent used for installing thiol groups via the displacement of alkyl halides by a two-step process. The halide is displaced to give a thioester intermediate, which is then hydrolyzed, as summarized by the idealized equations CH3C(O)S− + RX → CH3C(O)SR + X− and CH3C(O)SR + H2O → CH3CO2H + RSH.
Thiocarboxylic acids react with various nitrogen functional groups, such as organic azide, nitro, and isocyanate compounds, to give amides under mild conditions. This method avoids needing the amine to initiate an amide-forming acyl substitution but does require synthesis and handling of the unstable thiocarboxylic acid. Unlike the Schmidt reaction or other nucleophilic-attack pathways, reaction with an aryl or alkyl azide begins with a [3+2] cycloaddition. The resulting heterocycle expels N2 and the sulfur atom to give the monosubstituted amide.
Halogens or their equivalents (e.g. sulfuryl chloride) oxidize thiocarboxylic acids to acylsulfenyl halides. The latter are unstable, and decay over the course of several hours to the free halogen and the diacyl disulfide.
See also
Dithiocarboxylic acid
Thiocarbamate
Thiocarbonate
Thiocarbonic acid
Thioformic acid, the simplest thiocarboxylic acid
References
Functional groups | Thiocarboxylic acid | [
"Chemistry"
] | 757 | [
"Functional groups"
] |
5,348,452 | https://en.wikipedia.org/wiki/Quasi-perfect%20equilibrium | Quasi-perfect equilibrium is a refinement of Nash Equilibrium for extensive form games due to Eric van Damme.
Informally, a player playing by a strategy from a quasi-perfect equilibrium takes observed as well as potential future mistakes of his opponents into account but assumes that he himself will not make a mistake in the future, even if he observes that he has done so in the past.
Quasi-perfect equilibrium is a further refinement of sequential equilibrium. It is itself refined by normal form proper equilibrium.
Mertens' voting game
It has been argued by Jean-François Mertens that quasi-perfect equilibrium is superior to Reinhard Selten's notion of extensive-form trembling hand perfect equilibrium as a quasi-perfect equilibrium is guaranteed to describe admissible behavior. In contrast, for a certain two-player voting game no extensive-form trembling hand perfect equilibrium describes admissible behavior for both players.
The voting game suggested by Mertens may be described as follows:
Two players must elect one of them to perform an effortless task. The task may be performed either correctly or incorrectly. If it is performed correctly, both players receive a payoff of 1, otherwise both players receive a payoff of 0. The election is by a secret vote. If both players vote for the same player, that player gets to perform the task. If each player votes for himself, the player to perform the task is chosen at random but is not told that he was elected this way. Finally, if each player votes for the other, the task is performed by somebody else, with no possibility of it being performed incorrectly.
In the unique quasi-perfect equilibrium for the game, each player votes for himself and, if elected, performs the task correctly. This is also the unique admissible behavior. But in any extensive-form trembling hand perfect equilibrium, at least one of the players believes that he is at least as likely as the other player to tremble and perform the task incorrectly and hence votes for the other player.
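The trade-off can be made explicit with a small expected-payoff calculation. The Python sketch below is illustrative only and is not part of Mertens' argument; it assumes the opponent votes for himself and that a player elected to perform the task fails with some believed probability.

def expected_payoffs(p_self_err, p_opp_err):
    """Expected payoff of (voting for myself, voting for the opponent),
    assuming the opponent votes for himself and that whoever is elected
    performs the task incorrectly with the given believed probability."""
    vote_self = 0.5 * (1 - p_self_err) + 0.5 * (1 - p_opp_err)  # both vote for themselves: a coin flip decides the performer
    vote_opp = 1 - p_opp_err                                     # I vote for the opponent: he performs the task
    return vote_self, vote_opp

for mine, theirs in [(0.01, 0.02), (0.02, 0.02), (0.03, 0.02)]:
    s, o = expected_payoffs(mine, theirs)
    print(f"own tremble {mine}, opponent tremble {theirs}: vote self = {s:.3f}, vote opponent = {o:.3f}")

The difference between the two options is ½(p_opp_err − p_self_err), so a player who believes he is at least as likely to tremble as his opponent weakly prefers to vote for the other player, which is exactly the behaviour the example is designed to expose.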
The example illustrates that being a limit of equilibria of perturbed games, an extensive-form trembling hand perfect equilibrium implicitly assumes an agreement between the players about the relative magnitudes of future trembles. It also illustrates that such an assumption may be unwarranted and undesirable.
References
Game theory equilibrium concepts | Quasi-perfect equilibrium | [
"Mathematics"
] | 471 | [
"Game theory",
"Game theory equilibrium concepts"
] |
5,348,493 | https://en.wikipedia.org/wiki/Secondary%20treatment | Secondary treatment (mostly biological wastewater treatment) is the removal of biodegradable organic matter (in solution or suspension) from sewage or similar kinds of wastewater. The aim is to achieve a certain degree of effluent quality in a sewage treatment plant suitable for the intended disposal or reuse option. A "primary treatment" step often precedes secondary treatment, whereby physical phase separation is used to remove settleable solids. During secondary treatment, biological processes are used to remove dissolved and suspended organic matter measured as biochemical oxygen demand (BOD). These processes are performed by microorganisms in a managed aerobic or anaerobic process depending on the treatment technology. Bacteria and protozoa consume biodegradable soluble organic contaminants (e.g. sugars, fats, and organic short-chain carbon molecules from human waste, food waste, soaps and detergent) while reproducing to form cells of biological solids. Secondary treatment is widely used in sewage treatment and is also applicable to many agricultural and industrial wastewaters.
Secondary treatment systems are classified as fixed-film or suspended-growth systems, and as aerobic versus anaerobic. Fixed-film or attached growth systems include trickling filters, constructed wetlands, bio-towers, and rotating biological contactors, where the biomass grows on media and the sewage passes over its surface. The fixed-film principle has further developed into moving bed biofilm reactors (MBBR) and Integrated Fixed-Film Activated Sludge (IFAS) processes. Suspended-growth systems include activated sludge, which is an aerobic treatment system, based on the maintenance and recirculation of a complex biomass composed of micro-organisms (bacteria and protozoa) able to absorb and adsorb the organic matter carried in the wastewater. Constructed wetlands are also being used. An example for an anaerobic secondary treatment system is the upflow anaerobic sludge blanket reactor.
Fixed-film systems are more able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems. Most of the aerobic secondary treatment systems include a secondary clarifier to settle out and separate biological floc or filter material grown in the secondary treatment bioreactor.
Definitions
Primary treatment
Secondary treatment
Primary treatment settling removes about half of the solids and a third of the BOD from raw sewage. Secondary treatment is defined as the "removal of biodegradable organic matter (in solution or suspension) and suspended solids. Disinfection is also typically included in the definition of conventional secondary treatment." Biological nutrient removal is regarded by some sanitary engineers as secondary treatment and by others as tertiary treatment.
After this kind of treatment, the wastewater may be called secondary-treated wastewater.
Tertiary treatment
Process types
Secondary treatment systems are classified as fixed-film or suspended-growth systems. A great number of secondary treatment processes exist; see the List of wastewater treatment technologies. The main ones are explained below.
Fixed film systems
Filter beds (oxidizing beds)
In older plants and those receiving variable loadings, trickling filter beds are used where the settled sewage liquor is spread onto the surface of a bed made up of coke (carbonized coal), limestone chips or specially fabricated plastic media. Such media must have large surface areas to support the biofilms that form. The liquor is typically distributed through perforated spray arms. The distributed liquor trickles through the bed and is collected in drains at the base. These drains also provide a source of air which percolates up through the bed, keeping it aerobic. Biofilms of bacteria, protozoa and fungi form on the media’s surfaces and eat or otherwise reduce the organic content. The filter removes a small percentage of the suspended organic matter, while the majority of the organic matter supports microorganism reproduction and cell growth from the biological oxidation and nitrification taking place in the filter. With this aerobic oxidation and nitrification, the organic solids are converted into biofilm grazed by insect larvae, snails, and worms which help maintain an optimal thickness. Overloading of beds may increase biofilm thickness leading to anaerobic conditions and possible bioclogging of the filter media and ponding on the surface.
Rotating biological contactors
Constructed wetlands
Suspended growth systems
Activated sludge
Activated sludge is a common suspended-growth method of secondary treatment. Activated sludge plants encompass a variety of mechanisms and processes using dissolved oxygen to promote growth of biological floc that substantially removes organic material. Biological floc is an ecosystem of living biota subsisting on nutrients from the inflowing primary clarifier effluent. These mostly carbonaceous dissolved solids undergo aeration to be broken down and either biologically oxidized to carbon dioxide or converted to additional biological floc of reproducing micro-organisms. Nitrogenous dissolved solids (amino acids, ammonia, etc.) are similarly converted to biological floc or oxidized by the floc to nitrites, nitrates, and, in some processes, to nitrogen gas through denitrification. While denitrification is encouraged in some treatment processes, denitrification often impairs the settling of the floc causing poor quality effluent in many suspended aeration plants. Overflow from the activated sludge mixing chamber is sent to a clarifier where the suspended biological floc settles out while the treated water moves into tertiary treatment or disinfection. Settled floc is returned to the mixing basin to continue growing in primary effluent. Like most ecosystems, population changes among activated sludge biota can reduce treatment efficiency. Nocardia, a floating brown foam sometimes misidentified as sewage fungus, is the best known of many different fungi and protists that can overpopulate the floc and cause process upsets. Elevated concentrations of toxic wastes including pesticides, industrial metal plating waste, or extreme pH, can kill the biota of an activated sludge reactor ecosystem.
Sequencing batch reactors
One type of system that combines secondary treatment and settlement is the cyclic activated sludge (CASSBR), or sequencing batch reactor (SBR). Typically, activated sludge is mixed with raw incoming sewage, and then mixed and aerated. The settled sludge is run off and re-aerated before a proportion is returned to the headworks.
The disadvantage of the CASSBR process is that it requires a precise control of timing, mixing and aeration. This precision is typically achieved with computer controls linked to sensors. Such a complex, fragile system is unsuited to places where controls may be unreliable, poorly maintained, or where the power supply may be intermittent.
Package plants
Extended aeration package plants use separate basins for aeration and settling, and are somewhat larger than SBR plants with reduced timing sensitivity.
Membrane bioreactors
Membrane bioreactors (MBR) are activated sludge systems using a membrane liquid-solid phase separation process. The membrane component uses low pressure microfiltration or ultrafiltration membranes and eliminates the need for a secondary clarifier or filtration. The membranes are typically immersed in the aeration tank; however, some applications utilize a separate membrane tank. One of the key benefits of an MBR system is that it effectively overcomes the limitations associated with poor settling of sludge in conventional activated sludge (CAS) processes. The technology permits bioreactor operation with considerably higher mixed liquor suspended solids (MLSS) concentration than CAS systems, which are limited by sludge settling. The process is typically operated at MLSS in the range of 8,000–12,000 mg/L, while CAS are operated in the range of 2,000–3,000 mg/L. The elevated biomass concentration in the MBR process allows for very effective removal of both soluble and particulate biodegradable materials at higher loading rates. Thus increased sludge retention times, usually exceeding 15 days, ensure complete nitrification even in extremely cold weather.
The cost of building and operating an MBR is often higher than conventional methods of sewage treatment. Membrane filters can be blinded with grease or abraded by suspended grit, and they lack a clarifier's flexibility to pass peak flows. However, the technology has become increasingly popular for reliably pretreated waste streams and has gained wider acceptance where infiltration and inflow have been controlled, and life-cycle costs have been steadily decreasing. The small footprint of MBR systems and the high quality effluent produced make them particularly useful for water reuse applications.
Aerobic granulation
Aerobic granular sludge can be formed by applying specific process conditions that favour slow growing organisms such as PAOs (polyphosphate accumulating organisms) and GAOs (glycogen accumulating organisms). Another key part of granulation is selective wasting whereby slow settling floc-like sludge is discharged as waste sludge and faster settling biomass is retained. This process has been commercialized as Nereda process.
Surface-aerated lagoons or ponds
Aerated lagoons are a low technology suspended-growth method of secondary treatment using motor-driven aerators floating on the water surface to increase atmospheric oxygen transfer to the lagoon and to mix the lagoon contents. The floating surface aerators are typically rated to deliver the amount of air equivalent to 1.8 to 2.7 kg O2/kW·h. Aerated lagoons provide less effective mixing than conventional activated sludge systems and do not achieve the same performance level. The basins may range in depth from 1.5 to 5.0 metres. Surface-aerated basins achieve 80 to 90 percent removal of BOD with retention times of 1 to 10 days. Many small municipal sewage systems in the United States (1 million gal./day or less) use aerated lagoons.
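As a rough illustration of how the oxygen-transfer rating quoted above translates into aerator power, consider the back-of-the-envelope Python sketch below. The flow, BOD and oxygen-demand factor are assumed example values, not design guidance.

def aerator_power_kw(flow_m3_per_day, bod_mg_per_l,
                     removal_fraction=0.85,
                     o2_per_kg_bod=1.2,
                     transfer_kg_o2_per_kwh=2.0):
    """Very rough continuous aerator power estimate for an aerated lagoon.
    o2_per_kg_bod is an assumed oxygen demand per kg of BOD removed;
    transfer_kg_o2_per_kwh sits in the 1.8-2.7 kg O2/kWh range quoted above."""
    bod_load_kg_per_day = flow_m3_per_day * bod_mg_per_l / 1000.0   # (mg/L)*(m3/day) gives g/day; /1000 gives kg/day
    o2_kg_per_day = bod_load_kg_per_day * removal_fraction * o2_per_kg_bod
    return o2_kg_per_day / (transfer_kg_o2_per_kwh * 24.0)

# Example: 4,000 m3/day of settled sewage at 200 mg/L BOD
print(f"roughly {aerator_power_kw(4000, 200):.0f} kW of surface aeration")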
Emerging technologies
Biological Aerated (or Anoxic) Filter (BAF) or Biofilters combine filtration with biological carbon reduction, nitrification or denitrification. BAF usually includes a reactor filled with a filter media. The media is either in suspension or supported by a gravel layer at the foot of the filter. The dual purpose of this media is to support the highly active biomass attached to it and to filter suspended solids. Carbon reduction and ammonia conversion occur in aerobic mode and are sometimes achieved in a single reactor, while nitrate conversion occurs in anoxic mode. BAF is operated in either upflow or downflow configuration, depending on the design specified by the manufacturer.
Integrated Fixed-Film Activated Sludge
Moving Bed Biofilm Reactors (MBBRs) typically require a smaller footprint than suspended-growth systems.
Design considerations
The United States Environmental Protection Agency (EPA) defined secondary treatment based on the performance observed at late 20th-century bioreactors treating typical United States municipal sewage. Secondary treated sewage is expected to produce effluent with a monthly average of less than 30 mg/L BOD and less than 30 mg/L suspended solids. Weekly averages may be up to 50 percent higher. A sewage treatment plant providing both primary and secondary treatment is expected to remove at least 85 percent of the BOD and suspended solids from domestic sewage. The EPA regulations describe stabilization ponds as providing treatment equivalent to secondary treatment removing 65 percent of the BOD and suspended solids from incoming sewage and discharging approximately 50 percent higher effluent concentrations than modern bioreactors. The regulations also recognize the difficulty of meeting the specified removal percentages from combined sewers, dilute industrial wastewater, or Infiltration/Inflow.
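A minimal check of a monthly performance report against the secondary-treatment expectations quoted above might look like the following Python sketch; the helper name and sample values are invented.

def meets_secondary_treatment(influent_bod, effluent_bod, effluent_tss):
    """Monthly-average check against the expectations described above:
    effluent BOD and suspended solids at or below 30 mg/L and at least
    85 percent BOD removal relative to the influent."""
    removal = 1.0 - effluent_bod / influent_bod
    return effluent_bod <= 30 and effluent_tss <= 30 and removal >= 0.85

print(meets_secondary_treatment(influent_bod=220, effluent_bod=20, effluent_tss=25))   # True
print(meets_secondary_treatment(influent_bod=120, effluent_bod=25, effluent_tss=25))   # False: only ~79% removal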
Process upsets
Process upsets are temporary decreases in treatment plant performance caused by significant population change within the secondary treatment ecosystem. Conditions likely to create upsets include toxic chemicals and unusually high or low concentrations of organic waste BOD providing food for the bioreactor ecosystem.
Measures creating uniform wastewater loadings tend to reduce the probability of upsets. Fixed-film or attached growth secondary treatment bioreactors are similar to a plug flow reactor model circulating water over surfaces colonized by biofilm, while suspended-growth bioreactors resemble a continuous stirred-tank reactor keeping microorganisms suspended while water is being treated. Secondary treatment bioreactors may be followed by a physical phase separation to remove biological solids from the treated water. Upset duration of fixed film secondary treatment systems may be longer because of the time required to recolonize the treatment surfaces. Suspended growth ecosystems may be restored from a population reservoir. Activated sludge recycle systems provide an integrated reservoir if upset conditions are detected in time for corrective action. Sludge recycle may be temporarily turned off to prevent sludge washout during peak storm flows when dilution keeps BOD concentrations low. Suspended growth activated sludge systems can be operated in a smaller space than fixed-film trickling filter systems that treat the same amount of water; but fixed-film systems are better able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems.
Wastewater flow variations may be reduced by limiting stormwater collection by the sewer system, and by requiring industrial facilities to discharge batch process wastes to the sewer over a time interval rather than immediately after creation. Discharge of appropriate organic industrial wastes may be timed to sustain the secondary treatment ecosystem through periods of low residential waste flow. Sewage treatment systems experiencing holiday waste load fluctuations may provide alternative food to sustain secondary treatment ecosystems through periods of reduced use. Small facilities may prepare a solution of soluble sugars. Others may find compatible agricultural wastes, or offer disposal incentives to septic tank pumpers during low use periods.
Toxicity
Waste containing biocide concentrations exceeding the secondary treatment ecosystem tolerance level may kill a major fraction of one or more important ecosystem species. BOD reduction normally accomplished by that species temporarily ceases until other species reach a suitable population to utilize that food source, or the original population recovers as biocide concentrations decline.
Dilution
Waste containing unusually low BOD concentrations may fail to sustain the secondary treatment population required for normal waste concentrations. The reduced population surviving the starvation event may be unable to completely utilize available BOD when waste loads return to normal. Dilution may be caused by addition of large volumes of relatively uncontaminated water such as stormwater runoff into a combined sewer. Smaller sewage treatment plants may experience dilution from cooling water discharges, major plumbing leaks, firefighting, or draining large swimming pools.
A similar problem occurs as BOD concentrations drop when low flow increases waste residence time within the secondary treatment bioreactor. Secondary treatment ecosystems of college communities acclimated to waste loading fluctuations from student work/sleep cycles may have difficulty surviving school vacations. Secondary treatment systems accustomed to routine production cycles of industrial facilities may have difficulty surviving industrial plant shutdown. Populations of species feeding on incoming waste initially decline as concentration of those food sources decrease. Population decline continues as ecosystem predator populations compete for a declining population of lower trophic level organisms.
Peak waste load
High BOD concentrations initially exceed the ability of the secondary treatment ecosystem to utilize available food. Ecosystem populations of aerobic organisms increase until oxygen transfer limitations of the secondary treatment bioreactor are reached. Secondary treatment ecosystem populations may shift toward species with lower oxygen requirements, but failure of those species to use some food sources may produce higher effluent BOD concentrations. More extreme increases in BOD concentrations may drop oxygen concentrations before the secondary treatment ecosystem population can adjust, and cause an abrupt population decrease among important species. Normal BOD removal efficiency will not be restored until populations of aerobic species recover after oxygen concentrations rise to normal.
Temperature
Biological oxidation processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions increase with temperature. Most surface aerated vessels operate at between 4 °C and 32 °C.
See also
List of wastewater treatment technologies
Sanitation
References
Sources
Aquatic ecology
Environmental engineering
Pollution control technologies
Sewerage
Sanitation
Sewerage infrastructure | Secondary treatment | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 3,252 | [
"Water treatment",
"Chemical engineering",
"Sewerage infrastructure",
"Pollution control technologies",
"Water pollution",
"Sewerage",
"Civil engineering",
"Ecosystems",
"Environmental engineering",
"Aquatic ecology"
] |
5,349,853 | https://en.wikipedia.org/wiki/Gibbs%20isotherm | The Gibbs adsorption isotherm for multicomponent systems is an equation used to relate changes in the concentration of a component in contact with a surface to changes in the surface tension, which result in a corresponding change in surface energy. For a binary system, the Gibbs adsorption equation in terms of surface excess is
dγ = −Γ1 dμ1 − Γ2 dμ2
where
γ is the surface tension,
Γi is the surface excess concentration of component i,
μi is the chemical potential of component i.
Adsorption
Different influences at the interface may cause changes in the composition of the near-surface layer. Substances may either accumulate near the surface or, conversely, move into the bulk. The movement of the molecules characterizes the phenomena of adsorption. Adsorption influences changes in surface tension and colloid stability. Adsorption layers at the surface of a liquid dispersion medium may affect the interactions of the dispersed particles in the media, and consequently these layers may play a crucial role in colloid stability. The adsorption of molecules of a liquid phase at an interface occurs when this liquid phase is in contact with other immiscible phases that may be gas, liquid, or solid.
Conceptual explanation of equation
Surface tension describes how difficult it is to extend the area of a surface (by stretching or distorting it). If surface tension is high, there is a large free energy required to increase the surface area, so the surface will tend to contract and hold together like a rubber sheet.
There are various factors affecting surface tension, one of which is that the composition of the surface may be different from the bulk. For example, if water is mixed with a tiny amount of surfactants (for example, hand soap), the bulk water may be 99% water molecules and 1% soap molecules, but the topmost surface of the water may be 50% water molecules and 50% soap molecules. In this case, the soap has a large and positive "surface excess". In other examples, the surface excess may be negative: For example, if water is mixed with an inorganic salt like sodium chloride, the surface of the water is on average less salty and more pure than the bulk average.
Consider again the example of water with a bit of soap. Since the water surface needs to have higher concentration of soap than the bulk, whenever the water's surface area is increased, it is necessary to remove soap molecules from the bulk and add them to the new surface. If the concentration of soap is increased a bit, the soap molecules are more readily available (they have higher chemical potential), so it is easier to pull them from the bulk in order to create the new surface. Since it is easier to create new surface, the surface tension is lowered. The general principle is:
When the surface excess of a component is positive, increasing the chemical potential of that component reduces the surface tension.
Next consider the example of water with salt. The water surface is less salty than bulk, so whenever the water's surface area is increased, it is necessary to remove salt molecules from the new surface and push them into bulk. If the concentration of salt is increased a bit (raising the salt's chemical potential), it becomes harder to push away the salt molecules. Since it is now harder to create the new surface, the surface tension is higher. The general principle is:
When the surface excess of a component is negative, increasing the chemical potential of that component increases the surface tension.
The Gibbs isotherm equation gives the exact quantitative relationship for these trends.
Location of surface and defining surface excess
Location of surface
In the presence of two phases (α and β), the surface (surface phase) is located in between the α phase and the β phase. Experimentally, it is difficult to determine the exact structure of an inhomogeneous surface phase that is in contact with a bulk liquid phase containing more than one solute. Inhomogeneity of the surface phase is a result of variation of mole ratios. A model proposed by Josiah Willard Gibbs treats the surface phase as an idealized phase of zero thickness. In reality, although the compositions of the bulk α and β regions are constant, the concentrations of the components in the interfacial region gradually vary from the bulk concentration of α to the bulk concentration of β over a distance x. This is in contrast to the idealized Gibbs model, where the distance x takes on the value of zero. The diagram to the right illustrates the differences between the real and idealized models.
Definition of surface excess
In the idealized model, the chemical components of the α and β bulk phases remain unchanged except when approaching the dividing surface. The total moles of any component (for example, water or ethylene glycol) remain constant in the bulk phases but vary in the surface phase for the real-system model, as shown below.
In the real system, however, the total moles of a component vary depending on the arbitrary placement of the dividing surface. The quantitative measure of adsorption of the i-th component is captured by the surface excess quantity. The surface excess represents the difference between the total moles of the i-th component in a system and the moles of the i-th component in a particular phase (either α or β) and is represented by:
Γi = (ni − ni^α − ni^β) / A
where Γi is the surface excess of the i-th component, the n terms are the moles, α and β are the phases, and A is the area of the dividing surface.
Γi represents the excess of solute per unit area of the surface over what would be present if the bulk concentration prevailed all the way to the surface; it can be positive, negative, or zero. It has units of mol/m2.
Relative surface excess
Relative surface excess quantities are more useful than arbitrary surface excess quantities. The relative surface excess relates the adsorption at the interface to a solvent in the bulk phase. An advantage of using the relative surface excess quantities is that they do not depend on the location of the dividing surface. The relative surface excess of species i and solvent 1 is therefore:
The Gibbs adsorption isotherm equation
Derivation of the Gibbs adsorption equation
For a two-phase system consisting of the α and β phases in equilibrium with a surface σ dividing the phases, the total Gibbs free energy of the system can be written as:
G = G^α + G^β + G^σ
where G is the Gibbs free energy.
The equation of the Gibbs adsorption isotherm can be derived from the “particularization to the thermodynamics of the Euler theorem on homogeneous first-order forms.” The Gibbs free energy of the α phase, the β phase, and the surface phase can each be represented by an equation of the form:
where U is the internal energy, p is the pressure, V is the volume, T is the temperature, S is the entropy, and μi is the chemical potential of the i-th component.
By taking the total derivative of the Euler form of the Gibbs equation for the α phase, the β phase and the surface phase:
where A is the area of the dividing surface, and γ is the surface tension.
For reversible processes, the first law of thermodynamics requires that:
dU = δq − δw
where δq is the heat energy added to the system and δw is the work done by the system.
Substituting the above equation into the total derivative of the Gibbs energy equation, and equating the result to the non-pressure–volume work when surface energy is considered:
by utilizing the fundamental equation of the Gibbs energy of a multicomponent system:
dG = −S dT + V dp + Σi μi dni
The equation relating the α phase, the β phase and the surface phase becomes:
When considering the bulk phases (α phase and β phase) at equilibrium at constant temperature and pressure, the Gibbs–Duhem equation requires that:
Σi ni dμi = 0
The resulting equation is the Gibbs adsorption isotherm equation:
dγ = −Σi Γi dμi
The Gibbs adsorption isotherm is an equation which could be considered an adsorption isotherm that connects surface tension of a solution with the concentration of the solute.
For a binary system containing two components, the Gibbs adsorption equation in terms of surface excess is:
dγ = −Γ1 dμ1 − Γ2 dμ2
Relation between surface tension and the surface excess concentration
The chemical potential of species i in solution, μi, depends on its activity ai according to the following equation:
μi = μi° + RT ln(ai)
where μi° is the chemical potential of the i-th component at a reference state, R is the gas constant and T is the temperature.
Differentiation of the chemical potential equation results in:
dμi = RT d ln(ai) = RT d ln(fi Ci)
where fi is the activity coefficient of component i, and Ci is the concentration of species i in the bulk phase.
If the solutions in the α and β phases are dilute (rich in one particular component), then the activity coefficient of the component approaches unity and the Gibbs isotherm becomes:
Γi = −(1/RT) (∂γ / ∂ ln Ci)
The above equation assumes the interface to be bidimensional, which is not always true. Further models, such as Guggenheim's, correct this flaw.
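In practice, the dilute-solution form above is applied by measuring surface tension at several bulk concentrations and taking the slope with respect to ln C. A minimal numerical sketch in Python (the data points are invented for illustration, and NumPy is assumed to be available):

import numpy as np

R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K

# Invented surface-tension data for a dilute aqueous solution below the CMC
c = np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3])            # bulk concentration, mol/L
gamma = np.array([68e-3, 65e-3, 61e-3, 58e-3, 55e-3])   # surface tension, N/m

# Dilute, non-ionic form of the Gibbs isotherm: Gamma = -(1/RT) * d(gamma)/d(ln c)
slope = np.polyfit(np.log(c), gamma, 1)[0]    # d(gamma)/d(ln c), N/m
surface_excess = -slope / (R * T)             # mol/m^2
print(f"surface excess ~ {surface_excess:.2e} mol/m^2")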
Ionic dissociation effects
Gibbs equation for electrolyte adsorption
Consider a system composed of water that contains an organic electrolyte RNaz and an inorganic electrolyte NaCl, both of which dissociate completely such that:
RNaz → R^z− + z Na+
NaCl → Na+ + Cl−
The Gibbs Adsorption equation in terms of the relative surface excess becomes:
The Relation Between Surface Tension and The Surface Excess Concentration becomes:
where is the coefficient of the Gibbs adsorption. Values of are calculated using the Double layer (interfacial) models of Helmholtz, Gouy, and Stern.
Substances can have different effects on surface tension, as shown below:
No effect, for example sugar
Increase of surface tension, inorganic salts
Decrease surface tension progressively, alcohols
Decrease surface tension and, once a minimum is reached, no more effect: surfactants
Therefore, the Gibbs isotherm predicts that inorganic salts have negative surface concentrations. However, this view has been challenged extensively in recent years due to a combination of more precise interfacially sensitive experiments and theoretical models, both of which predict an increase in surface propensity of the halides with increasing size and polarizability. As such, surface tension is not a reliable method for determining the relative propensity of ions toward the air-water interface.
A method for determining surface concentrations is needed in order to prove the validity of the model: two different techniques are normally used: ellipsometry and following the decay of 14C present in the surfactant molecules.
Gibbs isotherm for ionic surfactants
Ionic surfactants require special considerations, as they are electrolytes:
In the absence of extra electrolytes:
Γ = −(1 / 2RT) (∂γ / ∂ ln C)
where Γ refers to the surface concentration of surfactant molecules (without considering the counter-ion) and C is the bulk surfactant concentration.
In the presence of added electrolytes:
Γ = −(1 / RT) (∂γ / ∂ ln C)
Experimental methods
The extent of adsorption at a liquid interface can be evaluated using surface tension–concentration data and the Gibbs adsorption equation. The microtome blade method is used to determine the weight and molal concentration of an interface. The method involves collecting a one-square-metre portion of the air–liquid interface of binary solutions using a microtome blade.
Another method that is used to determine the extent of adsorption at an air-water interface is the emulsion technique, which can be used to estimate the relative surface excess with respect to water.
Additionally, the Gibbs surface excess of a surface active component for an aqueous solution can be found using the radioactive tracer method. The surface active component is usually labeled with carbon-14 or sulfur-35.
References
Surface science | Gibbs isotherm | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,238 | [
"Condensed matter physics",
"Surface science"
] |
5,350,008 | https://en.wikipedia.org/wiki/Steroidogenic%20acute%20regulatory%20protein | The steroidogenic acute regulatory protein, commonly referred to as StAR (STARD1), is a transport protein that regulates cholesterol transfer within the mitochondria, which is the rate-limiting step in the production of steroid hormones. It is primarily present in steroid-producing cells, including theca cells and luteal cells in the ovary, Leydig cells in the testis and cell types in the adrenal cortex.
Function
Cholesterol needs to be transferred from the outer mitochondrial membrane to the inner membrane where cytochrome P450scc enzyme (CYP11A1) cleaves the cholesterol side chain, which is the first enzymatic step in all steroid synthesis. The aqueous phase between these two membranes cannot be crossed by the lipophilic cholesterol, unless certain proteins assist in this process. A number of proteins have historically been proposed to facilitate this transfer including: sterol carrier protein 2 (SCP2), steroidogenic activator polypeptide (SAP), peripheral benzodiazepine receptor (PBR or translocator protein, TSPO), and StAR. It is now clear that this process is primarily mediated by the action of StAR.
The mechanism by which StAR causes cholesterol movement remains unclear as it appears to act from the outside of the mitochondria and its entry into the mitochondria ends its function. Various hypotheses have been advanced. Some involve StAR transferring cholesterol itself like a shuttle. While StAR may bind cholesterol itself, the exorbitant number of cholesterol molecules that the protein transfers would indicate that it would have to act as a cholesterol channel instead of a shuttle. Another notion is that it causes cholesterol to be kicked out of the outer membrane to the inner (cholesterol desorption). StAR may also promote the formation of contact sites between the outer and inner mitochondrial membranes to allow cholesterol influx. Another suggests that StAR acts in conjunction with PBR, causing the movement of Cl− out of the mitochondria to facilitate contact site formation. However, evidence for an interaction between StAR and PBR remains elusive.
Structure
In humans, the gene for StAR is located on chromosome 8p11.23 and the protein has 285 amino acids. The signal sequence of StAR that targets it to the mitochondria is clipped off in two steps with import into the mitochondria. Phosphorylation at the serine at position 195 increases its activity.
The domain of StAR important for promoting cholesterol transfer is the StAR-related transfer domain (START domain). StAR is the prototypic member of the START domain family of proteins and is thus also known as STARD1 for "START domain-containing protein 1". It is hypothesized that the START domain forms a pocket in StAR that binds single cholesterol molecules for delivery to P450scc.
The closest homolog to StAR is MLN64 (STARD3). Together they comprise the StarD1/D3 subfamily of START domain-containing proteins.
Production
StAR is a mitochondrial protein that is rapidly synthesized in response to stimulation of the cell to produce steroid. Hormones that stimulate its production depend on the cell type and include luteinizing hormone (LH), ACTH and angiotensin II.
At the cellular level, StAR is synthesized typically in response to activation of the cAMP second messenger system, although other systems can be involved even independently of cAMP.
StAR has thus far been found in all tissues that can produce steroids, including the adrenal cortex, the gonads, the brain and the nonhuman placenta. One known exception is the human placenta.
Substances that suppress StAR activity, like those listed below, can cause endocrine disrupting effects, including altered steroid hormone levels and fertility.
Alcohol
DEHP and DBP
Permethrin and cypermethrin
DES and arsenite
BPA
Pathology
Mutations in the gene for StAR cause lipoid congenital adrenal hyperplasia (lipoid CAH), in which patients produce little steroid and can die shortly after birth. Mutations that less severely affect the function of StAR result in nonclassic lipoid CAH or familial glucocorticoid deficiency type 3.
All known mutations disrupt StAR function by altering its START domain. In the case of StAR mutation, the phenotype does not present until birth since human placental steroidogenesis is independent of StAR.
At the cellular level, the lack of StAR results in a pathologic accumulation of lipid within cells, especially noticeable in the adrenal cortex as seen in the mouse model. The testes are undescended and the resident steroidogenic Leydig cells are modestly affected. Early in life, the ovary is spared as it does not express StAR until puberty. After puberty, lipid accumulations and hallmarks of ovarian failure are noted.
StAR-independent steroidogenesis
While loss of functional StAR in the human and the mouse catastrophically reduces steroid production, it does not eliminate all of it, indicating the existence of StAR-independent pathways for steroid generation. Aside from the human placenta, these pathways are considered minor for endocrine production.
It is unclear what factors catalyze StAR-independent steroidogenesis. Candidates include oxysterols which can be freely converted to steroid and the ubiquitous MLN64.
New roles
Recent findings suggest that StAR may also traffic cholesterol to a second mitochondrial enzyme, sterol 27-hydroxylase. This enzyme converts cholesterol to 27-hydroxycholesterol. In this way it may be important for the first step in one of the two pathways for the production of bile acids by the liver (the alternative pathway).
Evidence also shows the presence of StAR in a type of immune cell, the macrophage, where it can stimulate the production of 27-hydroxycholesterol. In this case, 27-hydroxycholesterol may by itself be helpful against the production of inflammatory factors associated with cardiovascular disease. It is important to note that no study has yet found a link between the loss of StAR and problems in bile acid production or increased risk for cardiovascular disease.
Recently StAR was found to be expressed in cardiac fibroblasts in response to ischemic injury due to myocardial infarction. In these cells it has no apparent de novo steroidogenic activity, as evidenced by the lack of the key steroidogenic enzymes cytochrome P450 side chain cleavage (CYP11A1) and 3 beta hydroxysteroid dehydrogenase (3βHSD). StAR was found to have an anti-apoptotic effect on the fibroblasts, which may allow them to survive the initial stress of the infarct, differentiate and function in tissue repair at the infarction site.
History
The StAR protein was first identified, characterized and named by Douglas Stocco at Texas Tech University Health Sciences Center in 1994. The role of this protein in lipoid CAH was confirmed the following year in collaboration with Walter Miller at the University of California, San Francisco. All of this work follows the initial observations of the appearance of this protein and its phosphorylated form coincident with factors that caused steroid production by Nanette Orme-Johnson while at Tufts University.
See also
Steroidogenic enzyme
References
External links
Water-soluble transporters
Peripheral membrane proteins
Steroid hormone biosynthesis | Steroidogenic acute regulatory protein | [
"Chemistry",
"Biology"
] | 1,567 | [
"Steroid hormone biosynthesis",
"Biosynthesis"
] |
5,350,337 | https://en.wikipedia.org/wiki/Iron%20pipe%20size | Iron Pipe Size (IPS or I.P.S.) is a pipe sizing system based on the inside diameter (ID) of the pipe. It was widely used from the early 19th century to the mid-20th century and is still in use by some industries, including major PVC pipe manufacturers, as well as for some legacy drawings and equipment.
The iron pipe size standard came into being early in the 19th century and remained in effect until after World War II. The IPS system was primarily used in the United States and the United Kingdom. In the 1920s, the Copper Tube Size (CTS) standard was combined with the IPS standard.
During the IPS period, pipes were cast in halves and welded together, and pipe sizes referred to the inside diameters. The inside diameters under IPS were roughly the same as the more modern Ductile Iron Pipe Standard (DIPS) and Nominal Pipe Size (NPS) Standards, and some of the wall thicknesses were also retained with a different designator. In 1948, the DIPS came into effect, when greater control of a pipe's wall thickness was possible.
CTS diameter always specifies the outside diameter (OD) of a tube, whereas pipe size designations only approximate the pipe inside diameter (ID), and then only for sizes of 12 inches or less with standard (STD) wall thickness.
The IPS number (reference to an OD) is the same as the NPS number, but the schedules were limited to Standard Wall (STD), Extra Strong, (XS) and Double Extra Strong (XXS). STD is identical to Schedule 40 for NPS 1/8 to NPS 10, inclusive, and indicates .375" wall thickness for NPS 12 and larger. XS is identical to SCH 80 for NPS 1/8 to NPS 8, inclusive, and indicates .500" wall thickness for NPS 8 and larger. Different definitions exist for XXS, but it is generally thicker than schedule 160.
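The correspondences stated above can be collected into a small lookup. The Python sketch below uses only the equivalences given in this article; the function name is arbitrary and sizes outside the stated ranges are not handled.

def ips_wall_equivalent(nps, wall_class):
    """Map a legacy IPS wall class at a given NPS to the modern designation
    described in the text above."""
    if wall_class == "STD":
        return "Schedule 40" if nps <= 10 else '0.375 in wall'
    if wall_class == "XS":
        return "Schedule 80" if nps <= 8 else '0.500 in wall'
    if wall_class == "XXS":
        return "thicker than Schedule 160 (definitions vary)"
    raise ValueError("unknown IPS wall class")

print(ips_wall_equivalent(6, "STD"))    # Schedule 40
print(ips_wall_equivalent(14, "XS"))    # 0.500 in wall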
See also
Nominal Pipe Sizes
Pipe sizes
Standard dimension ratio
References
Mechanical standards
Piping | Iron pipe size | [
"Chemistry",
"Engineering"
] | 421 | [
"Mechanical standards",
"Building engineering",
"Chemical engineering",
"Mechanical engineering",
"Piping"
] |
5,351,223 | https://en.wikipedia.org/wiki/Pinch%20%28plasma%20physics%29 | A pinch (or: Bennett pinch (after Willard Harrison Bennett), electromagnetic pinch, magnetic pinch, pinch effect, or plasma pinch) is the compression of an electrically conducting filament by magnetic forces, or a device that does so. The conductor is usually a plasma, but could also be a solid or liquid metal. Pinches were the first type of device used for experiments in controlled nuclear fusion power.
Pinches occur naturally in electrical discharges such as lightning bolts, planetary auroras, current sheets, and solar flares.
Basic mechanism
Types
Pinches exist in nature and in laboratories. Pinches differ in their geometry and operating forces. These include:
Uncontrolled – Any time an electric current moves in large amounts (e.g., lightning, arcs, sparks, discharges) a magnetic force can pull together plasma. This can be insufficient for fusion.
Sheet pinch – An astrophysical effect, this arises from vast sheets of charged particles.
Z-pinch – The current runs down the axis, or walls, of a cylinder while the magnetic field is azimuthal
Theta pinch – The magnetic field runs down the axis of a cylinder, while the electric field is in the azimuthal direction (also called a thetatron)
Screw pinch – A combination of a Z-pinch and theta pinch (also called a stabilized Z-pinch, or θ-Z pinch)
Reversed field pinch or toroidal pinch – This is a Z-pinch arranged in the shape of a torus. The plasma has an internal magnetic field. As distance increases from the center of this ring, the magnetic field reverses direction.
Inverse pinch – An early fusion concept, this device consisted of a rod surrounded by plasma. Current traveled through the plasma and returned along the center rod. This geometry was slightly different than a z-pinch in that the conductor was in the center, not the sides.
Cylindrical pinch
Orthogonal pinch effect
Ware pinch – A pinch that occurs inside a Tokamak plasma, when particles inside the banana orbit condense together.
Magnetized liner inertial fusion (MagLIF) – A Z-pinch of preheated, premagnetized fuel inside a metal liner, which could lead to ignition and practical fusion energy with a larger pulsed-power driver.
Common behavior
Pinches may become unstable. They radiate energy across the whole electromagnetic spectrum including radio waves, microwaves, infrared, x-rays, gamma rays, synchrotron radiation, and visible light. They also produce neutrons, as a product of fusion.
Applications and devices
Pinches are used to generate X-rays and the intense magnetic fields generated are used in electromagnetic forming of metals. They also have applications in particle beams including particle beam weapons, astrophysics studies and it has been proposed to use them in space propulsion. A number of large pinch machines have been built to study fusion power; here are several:
MAGPIE A Z-pinch at Imperial College. This dumps a large amount of current across a wire. Under these conditions, the wire becomes plasma and compresses to produce fusion.
Z Pulsed Power Facility at Sandia National Laboratories.
ZETA device in Culham, England
Madison Symmetric Torus at the University of Wisconsin, Madison
Reversed-Field eXperiment in Italy.
Dense plasma focus in New Jersey
University of Nevada, Reno (USA)
Cornell University (USA)
University of Michigan (USA)
University of California, San Diego (USA)
University of Washington (USA)
Ruhr University (Germany)
École Polytechnique (France)
Weizmann Institute of Science (Israel)
Universidad Autónoma Metropolitana (Mexico).
Zap Energy Inc. (USA)
Crushing cans with the pinch effect
Many high-voltage electronics enthusiasts make their own crude electromagnetic forming devices. They use pulsed power techniques to produce a theta pinch able to crush an aluminium soft drink can using the Lorentz forces created when large currents are induced in the can by the strong magnetic field of the primary coil.
An electromagnetic aluminium can crusher consists of four main components: a high-voltage DC power supply, which provides a source of electrical energy, a large energy discharge capacitor to accumulate the electrical energy, a high voltage switch or spark gap, and a robust coil (capable of surviving high magnetic pressure) through which the stored electrical energy can be quickly discharged in order to generate a correspondingly strong pinching magnetic field (see diagram below).
In practice, such a device is somewhat more sophisticated than the schematic diagram suggests, including electrical components that control the current in order to maximize the resulting pinch, and to ensure that the device works safely. For more details, see the notes.
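An order-of-magnitude feel for the pinch produced by such a device can be obtained from the capacitor discharge. The Python sketch below is a back-of-the-envelope estimate only: the component values are assumed and typical of hobbyist builds rather than taken from any particular design, circuit resistance and losses are ignored, and the long-solenoid formula is a crude approximation for a short work coil.

import math

MU0 = 4e-7 * math.pi         # vacuum permeability, H/m

# Assumed component values (illustrative only)
V = 5_000.0                  # charging voltage, V
C = 500e-6                   # capacitor bank, F
L = 1e-6                     # work coil plus wiring inductance, H
turns_per_m = 3 / 0.05       # 3-turn work coil about 5 cm long

i_peak = V * math.sqrt(C / L)              # peak current of an undamped LC discharge, A
b_peak = MU0 * turns_per_m * i_peak        # long-solenoid estimate of the axial field, T
p_mag = b_peak ** 2 / (2 * MU0)            # magnetic pressure on the can wall, Pa

print(f"peak current      ~ {i_peak / 1e3:.0f} kA")
print(f"peak field        ~ {b_peak:.1f} T")
print(f"magnetic pressure ~ {p_mag / 1e6:.0f} MPa")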
History
The first creation of a Z-pinch in the laboratory may have occurred in 1790 in Holland when Martinus van Marum created an explosion by discharging 100 Leyden jars into a wire. The phenomenon was not understood until 1905, when Pollock and Barraclough investigated a compressed and distorted length of copper tube from a lightning rod after it had been struck by lightning. Their analysis showed that the forces due to the interaction of the large current flow with its own magnetic field could have caused the compression and distortion. A similar, and apparently independent, theoretical analysis of the pinch effect in liquid metals was published by Northrup in 1907. The next major development was the publication in 1934 of an analysis of the radial pressure balance in a static Z-pinch by Bennett (see the following section for details).
Thereafter, the experimental and theoretical progress on pinches was driven by fusion power research. In their article on the "Wire-array Z-pinch: a powerful x-ray source for ICF", M G Haines et al., wrote on the "Early history of Z-pinches".
In 1946 Thompson and Blackman submitted a patent for a fusion reactor based on a toroidal Z-pinch with an additional vertical magnetic field. But in 1954 Kruskal and Schwarzschild published their theory of MHD instabilities in a Z-pinch. In 1956, Kurchatov gave his famous Harwell lecture showing nonthermal neutrons and the presence of m = 0 and m = 1 instabilities in a deuterium pinch. In 1957 Pease and Braginskii independently predicted radiative collapse in a Z-pinch under pressure balance when in hydrogen the current exceeds 1.4 MA. (The viscous rather than resistive dissipation of magnetic energy discussed above would, however, prevent radiative collapse.)
In 1958, the world's first controlled thermonuclear fusion experiment was accomplished using a theta-pinch machine named Scylla I at the Los Alamos National Laboratory. A cylinder full of deuterium was converted into a plasma and compressed to 15 million degrees Celsius under a theta-pinch effect. Lastly, at Imperial College in 1960, led by R Latham, the Plateau–Rayleigh instability was shown, and its growth rate measured in a dynamic Z-pinch.
Equilibrium analysis
One dimension
In plasma physics three pinch geometries are commonly studied: the θ-pinch, the Z-pinch, and the screw pinch. These are cylindrically shaped. The cylinder is symmetric in the axial (z) direction and the azimuthal (θ) directions. The one-dimensional pinches are named for the direction the current travels.
The θ-pinch
The θ-pinch has a magnetic field directed in the z direction and a large diamagnetic current directed in the θ direction. Using Ampère's circuital law (discarding the displacement term):
∇ × B = μ0 J
Since B is only a function of r, this simplifies to:
Jθ = −(1/μ0) dBz/dr
So J points in the θ direction.
Thus, the equilibrium condition (∇p = J × B) for the θ-pinch reads:
d/dr (p + Bz²/2μ0) = 0
θ-pinches tend to be resistant to plasma instabilities; This is due in part to Alfvén's theorem (also known as the frozen-in flux theorem).
The Z-pinch
The Z-pinch has a magnetic field in the θ direction and a current J flowing in the z direction. Again, by the magnetostatic Ampère's law, ∇ × B = μ0 J, which for B = Bθ(r) in the θ direction gives:
Jz = (1/μ0) (1/r) d(r Bθ)/dr
Thus, the equilibrium condition, ∇p = J × B, for the Z-pinch reads:
d/dr (p + Bθ²/2μ0) + Bθ²/(μ0 r) = 0
Although Z-pinches satisfy the MHD equilibrium condition, it is important to note that this is an unstable equilibrium, resulting in various instabilities such as the m = 0 instability ('sausage'), the m = 1 instability ('kink'), and various other higher-order instabilities.
The screw pinch
The screw pinch is an effort to combine the stability aspects of the θ-pinch and the confinement aspects of the Z-pinch. Referring once again to Ampère's law:
∇ × B = μ0 J
But this time, the B field has a θ component and a z component:
B = (0, Bθ(r), Bz(r)) in cylindrical (r, θ, z) coordinates
So this time J has a component in the z direction and a component in the θ direction:
μ0 J = (0, −dBz/dr, (1/r) d(r Bθ)/dr)
Finally, the equilibrium condition (∇p = J × B) for the screw pinch reads:
d/dr (p + Bθ²/2μ0 + Bz²/2μ0) + Bθ²/(μ0 r) = 0
The screw pinch via colliding optical vortices
The screw pinch might be produced in laser plasma by colliding optical vortices of ultrashort duration. For this purpose optical vortices should be phase-conjugated.
The magnetic field distribution is given here again via Ampère's law:
Two dimensions
A common problem with one-dimensional pinches is the end losses. Most of the motion of particles is along the magnetic field. With the θ-pinch and the screw-pinch, this leads particles out of the end of the machine very quickly, leading to a loss of mass and energy. Along with this problem, the Z-pinch has major stability problems. Though particles can be reflected to some extent with magnetic mirrors, even these allow many particles to pass. A common method of beating these end losses, is to bend the cylinder around into a torus. Unfortunately this breaks θ symmetry, as paths on the inner portion (inboard side) of the torus are shorter than similar paths on the outer portion (outboard side). Thus, a new theory is needed. This gives rise to the famous Grad–Shafranov equation. Numerical solutions to the Grad–Shafranov equation have also yielded some equilibria, most notably that of the reversed field pinch.
Three dimensions
To date, there is no coherent analytical theory for three-dimensional equilibria. The general approach to finding such equilibria is to solve the vacuum ideal MHD equations. Numerical solutions have yielded designs for stellarators. Some machines take advantage of simplification techniques such as helical symmetry (for example the University of Wisconsin's Helically Symmetric eXperiment). However, for an arbitrary three-dimensional configuration, an equilibrium relation similar to that of the 1-D configurations exists:
∇⊥(p + B²/2μ0) = (B²/μ0) κ
Where κ is the curvature vector defined as:
κ = (b · ∇) b
with b the unit vector tangent to B.
Formal treatment
The Bennett relation
Consider a cylindrical column of fully ionized quasineutral plasma, with an axial electric field, producing an axial current density, j, and associated azimuthal magnetic field, B. As the current flows through its own magnetic field, a pinch is generated with an inward radial force density of j × B. In a steady state with forces balancing:
∇p = ∇(pe + pi) = j × B
where ∇p is the pressure gradient, and pe and pi are the electron and ion pressures, respectively. Then using Maxwell's equation ∇ × B = μ0 j and the ideal gas law p = n k T, we derive:
(μ0 / 8π) I² = N k (Te + Ti)    (the Bennett relation)
where N is the number of electrons per unit length along the axis, Te and Ti are the electron and ion temperatures, I is the total beam current, and k is the Boltzmann constant.
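Numerically, the Bennett relation fixes the current needed to confine a given line density at a given temperature. A short Python sketch (the parameter values are illustrative only):

import math

MU0 = 4e-7 * math.pi       # vacuum permeability, H/m
K_B = 1.380649e-23         # Boltzmann constant, J/K

def bennett_current(n_per_length, t_e, t_i):
    """Current (A) satisfying the Bennett relation
    (mu0 / 8 pi) I^2 = N k (Te + Ti) for a Z-pinch in radial pressure balance."""
    return math.sqrt(8 * math.pi * n_per_length * K_B * (t_e + t_i) / MU0)

# Example: 1e18 electrons per metre of column with Te = Ti ~ 1 keV (about 1.16e7 K)
print(f"required current ~ {bennett_current(1e18, 1.16e7, 1.16e7) / 1e3:.0f} kA")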
The generalized Bennett relation
The generalized Bennett relation considers a current-carrying magnetic-field-aligned cylindrical plasma pinch undergoing rotation at angular frequency ω. Along the axis of the plasma cylinder flows a current density jz, resulting in an azimuthal magnetic field Βφ. Originally derived by Witalis, the generalized Bennett relation results in:
where a current-carrying, magnetic-field-aligned cylindrical plasma has a radius a,
J0 is the total moment of inertia with respect to the z axis,
W⊥kin is the kinetic energy per unit length due to beam motion transverse to the beam axis
WBz is the self-consistent Bz energy per unit length
WEz is the self-consistent Ez energy per unit length
Wk is thermokinetic energy per unit length
I(a) is the axial current inside the radius a (r in diagram)
N(a) is the total number of particles per unit length
Er is the radial electric field
Eφ is the rotational electric field
The positive terms in the equation are expansional forces while the negative terms represent beam compressional forces.
The Carlqvist relation
The Carlqvist relation, published by Per Carlqvist in 1988, is a specialization of the generalized Bennett relation (above), for the case that the kinetic pressure is much smaller at the border of the pinch than in the inner parts. It takes the form
and is applicable to many space plasmas.
The Carlqvist relation can be illustrated (see right), showing the total current (I) versus the number of particles per unit length (N) in a Bennett pinch. The chart illustrates four physically distinct regions. The plasma temperature is quite cold (Ti = Te = Tn = 20 K), containing mainly hydrogen with a mean particle mass 3×10−27 kg. The thermokinetic energy Wk >> πa2 pk(a). The curves, ΔWBz show different amounts of excess magnetic energy per unit length due to the axial magnetic field Bz. The plasma is assumed to be non-rotational, and the kinetic pressure at the edges is much smaller than inside.
Chart regions: (a) In the top-left region, the pinching force dominates. (b) Towards the bottom, outward kinetic pressures balance inwards magnetic pressure, and the total pressure is constant. (c) To the right of the vertical line ΔWBz = 0, the magnetic pressures balances the gravitational pressure, and the pinching force is negligible. (d) To the left of the sloping curve ΔWBz = 0, the gravitational force is negligible. Note that the chart shows a special case of the Carlqvist relation, and if it is replaced by the more general Bennett relation, then the designated regions of the chart are not valid.
Carlqvist further notes that by using the relations above, and a derivative, it is possible to describe the Bennett pinch, the Jeans criterion (for gravitational instability, in one and two dimensions), force-free magnetic fields, gravitationally balanced magnetic pressures, and continuous transitions between these states.
References in culture
A fictionalized pinch-generating device was used in Ocean's Eleven, where it was used to disrupt Las Vegas's power grid just long enough for the characters to begin their heist.
See also
Electromagnetic forming
Explosively pumped flux compression generator
Fusion power
Madison Symmetric Torus (reversed field pinch)
References
External links
Examples of electromagnetically shrunken coins and crushed cans
Theory of electromagnetic coin shrinking
The Known History of "Quarter Shrinking"
Can crushing info using electromagnetism among other things
The MAGPIE project at Imperial College London is used to study wire array Z-pinch implosions.
Fusion power
Plasma phenomena
Dutch inventions | Pinch (plasma physics) | [
"Physics",
"Chemistry"
] | 3,116 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Fusion power",
"Nuclear fusion"
] |
5,351,436 | https://en.wikipedia.org/wiki/Turbulence%20kinetic%20energy | In fluid dynamics, turbulence kinetic energy (TKE) is the mean kinetic energy per unit mass associated with eddies in turbulent flow. Physically, the turbulence kinetic energy is characterized by measured root-mean-square (RMS) velocity fluctuations. In the Reynolds-averaged Navier–Stokes equations, the turbulence kinetic energy can be calculated based on the closure method, i.e. a turbulence model.
The TKE can be defined as half the sum of the variances σ² (squares of the standard deviations σ) of the fluctuating velocity components:

k = ½(σu² + σv² + σw²),

where each turbulent velocity component is the difference between the instantaneous and the average velocity, u′ = u − ū (Reynolds decomposition). The mean of each fluctuation is zero by construction, and its variance σ² is the time average of the squared fluctuation.
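In practice the definition above is applied directly to sampled velocity data: subtract the mean from each component, average the squared fluctuations, and halve the sum. The following Python sketch illustrates this with synthetic stand-in time series; the array contents and variable names are illustrative assumptions, not data from the article.

```python
# Sketch: estimate turbulence kinetic energy per unit mass from sampled
# velocity components, using k = 1/2 (var(u) + var(v) + var(w)).
# The arrays below are synthetic stand-ins for measured velocity time series.
import numpy as np

rng = np.random.default_rng(0)
u = 10.0 + 0.8 * rng.standard_normal(10_000)   # streamwise samples, m/s
v = 0.0 + 0.5 * rng.standard_normal(10_000)    # spanwise samples, m/s
w = 0.0 + 0.5 * rng.standard_normal(10_000)    # wall-normal samples, m/s

# Reynolds decomposition: fluctuation = instantaneous - mean
u_p, v_p, w_p = u - u.mean(), v - v.mean(), w - w.mean()

tke = 0.5 * (np.mean(u_p**2) + np.mean(v_p**2) + np.mean(w_p**2))   # m^2/s^2
print(f"TKE ≈ {tke:.3f} m²/s²")
```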
TKE can be produced by fluid shear, friction or buoyancy, or through external forcing at low-frequency eddy scales (integral scale). Turbulence kinetic energy is then transferred down the turbulence energy cascade, and is dissipated by viscous forces at the Kolmogorov scale. This process of production, transport and dissipation can be expressed as

Dk/Dt + ∇·T′ = P − ε,

where:
where:
is the mean-flow material derivative of TKE;
is the turbulence transport of TKE;
is the production of TKE, and
is the TKE dissipation.
Assuming that molecular viscosity is constant, and making the Boussinesq approximation, the TKE equation is:
By examining these phenomena, the turbulence kinetic energy budget for a particular flow can be found.
Computational fluid dynamics
In computational fluid dynamics (CFD), it is impossible to numerically simulate turbulence without discretizing the flow-field as far as the Kolmogorov microscales, which is called direct numerical simulation (DNS). Because DNS simulations are exorbitantly expensive due to memory, computational and storage overheads, turbulence models are used to simulate the effects of turbulence. A variety of models are used, but generally TKE is a fundamental flow property which must be calculated in order for fluid turbulence to be modelled.
Reynolds-averaged Navier–Stokes equations
Reynolds-averaged Navier–Stokes (RANS) simulations use the Boussinesq eddy viscosity hypothesis to calculate the Reynolds stress that results from the averaging procedure:
where
The exact method of resolving TKE depends upon the turbulence model used; k–ε (k–epsilon) models assume isotropy of turbulence, whereby the normal stresses are equal:
This assumption makes modelling of the turbulence quantities (k and ε) simpler, but it is not accurate in scenarios where anisotropic behaviour of the turbulence stresses dominates. The implications of this for the production of turbulence also lead to over-prediction, since the production depends on the mean rate of strain rather than on the difference between the normal stresses (which are, by assumption, equal).
Reynolds-stress models (RSM) use a different method to close the Reynolds stresses, whereby the normal stresses are not assumed isotropic, so the issue with TKE production is avoided.
Initial conditions
Accurate prescription of TKE as an initial condition in CFD simulations is important to accurately predict flows, especially in high Reynolds-number simulations. A smooth duct example is given below.

k = (3/2)(U I)²,

where I is the initial turbulence intensity [%] given below, and U is the initial velocity magnitude. As an example for pipe flows, with the Reynolds number based on the pipe diameter:

I = 0.16 Re^(−1/8).

The dissipation rate can then be estimated from

ε = Cμ^(3/4) k^(3/2) / l.

Here l is the turbulence or eddy length scale, given below, and Cμ is a k–ε model parameter whose value is typically given as 0.09.
The turbulent length scale can be estimated as

l = 0.07 L,

with L a characteristic length. For internal flows this may take the value of the inlet duct (or pipe) width (or diameter) or the hydraulic diameter.
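Taken together, the estimates above give a complete recipe for initializing k and ε in a pipe-flow simulation. The sketch below strings them together in Python; the input values are illustrative, and the correlations are the common engineering estimates quoted above rather than exact results.

```python
# Sketch of the initial-condition estimates described above for a pipe flow:
#   I   = 0.16 * Re**(-1/8)             (turbulence intensity)
#   k   = 3/2 * (U * I)**2              (turbulence kinetic energy)
#   l   = 0.07 * L                      (length scale, L = hydraulic diameter)
#   eps = C_mu**(3/4) * k**(3/2) / l    (dissipation rate), with C_mu = 0.09
# Values are illustrative engineering estimates, not exact results.
C_MU = 0.09

def tke_initial_conditions(u_mean, reynolds, hydraulic_diameter):
    intensity = 0.16 * reynolds ** (-1.0 / 8.0)
    k = 1.5 * (u_mean * intensity) ** 2
    length_scale = 0.07 * hydraulic_diameter
    epsilon = C_MU ** 0.75 * k ** 1.5 / length_scale
    return intensity, k, length_scale, epsilon

# Example: water at 2 m/s in a 50 mm pipe, Re ≈ 1e5 (illustrative values)
print(tke_initial_conditions(u_mean=2.0, reynolds=1e5, hydraulic_diameter=0.05))
```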
References
Further reading
Turbulence kinetic energy at CFD Online.
Lacey, R. W. J.; Neary, V. S.; Liao, J. C.; Enders, E. C.; Tritico, H. M. (2012). "The IPOS framework: linking fish swimming performance in altered flows from laboratory experiments to rivers." River Res. Applic. 28 (4), pp. 429–443. doi:10.1002/rra.1584.
Wilcox, D. C. (2006). "Turbulence modeling for CFD". Third edition. DCW Industries, La Canada, USA. ISBN 978-1-928729-08-2.
Computational fluid dynamics
Turbulence
Energy (physics) | Turbulence kinetic energy | [
"Physics",
"Chemistry",
"Mathematics"
] | 900 | [
"Turbulence",
"Physical quantities",
"Computational fluid dynamics",
"Quantity",
"Computational physics",
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
5,353,216 | https://en.wikipedia.org/wiki/Differential%20variational%20inequality | In mathematics, a differential variational inequality (DVI) is a dynamical system that incorporates ordinary differential equations and variational inequalities or complementarity problems.
DVIs are useful for representing models involving both dynamics and inequality constraints. Examples of such problems include mechanical impact problems, electrical circuits with ideal diodes, Coulomb friction problems for contacting bodies, and dynamic economic and related problems such as dynamic traffic networks and networks of queues (where the constraints can be either upper limits on queue length or the requirement that the queue length cannot become negative). DVIs are related to a number of other concepts including differential inclusions, projected dynamical systems, evolutionary inequalities, and parabolic variational inequalities.
Differential variational inequalities were first formally introduced by Pang and Stewart, whose definition should not be confused with the differential variational inequality used in Aubin and Cellina (1984).
Differential variational inequalities have the form to find such that
for every and almost all t; K a closed convex set, where
Closely associated with DVIs are dynamic/differential complementarity problems: if K is a closed convex cone, then the variational inequality is equivalent to the complementarity problem:
Examples
Mechanical Contact
Consider a rigid ball falling from a height towards a table. Assume that the forces acting on the ball are gravitation and the contact force of the table preventing penetration. Then the differential equation describing the motion is

m ẍ(t) = −m g + N(t),

where x(t) is the gap between the ball and the table, m is the mass of the ball, N(t) is the contact force exerted by the table, and g is the gravitational acceleration. Note that both x(t) and N(t) are a priori unknown. While the ball and the table are separated, there is no contact force. There cannot be penetration (for a rigid ball and a rigid table), so x(t) ≥ 0 for all t. If x(t) > 0 then N(t) = 0. On the other hand, if x(t) = 0, then N(t) can take on any non-negative value. (We do not allow N(t) < 0, as this would correspond to some kind of adhesive.) This can be summarized by the complementarity relationship

0 ≤ x(t) ⟂ N(t) ≥ 0 for all t.

In the above formulation, we can set K = [0, ∞), so that its dual cone is also the set of non-negative real numbers; this is a differential complementarity problem.
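A simple way to see the complementarity condition at work is a time-stepping simulation in which, at each step, the contact force (here an impulse per unit mass) is chosen so that the gap and the force remain complementary. The Python sketch below uses this idea for the falling ball; it is an illustrative scheme without restitution, not a production contact solver, and the symbol names follow the discussion above.

```python
# Minimal time-stepping sketch of the ball/table complementarity problem above:
# at each step, advance velocity and position, then enforce 0 <= x ⟂ N >= 0 by
# projection: if the ball would penetrate the table, an impulse stops it.
# This is an illustrative scheme (no restitution), not a production contact solver.
def simulate_ball(x0=1.0, v0=0.0, g=9.81, dt=1e-3, steps=2000):
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        v_free = v - g * dt          # velocity ignoring contact
        x_free = x + v_free * dt     # position ignoring contact
        if x_free > 0.0:
            contact_impulse = 0.0    # gap open: N = 0
            x, v = x_free, v_free
        else:
            # gap would close: choose the impulse (>= 0) that keeps x = 0
            contact_impulse = -v_free          # per unit mass
            x, v = 0.0, 0.0
        trajectory.append((x, v, contact_impulse))
    return trajectory

print(simulate_ball()[-1])   # (x, v, impulse): ball ends resting on the table
```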
Ideal diodes in electrical circuits
An ideal diode is a diode that conducts electricity in the forward direction with no resistance if a forward voltage is applied, but allows no current to flow in the reverse direction. If the reverse voltage is v(t) ≥ 0 and the forward current is i(t) ≥ 0, then there is a complementarity relationship between the two:

0 ≤ v(t) ⟂ i(t) ≥ 0 for all t.
for all . If the diode is in a circuit containing a memory element, such as a capacitor or inductor, then the circuit can be represented as a differential variational inequality.
Index
The concept of the index of a DVI is important and determines many questions of existence and uniqueness of solutions to a DVI. This concept is closely related to the concept of index for differential algebraic equations (DAEs), which is the number of times the algebraic equations of a DAE must be differentiated in order to obtain a complete system of differential equations for all variables. It is also a notion close to the relative degree of control theory, which is, roughly speaking, the number of times an "output" variable has to be differentiated so that an "input" variable appears explicitly (in control theory this is used to derive a canonical state space form which involves the so-called "zero dynamics", a fundamental concept for control). For a DVI, the index is the number of differentiations of F(t, x, u) = 0 needed in order to locally uniquely identify u as a function of t and x.
This index can be computed for the above examples. For the mechanical impact example, if we differentiate the constraint once we obtain ẋ(t), which does not yet explicitly involve N(t). However, if we differentiate once more, we can use the differential equation to give ẍ(t) = −g + N(t)/m, which does explicitly involve N(t). Furthermore, if x(t) = 0, we can explicitly determine N(t) in terms of t and x(t).
For the ideal diode systems, the computations are considerably more difficult, but provided some generally valid conditions hold, the differential variational inequality can be shown to have index one.
Differential variational inequalities with index greater than two are generally not meaningful, but certain conditions and interpretations can make them meaningful (see the references Acary, Brogliato and Goeleven, and Heemels, Schumacher, and Weiland below). One crucial step is to first define a suitable space of solutions (Schwartz' distributions).
References
Pang and Stewart (2008) "Differential Variational Inequalities", Mathematical Programming, vol. 113, no. 2, Series A, 345–424.
Aubin and Cellina (1984) Differential Inclusions Springer-Verlag.
Acary and Brogliato and Goeleven (2006) "Higher order Moreau's sweeping process. Mathematical formulation and numerical formulation", Mathematical Programming A, 113, 133–217, 2008.
Avi Mandelbaum (1989) "Dynamic Complementarity Problems", unpublished manuscript.
Heemels, Schumacher, and Weiland (2000) "Linear complementarity systems", SIAM Journal on Applied Mathematics, vol. 60, no. 4, 1234–1269.
Dynamical systems | Differential variational inequality | [
"Physics",
"Mathematics"
] | 1,078 | [
"Mechanics",
"Dynamical systems"
] |
5,353,485 | https://en.wikipedia.org/wiki/Thallium%20azide | Thallium azide, TlN3, is a yellow-brown crystalline solid that is poorly soluble in water. Although it is not nearly as sensitive to shock or friction as lead azide, it can easily be detonated by a flame or spark. It can be stored safely when dry in a closed non-metallic container.
Preparation and structure
Thallium azide can be prepared by treating an aqueous solution of thallium(I) sulfate with sodium azide:

Tl2SO4 + 2 NaN3 → 2 TlN3 + Na2SO4

Thallium azide precipitates from the solution; the yield can be maximized by cooling.
TlN3, KN3, RbN3, and CsN3 adopt the same structures. The azide is bound to eight cations in an eclipsed orientation. The cations are bound to eight terminal N centers.
Safety
All thallium compounds are poisonous and should be handled with care. Azide salts are also roughly as toxic as their corresponding cyanide salts.
References
Thallium(I) compounds
Azides | Thallium azide | [
"Chemistry"
] | 188 | [
"Explosive chemicals",
"Azides"
] |
1,481,059 | https://en.wikipedia.org/wiki/Sesamex | Sesamex, also called sesoxane, is an organic compound used as an adjuvant for synergy; that is, it enhances the potency of pesticides such as pyrethrins and pyrethroids, but is itself not a pesticide.
Solubility
Sesamex is soluble in kerosene, freon 11, and freon 12.
References
Insecticides
Benzodioxoles
Phenol ethers | Sesamex | [
"Chemistry"
] | 92 | [] |
1,481,906 | https://en.wikipedia.org/wiki/14-3-3%20protein | 14-3-3 proteins are a family of conserved regulatory molecules that are expressed in all eukaryotic cells. 14-3-3 proteins have the ability to bind a multitude of functionally diverse signaling proteins, including kinases, phosphatases, and transmembrane receptors. More than 200 signaling proteins have been reported as 14-3-3 ligands.
Elevated amounts of 14-3-3 protein in cerebrospinal fluid are usually a sign of rapid neurodegeneration; a common indicator of Creutzfeldt–Jakob disease.
Properties
Seven genes encode seven distinct 14-3-3 proteins in most mammals (See Human genes below) and 13-15 genes in many higher plants, though typically in fungi they are present only in pairs. Protists have at least one. Eukaryotes can tolerate the loss of a single 14-3-3 gene if multiple genes are expressed, but deletion of all 14-3-3s (as experimentally determined in yeast) results in death.
14-3-3 proteins are structurally similar to the Tetratrico Peptide Repeat (TPR) superfamily, which generally have 9 or 10 alpha helices, and usually form homo- and/or hetero-dimer interactions along their amino-termini helices. These proteins contain a number of known common modification domains, including regions for divalent cation interaction, phosphorylation & acetylation, and proteolytic cleavage, among others established and predicted.
14-3-3 binds to peptides. There are common recognition motifs for 14-3-3 proteins that contain a phosphorylated serine or threonine residue, although binding to non-phosphorylated ligands has also been reported. This interaction occurs along a so-called binding groove or cleft that is amphipathic in nature. To date, the crystal structures of six classes of these proteins have been resolved and deposited in the public domain.
Discovery and naming
14-3-3 proteins were initially found in brain tissue in 1967 and purified using chromatography and gel electrophoresis. In bovine brain samples, 14-3-3 proteins were located in the 14th fraction eluting from a DEAE-cellulose column and in position 3.3 on a starch electrophoresis gel.
Function
14-3-3 proteins play an isoform-specific role in class switch recombination. They are believed to interact with the protein Activation-Induced (Cytidine) Deaminase in mediating class switch recombination.
Phosphorylation of Cdc25C by CDS1 and CHEK1 creates a binding site for the 14-3-3 family of phosphoserine binding proteins. Binding of 14-3-3 has little effect on Cdc25C activity, and it is believed that 14-3-3 regulates Cdc25C by sequestering it to the cytoplasm, thereby preventing the interactions with CycB-Cdk1 that are localized to the nucleus at the G2/M transition.
The eta (YWHAH) isoform is reported to be a biomarker (in synovial fluid) for rheumatoid arthritis. In a systematic review, 14-3-3η has been described as a welcome addition to the rheumatology field. The authors indicate that the serum based 14-3-3η marker is additive to the armamentarium of existing tools available to clinicians, and that there is adequate clinical evidence to support its clinical benefits in the management of patients diagnosed with rheumatoid arthritis (RA).
14-3-3 proteins bind to and sequester the transcriptional coregulators YAP/TAZ to the cytoplasm, inhibiting their function.
14-3-3 regulating cell-signalling
Raf-1
Bad – see Bcl-2
Bax
Cdc25
Akt
SOS1 – see RSK
Human genes
– "14-3-3 beta"
– "14-3-3 epsilon"
– "14-3-3 gamma"
– "14-3-3 eta"
– "14-3-3 theta"
– "14-3-3 zeta"
or – "14-3-3 sigma" (Stratifin)
The 14-3-3 proteins alpha and delta (YWHAA and YWHAD) are phosphorylated forms of YWHAB and YWHAZ, respectively.
In plants
The presence of large gene families of 14-3-3 proteins in the Viridiplantae kingdom reflects their essential role in plant physiology.
A phylogenetic analysis of 27 plant species clustered the 14-3-3 proteins into four groups.
14-3-3 proteins activate the auto-inhibited plasma membrane P-type H+ ATPases. They bind the ATPases' C-terminus at a conserved threonine.
References
Further reading
External links
Three-dimensional structure of 14-3-3 Protein Theta (Human) complexed with a peptide in the PDB.
Drosophila 14-3-3epsilon - The Interactive Fly
Drosophila 14-3-3zeta - The Interactive Fly
Programmed cell death
Protein families
14-3-3 proteins | 14-3-3 protein | [
"Chemistry",
"Biology"
] | 1,105 | [
"Protein classification",
"Signal transduction",
"Senescence",
"Protein families",
"Programmed cell death"
] |
1,482,061 | https://en.wikipedia.org/wiki/Bekenstein%20bound | In physics, the Bekenstein bound (named after Jacob Bekenstein) is an upper limit on the thermodynamic entropy S, or Shannon entropy H, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level. It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite.
Equations
The universal form of the bound was originally found by Jacob Bekenstein in 1981 as the inequality

S ≤ 2πkRE / (ħc),

where S is the entropy, k is the Boltzmann constant, R is the radius of a sphere that can enclose the given system, E is the total mass–energy including any rest masses, ħ is the reduced Planck constant, and c is the speed of light. Note that while gravity plays a significant role in its enforcement, the expression for the bound does not contain the gravitational constant G, and so, it ought to apply to quantum field theory in curved spacetime.
The Bekenstein–Hawking boundary entropy of three-dimensional black holes exactly saturates the bound. The Schwarzschild radius is given by

rs = 2GM / c²,

and so the two-dimensional area of the black hole's event horizon is

A = 4πrs² = 16πG²M² / c⁴,

and using the Planck length

ℓP² = ħG / c³,

the Bekenstein–Hawking entropy is

SBH = kA / (4ℓP²) = 4πkGM² / (ħc).
One interpretation of the bound makes use of the microcanonical formula for entropy,

S = k ln Ω,

where Ω is the number of energy eigenstates accessible to the system. This is equivalent to saying that the dimension of the Hilbert space describing the system is bounded by

dim H ≤ exp(2πRE / (ħc)).
The bound is closely associated with black hole thermodynamics, the holographic principle and the covariant entropy bound of quantum gravity, and can be derived from a conjectured strong form of the latter.
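For a rough sense of scale, the bound can be evaluated numerically and converted to an information content in bits using I = S/(k ln 2). The Python sketch below does this for an illustrative system of mass 1 kg enclosed in a sphere of radius 1 m; the chosen values are examples only.

```python
# Sketch: evaluate the Bekenstein bound S <= 2*pi*k*R*E/(hbar*c) and express it
# as a maximum information content in bits, I = S / (k * ln 2).
# Example values below are illustrative only.
import math

HBAR = 1.054571817e-34   # J*s
C    = 2.99792458e8      # m/s
K_B  = 1.380649e-23      # J/K

def bekenstein_bound_bits(radius_m, mass_kg):
    energy = mass_kg * C**2                                        # total mass-energy, J
    entropy = 2 * math.pi * K_B * radius_m * energy / (HBAR * C)   # J/K
    return entropy / (K_B * math.log(2))                           # bits

# A 1 kg system enclosed in a sphere of radius 1 m
print(f"{bekenstein_bound_bits(1.0, 1.0):.3e} bits")   # on the order of 1e43 bits
```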
Origins
Bekenstein derived the bound from heuristic arguments involving black holes. If a system exists that violates the bound, i.e., by having too much entropy, Bekenstein argued that it would be possible to violate the second law of thermodynamics by lowering it into a black hole. In 1995, Ted Jacobson demonstrated that the Einstein field equations (i.e., general relativity) can be derived by assuming that the Bekenstein bound and the laws of thermodynamics are true. However, while a number of arguments were devised which show that some form of the bound must exist in order for the laws of thermodynamics and general relativity to be mutually consistent, the precise formulation of the bound was a matter of debate until Casini's work in 2008.
The following is a heuristic derivation that shows S ≲ kRE/(ħc) up to an undetermined constant; showing that the constant is 2π requires a more technical analysis.

Suppose we have a black hole of mass M. Its Schwarzschild radius is rs ~ GM/c², and its Bekenstein–Hawking entropy is proportional to M², namely SBH ~ kGM²/(ħc).

Now take a box of energy E, entropy S, and side length L. If we throw the box into the black hole, the mass of the black hole goes up to M + E/c², and the entropy goes up by ΔSBH ~ kGME/(ħc³) ~ k rs E/(ħc). Since entropy does not decrease, S ≤ ΔSBH.

In order for the box to fit inside the black hole, L ≲ rs. If the two are comparable, L ~ rs, then we have derived the BH bound: S ≲ kLE/(ħc).
Proof in quantum field theory
A proof of the Bekenstein bound in the framework of quantum field theory was given in 2008 by Casini. One of the crucial insights of the proof was to find a proper interpretation of the quantities appearing on both sides of the bound.
Naive definitions of entropy and energy density in quantum field theory suffer from ultraviolet divergences. In the case of the Bekenstein bound, ultraviolet divergences can be avoided by taking differences between quantities computed in an excited state and the same quantities computed in the vacuum state. For example, given a spatial region V, Casini defines the entropy on the left-hand side of the Bekenstein bound as

ΔS = S(ρV) − S(ρ⁰V),

where S(ρV) is the von Neumann entropy of the reduced density matrix ρV associated with V in the excited state, and S(ρ⁰V) is the corresponding von Neumann entropy for the vacuum state.
On the right-hand side of the Bekenstein bound, a difficult point is to give a rigorous interpretation of the quantity 2πRE, where R is a characteristic length scale of the system and E is a characteristic energy. This product has the same units as the generator of a Lorentz boost, and the natural analog of a boost in this situation is the modular Hamiltonian K of the vacuum state. Casini defines the right-hand side of the Bekenstein bound as the difference between the expectation value of the modular Hamiltonian in the excited state and the vacuum state,

ΔK = tr(K ρV) − tr(K ρ⁰V).
With these definitions, the bound reads

ΔS ≤ ΔK,

which can be rearranged to give

ΔK − ΔS ≥ 0.

This is simply the statement of positivity of the quantum relative entropy S(ρV‖ρ⁰V) = ΔK − ΔS, which proves the Bekenstein bound.
However, the modular Hamiltonian can only be interpreted as a weighted form of energy for conformal field theories, and when V is a sphere.
This construction allows us to make sense of the Casimir effect where the localized energy density is lower than that of the vacuum, i.e. a negative localized energy. The localized entropy of the vacuum is nonzero, and so, the Casimir effect is possible for states with a lower localized entropy than that of the vacuum. Hawking radiation can be explained by dumping localized negative energy into a black hole.
See also
Margolus–Levitin theorem
Landauer's principle
Bremermann's limit
Kolmogorov complexity
Beyond black holes
Digital physics
Limits of computation
Chandrasekhar limit
References
External links
Jacob D. Bekenstein, "Bekenstein-Hawking entropy", Scholarpedia, Vol. 3, No. 10 (2008), p. 7375, .
Jacob D. Bekenstein's website at the Racah Institute of Physics, Hebrew University of Jerusalem, which contains a number of articles on the Bekenstein bound.
Limits of computation
Thermodynamic entropy
Quantum information science
Black holes | Bekenstein bound | [
"Physics",
"Astronomy"
] | 1,237 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Statistical mechanics",
"Unsolved problems in physics",
"Astrophysics",
"Thermodynamic entropy",
"Entropy",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Limits of computation"
] |
1,482,218 | https://en.wikipedia.org/wiki/Total%20relation | In mathematics, a binary relation R ⊆ X×Y between two sets X and Y is total (or left total) if the source set X equals the domain {x : there is a y with xRy }. Conversely, R is called right total if Y equals the range {y : there is an x with xRy }.
When f: X → Y is a function, the domain of f is all of X, hence f is a total relation. On the other hand, if f is a partial function, then the domain may be a proper subset of X, in which case f is not a total relation.
"A binary relation is said to be total with respect to a universe of discourse just in case everything in that universe of discourse stands in that relation to something else."
Algebraic characterization
Total relations can be characterized algebraically by equalities and inequalities involving compositions of relations. To this end, let be two sets, and let For any two sets let be the universal relation between and and let be the identity relation on We use the notation for the converse relation of
is total iff for any set and any implies
is total iff
If is total, then The converse is true if
If is total, then The converse is true if
If is total, then The converse is true if
More generally, if is total, then for any set and any The converse is true if
See also
Serial relation — a total homogeneous relation
Notes
References
Gunther Schmidt & Michael Winter (2018) Relational Topology
C. Brink, W. Kahl, and G. Schmidt (1997) Relational Methods in Computer Science, Advances in Computer Science, page 5,
Gunther Schmidt & Thomas Strohlein (2012)[1987]
Gunther Schmidt (2011)
Properties of binary relations | Total relation | [
"Mathematics"
] | 359 | [
"Properties of binary relations",
"Mathematical relations",
"Binary relations"
] |
1,482,326 | https://en.wikipedia.org/wiki/Term%20symbol | In atomic physics, a term symbol is an abbreviated description of the total spin and orbital angular momentum quantum numbers of the electrons in a multi-electron atom. So while the word symbol suggests otherwise, it represents an actual value of a physical quantity.
For a given electron configuration of an atom, its state depends also on its total angular momentum, including spin and orbital components, which are specified by the term symbol. The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling) in which the all-electron total quantum numbers for orbital (L), spin (S) and total (J) angular momenta are good quantum numbers.
In the terminology of atomic spectroscopy, L and S together specify a term; L, S, and J specify a level; and L, S, J and the magnetic quantum number MJ specify a state. The conventional term symbol has the form 2S+1LJ, where J is written optionally in order to specify a level. L is written using spectroscopic notation: for example, it is written "S", "P", "D", or "F" to represent L = 0, 1, 2, or 3 respectively. For coupling schemes other than LS coupling, such as the jj coupling that applies to some heavy elements, other notations are used to specify the term.
Term symbols apply to both neutral and charged atoms, and to their ground and excited states. Term symbols usually specify the total for all electrons in an atom, but are sometimes used to describe electrons in a given subshell or set of subshells, for example to describe each open subshell in an atom having more than one. The ground state term symbol for neutral atoms is described, in most cases, by Hund's rules. Neutral atoms of the chemical elements have the same term symbol for each column in the s-block and p-block elements, but differ in d-block and f-block elements where the ground-state electron configuration changes within a column, where exceptions to Hund's rules occur. Ground state term symbols for the chemical elements are given below.
Term symbols are also used to describe angular momentum quantum numbers for atomic nuclei and for molecules. For molecular term symbols, Greek letters are used to designate the component of orbital angular momenta along the molecular axis.
The use of the word term for an atom's electronic state is based on the Rydberg–Ritz combination principle, an empirical observation that the wavenumbers of spectral lines can be expressed as the difference of two terms. This was later summarized by the Bohr model, which identified the terms with quantized energy levels, and the spectral wavenumbers of these levels with photon energies.
Tables of atomic energy levels identified by their term symbols are available for atoms and ions in ground and excited states from the National Institute of Standards and Technology (NIST).
Term symbols with LS coupling
The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling), in which the atom's total spin quantum number S and the total orbital angular momentum quantum number L are "good quantum numbers". (Russell–Saunders coupling is named after Henry Norris Russell and Frederick Albert Saunders, who described it in 1925). The spin-orbit interaction then couples the total spin and orbital moments to give the total electronic angular momentum quantum number J. Atomic states are then well described by term symbols of the form:
where
S is the total spin quantum number for the atom's electrons. The value 2S + 1 written in the term symbol is the spin multiplicity, which is the number of possible values of the spin magnetic quantum number MS for a given spin S.
J is the total angular momentum quantum number for the atom's electrons. J has a value in the range from |L − S| to L + S.
L is the total orbital quantum number in spectroscopic notation, in which the symbols for L = 0, 1, 2, 3, 4, 5, 6, ... are S, P, D, F, G, H, I, ... respectively:
The orbital symbols S, P, D and F are derived from the characteristics of the spectroscopic lines corresponding to s, p, d, and f orbitals: sharp, principal, diffuse, and fundamental; the rest are named in alphabetical order from G onwards (omitting J, S and P). When used to describe electronic states of an atom, the term symbol is often written following the electron configuration. For example, 1s22s22p2 3P0 represents the ground state of a neutral carbon atom. The superscript 3 indicates that the spin multiplicity 2S + 1 is 3 (it is a triplet state), so S = 1; the letter "P" is spectroscopic notation for L = 1; and the subscript 0 is the value of J (in this case J = L − S).
Small letters refer to individual orbitals or one-electron quantum numbers, whereas capital letters refer to many-electron states or their quantum numbers.
Terminology: terms, levels, and states
For a given electron configuration,
The combination of an S value and an L value is called a term, and has a statistical weight (i.e., number of possible states) equal to (2S + 1)(2L + 1);
A combination of S, L and J is called a level. A given level has a statistical weight of 2J + 1, which is the number of possible states associated with this level in the corresponding term;
A combination of S, L, J and MJ determines a single state.
The product as a number of possible states with given S and L is also a number of basis states in the uncoupled representation, where , , , ( and are z-axis components of total spin and total orbital angular momentum respectively) are good quantum numbers whose corresponding operators mutually commute. With given and , the eigenstates in this representation span function space of dimension , as and . In the coupled representation where total angular momentum (spin + orbital) is treated, the associated states (or eigenstates) are and these states span the function space with dimension of
as . Obviously, the dimension of function space in both representations must be the same.
As an example, for S = 1 and L = 2, there are (2S + 1)(2L + 1) = 15 different states (= eigenstates in the uncoupled representation) corresponding to the 3D term, of which 2J + 1 = 7 belong to the 3D3 (J = 3) level. The sum of 2J + 1 for all levels in the same term equals (2S + 1)(2L + 1), as the dimensions of both representations must be equal as described above. In this case, J can be 1, 2, or 3, so 3 + 5 + 7 = 15.
Term symbol parity
The parity of a term symbol is calculated as

P = (−1)^(Σi ℓi),

where ℓi is the orbital quantum number for each electron. P = +1 means even parity while P = −1 is for odd parity. In fact, only electrons in odd orbitals (with ℓ odd) contribute to the total parity: an odd number of electrons in odd orbitals (those with an odd ℓ such as in p, f, ...) corresponds to an odd term symbol, while an even number of electrons in odd orbitals corresponds to an even term symbol. The number of electrons in even orbitals is irrelevant as any sum of even numbers is even. For any closed subshell, the number of electrons is 2(2ℓ + 1), which is even, so the summation of ℓi in closed subshells is always an even number. The summation of quantum numbers ℓi over open (unfilled) subshells of odd orbitals (ℓ odd) determines the parity of the term symbol. If the number of electrons in this reduced summation is odd (even) then the parity is also odd (even).
When it is odd, the parity of the term symbol is indicated by a superscript letter "o", otherwise it is omitted:
Alternatively, parity may be indicated with a subscript letter "g" or "u", standing for gerade (German for "even") or ungerade ("odd"):
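The parity rule above is easy to mechanize: only the odd-ℓ subshells matter, and one simply sums ℓ over all electrons. A small Python sketch, with illustrative configurations, is shown below.

```python
# Sketch: parity of a configuration, P = (-1)**(sum of l over all electrons).
# Only electrons in odd-l (p, f, ...) subshells can change the result.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def parity(config):
    """config: list of (subshell_letter, electron_count), e.g. [("p", 5)]."""
    total_l = sum(L_OF[sub] * n for sub, n in config)
    return "even" if total_l % 2 == 0 else "odd"

print(parity([("p", 5)]))             # fluorine 2p5 -> odd
print(parity([("d", 7), ("p", 1)]))   # illustrative mixed configuration -> odd
```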
Ground state term symbol
It is relatively easy to predict the term symbol for the ground state of an atom using Hund's rules. It corresponds to a state with maximum S and L.
Start with the most stable electron configuration. Full shells and subshells do not contribute to the overall angular momentum, so they are discarded.
If all shells and subshells are full then the term symbol is 1S0.
Distribute the electrons in the available orbitals, following the Pauli exclusion principle.
Conventionally, put 1 electron into the orbital with the highest mℓ and then continue filling the other orbitals in descending order of mℓ with one electron each, until you are out of electrons, or all orbitals in the subshell have one electron. Assign, again conventionally, all these electrons a value +½ of the spin magnetic quantum number ms.
If there are remaining electrons, put them in orbitals in the same order as before, but now assigning ms = −½ to them.
The overall S is calculated by adding the ms values for each electron. The overall S is then ½ times the number of unpaired electrons.
The overall L is calculated by adding the mℓ values for each electron (so if there are two electrons in the same orbital, add twice that orbital's mℓ).
Calculate J as
if less than half of the subshell is occupied, take the minimum value J = |L − S|;
if more than half-filled, take the maximum value J = L + S;
if the subshell is half-filled, then L will be 0, so J = S.
As an example, in the case of fluorine, the electronic configuration is 1s22s22p5.
Discard the full subshells and keep the 2p5 part. So there are five electrons to place in subshell p (ℓ = 1).
There are three orbitals (mℓ = 1, 0, −1) that can hold up to 2(2ℓ + 1) = 6 electrons. The first three electrons can take ms = +½ but the Pauli exclusion principle forces the next two to have ms = −½ because they go to already occupied orbitals.
S = ½ + ½ + ½ − ½ − ½ = ½; L = 1 + 0 − 1 + 1 + 0 = 1, which is "P" in spectroscopic notation.
As the fluorine 2p subshell is more than half filled, J = L + S = 3/2. Its ground state term symbol is then 2S+1LJ = 2P3/2.
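The filling procedure above can be carried out mechanically for any single open subshell. The Python sketch below is one illustrative implementation of Hund's-rules filling (the function name and structure are chosen for illustration, not taken from any standard library); it reproduces the fluorine result above and, for example, the 4F3/2 ground term of a d3 configuration.

```python
# Illustrative implementation of the single-subshell filling procedure above.
from fractions import Fraction

L_LETTERS = "SPDFGHIKLMNOQ"   # spectroscopic letters for L = 0, 1, 2, ...

def ground_term(l, n_electrons):
    """Ground-state term symbol of n equivalent electrons in a subshell
    of orbital quantum number l, following Hund's rules."""
    capacity = 2 * (2 * l + 1)
    assert 0 < n_electrons <= capacity
    ml_values = list(range(l, -l - 1, -1))        # l, l-1, ..., -l
    spins, mls = [], []
    for i in range(n_electrons):                  # spin-up first, then spin-down
        spins.append(Fraction(1, 2) if i < len(ml_values) else Fraction(-1, 2))
        mls.append(ml_values[i % len(ml_values)])
    S = sum(spins)
    L = abs(sum(mls))
    if n_electrons < capacity // 2:
        J = abs(L - S)                            # less than half-filled
    elif n_electrons == capacity // 2:
        J = S                                     # half-filled: L = 0
    else:
        J = L + S                                 # more than half-filled
    return f"{int(2 * S + 1)}{L_LETTERS[L]}{J}"

print(ground_term(1, 5))   # fluorine 2p5 -> 2P3/2
print(ground_term(2, 3))   # a d3 ion     -> 4F3/2
```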
Atomic term symbols of the chemical elements
In the periodic table, because atoms of elements in a column usually have the same outer electron structure, and always have the same electron structure in the "s-block" and "p-block" elements (see block (periodic table)), all elements may share the same ground state term symbol for the column. Thus, hydrogen and the alkali metals are all 2S, the alkaline earth metals are 1S0, the boron column elements are 2P, the carbon column elements are 3P0, the pnictogens are 4S, the chalcogens are 3P2, the halogens are 2P, and the inert gases are 1S0, per the rule for full shells and subshells stated above.
Term symbols for the ground states of most chemical elements are given in the collapsed table below. In the d-block and f-block, the term symbols are not always the same for elements in the same column of the periodic table, because open shells of several d or f electrons have several closely spaced terms whose energy ordering is often perturbed by the addition of an extra complete shell to form the next element in the column.
For example, the table shows that the first pair of vertically adjacent atoms with different ground-state term symbols are V and Nb. The 6D ground state of Nb corresponds to an excited state of V 2112 cm−1 above the 4F ground state of V, which in turn corresponds to an excited state of Nb 1143 cm−1 above the Nb ground state. These energy differences are small compared to the 15158 cm−1 difference between the ground and first excited state of Ca, which is the last element before V with no d electrons.
Term symbols for an electron configuration
The process to calculate all possible term symbols for a given electron configuration is somewhat longer.
First, the total number of possible states is calculated for a given electron configuration. As before, the filled (sub)shells are discarded, and only the partially filled ones are kept. For a given orbital quantum number ℓ, the maximum allowed number of electrons is 2(2ℓ + 1). If there are k electrons in a given subshell, the number of possible states is the binomial coefficient

C(2(2ℓ + 1), k).

As an example, consider the carbon electron structure: 1s22s22p2. After removing full subshells, there are 2 electrons in a p-level (ℓ = 1), so there are

C(6, 2) = 15

different states.
Second, all possible states are drawn. ML and MS for each state are calculated with M = Σi mi, where mi is either mℓi or msi for the i-th electron, and M represents the resulting ML or MS respectively:
Third, the number of states for each (ML,MS) possible combination is counted:
Fourth, smaller tables can be extracted representing each possible term. Each table will have the size (2L + 1) by (2S + 1), and will contain only "1"s as entries. The first table extracted corresponds to ML ranging from −2 to +2 (so L = 2), with a single value for MS (implying S = 0). This corresponds to a 1D term. The remaining terms fit inside the middle 3×3 portion of the table above. Then a second table can be extracted, removing the entries for ML and MS both ranging from −1 to +1 (and so L = S = 1, a 3P term). The remaining table is a 1×1 table, with L = S = 0, i.e., a 1S term.
Fifth, applying Hund's rules, the ground state can be identified (or the lowest state for the configuration of interest). Hund's rules should not be used to predict the order of states other than the lowest for a given configuration. (See the examples in the Ground state term symbol section above.)
If only two equivalent electrons are involved, there is an "Even Rule" which states that, for two equivalent electrons, the only states that are allowed are those for which the sum (L + S) is even.
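The microstate-counting procedure just described lends itself to a short program: enumerate all Pauli-allowed assignments of electrons to spin-orbitals, tabulate (ML, MS), and repeatedly strip off the term with the largest remaining L (and the largest S at that L). The Python sketch below is an illustrative implementation for a single subshell of equivalent electrons; function and variable names are chosen for illustration.

```python
# Illustrative microstate enumeration for n equivalent electrons in a subshell
# with orbital quantum number l: build the (ML, MS) table and peel off terms,
# starting from the largest remaining ML (and the largest MS at that ML).
from itertools import combinations
from collections import Counter
from fractions import Fraction

L_LETTERS = "SPDFGHIKLMNOQ"

def subshell_terms(l, n_electrons):
    spin_orbitals = [(ml, ms) for ml in range(-l, l + 1)
                              for ms in (Fraction(1, 2), Fraction(-1, 2))]
    micro = Counter()
    for combo in combinations(spin_orbitals, n_electrons):   # Pauli principle
        ML = sum(ml for ml, _ in combo)
        MS = sum(ms for _, ms in combo)
        micro[(ML, MS)] += 1

    terms = []
    while any(count > 0 for count in micro.values()):
        remaining = [key for key, count in micro.items() if count > 0]
        L = max(ML for ML, _ in remaining)
        S = max(MS for ML, MS in remaining if ML == L)
        terms.append(f"{int(2 * S + 1)}{L_LETTERS[L]}")
        # remove one microstate for every (ML, MS) combination of this term
        for ML in range(-L, L + 1):
            for i in range(int(2 * S) + 1):
                micro[(ML, -S + i)] -= 1
    return terms

print(subshell_terms(1, 2))   # p2 -> ['1D', '3P', '1S']
print(subshell_terms(2, 2))   # d2 -> ['1G', '3F', '1D', '3P', '1S']
```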
Case of three equivalent electrons
Alternative method using group theory
For configurations with at most two electrons (or holes) per subshell, an alternative and much quicker method of arriving at the same result can be obtained from group theory. The configuration 2p2 has the symmetry of the following direct product in the full rotation group:
which, using the familiar labels , and , can be written as
The square brackets enclose the anti-symmetric square. Hence the 2p2 configuration has components with the following symmetries:
The Pauli principle and the requirement for electrons to be described by anti-symmetric wavefunctions imply that only the following combinations of spatial and spin symmetry are allowed:
Then one can move to step five in the procedure above, applying Hund's rules.
The group theory method can be carried out for other such configurations, like 3d2, using the general formula
The symmetric square will give rise to singlets (such as 1S, 1D, & 1G), while the anti-symmetric square gives rise to triplets (such as 3P & 3F).
More generally, one can use
where, since the product is not a square, it is not split into symmetric and anti-symmetric parts. Where two electrons come from inequivalent orbitals, both a singlet and a triplet are allowed in each case.
Summary of various coupling schemes and corresponding term symbols
Basic concepts for all coupling schemes:
: individual orbital angular momentum vector for an electron, : individual spin vector for an electron, : individual total angular momentum vector for an electron, .
: Total orbital angular momentum vector for all electrons in an atom ().
: total spin vector for all electrons ().
: total angular momentum vector for all electrons. The way the angular momenta are combined to form depends on the coupling scheme: for LS coupling, for jj coupling, etc.
A quantum number corresponding to the magnitude of a vector is a letter without an arrow, or without boldface (example: ℓ is the orbital angular momentum quantum number for and )
The parameter called multiplicity represents the number of possible values of the total angular momentum quantum number J for certain conditions.
For a single electron, the term symbol is not written as S is always 1/2, and L is obvious from the orbital type.
For two electron groups A and B with their own terms, each term may represent S, L and J which are quantum numbers corresponding to the , and vectors for each group. "Coupling" of terms A and B to form a new term C means finding quantum numbers for new vectors , and . This example is for LS coupling and which vectors are summed in a coupling is depending on which scheme of coupling is taken. Of course, the angular momentum addition rule is that where X can be s, ℓ, j, S, L, J or any other angular momentum-magnitude-related quantum number.
LS coupling (Russell–Saunders coupling)
Coupling scheme: and are calculated first then is obtained. From a practical point of view, it means L, S and J are obtained by using an addition rule of the angular momenta of given electron groups that are to be coupled.
Electronic configuration + Term symbol: . is a term which is from coupling of electrons in group. are principal quantum number, orbital quantum number and means there are N (equivalent) electrons in subshell. For , is equal to multiplicity, a number of possible values in J (final total angular momentum quantum number) from given S and L. For , multiplicity is but is still written in the term symbol. Strictly speaking, is called level and is called term. Sometimes right superscript o is attached to the term symbol, meaning the parity of the group is odd ().
Example:
3d7 4F7/2: 4F7/2 is level of 3d7 group in which are equivalent 7 electrons are in 3d subshell.
3d7(4F)4s4p(3P0) 6F: Terms are assigned for each group (with different principal quantum number n) and rightmost level 6F is from coupling of terms of these groups so 6F represents final total spin quantum number S, total orbital angular momentum quantum number L and total angular momentum quantum number J in this atomic energy level. The symbols 4F and 3Po refer to seven and two electrons respectively so capital letters are used.
4f7(8S0)5d (7Do)6p 8F13/2: There is a space between 5d and (7Do). It means (8S0) and 5d are coupled to get (7Do). Final level 8F is from coupling of (7Do) and 6p.
4f(2F0) 5d2(1G) 6s(2G) 1P: There is only one term 2Fo which is isolated in the left of the leftmost space. It means (2Fo) is coupled lastly; (1G) and 6s are coupled to get (2G) then (2G) and (2Fo) are coupled to get final term 1P.
jj Coupling
Coupling scheme: .
Electronic configuration + Term symbol:
Example:
: There are two groups. One is and the other is . In , there are 2 electrons having in 6p subshell while there is an electron having in the same subshell in . Coupling of these two groups results in (coupling of j of three electrons).
: in () is for 1st group and in () is J2 for 2nd group . Subscript 11/2 of term symbol is final J of .
J1L2 coupling
Coupling scheme: and .
Electronic configuration + Term symbol: . For is equal to multiplicity, a number of possible values in J (final total angular momentum quantum number) from given S2 and K. For , multiplicity is but is still written in the term symbol.
Example:
3p5(2P)5g 2[9/2]: . is K, which comes from coupling of J1 and ℓ2. Subscript 5 in term symbol is J which is from coupling of K and s2.
4f13(2F)5d2(1D) [7/2]: . is K, which comes from coupling of J1 and L2. Subscript in the term symbol is J which is from coupling of K and S2.
LS1 coupling
Coupling scheme:, .
Electronic configuration + Term symbol: . For is equal to multiplicity, a number of possible values in J (final total angular momentum quantum number) from given S2 and K. For , multiplicity is but is still written in the term symbol.
Example:
3d7(4P)4s4p(3Po) Do 3[5/2]: . .
Most famous coupling schemes are introduced here but these schemes can be mixed to express the energy state of an atom. This summary is based on .
Racah notation and Paschen notation
These are notations for describing states of singly excited atoms, especially noble gas atoms. Racah notation is basically a combination of LS or Russell–Saunders coupling and J1L2 coupling. LS coupling is for a parent ion and J1L2 coupling is for a coupling of the parent ion and the excited electron. The parent ion is an unexcited part of the atom. For example, in Ar atom excited from a ground state ...3p6 to an excited state ...3p54p in electronic configuration, 3p5 is for the parent ion while 4p is for the excited electron.
In Racah notation, states of excited atoms are denoted as . Quantities with a subscript 1 are for the parent ion, and are principal and orbital quantum numbers for the excited electron, K and J are quantum numbers for and where and are orbital angular momentum and spin for the excited electron respectively. “o” represents a parity of excited atom. For an inert (noble) gas atom, usual excited states are where N = 2, 3, 4, 5, 6 for Ne, Ar, Kr, Xe, Rn, respectively in order. Since the parent ion can only be 2P1/2 or 2P3/2, the notation can be shortened to or , where means the parent ion is in 2P3/2 while is for the parent ion in 2P1/2 state.
Paschen notation is a somewhat odd notation; it is an old notation made to attempt to fit an emission spectrum of neon to a hydrogen-like theory. It has a rather simple structure to indicate energy levels of an excited atom. The energy levels are denoted as . is just an orbital quantum number of the excited electron. is written in a way that 1s for , 2p for , 2s for , 3p for , 3s for , etc. Rules of writing from the lowest electronic configuration of the excited electron are: (1) is written first, (2) is consecutively written from 1 and the relation of (like a relation between and ) is kept. is an attempt to describe electronic configuration of the excited electron in a way of describing electronic configuration of hydrogen atom. # is an additional number denoted to each energy level of given (there can be multiple energy levels of given electronic configuration, denoted by the term symbol). # denotes each level in order, for example, # = 10 is for a lower energy level than # = 9 level and # = 1 is for the highest level in a given . An example of Paschen notation is below.
See also
Quantum number
Principal quantum number
Azimuthal quantum number
Spin quantum number
Magnetic quantum number
Angular quantum numbers
Angular momentum coupling
Molecular term symbol
Notes
References
Atomic physics
Theoretical chemistry
Quantum chemistry | Term symbol | [
"Physics",
"Chemistry"
] | 4,878 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic physics",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
1,482,581 | https://en.wikipedia.org/wiki/Eudialyte | Eudialyte, whose name derives from the Greek phrase εὖ διάλυτος (eu dialytos), meaning "well decomposable", is a somewhat rare, nine-member-ring cyclosilicate mineral, which forms in alkaline igneous rocks, such as nepheline syenites. Its name alludes to its ready solubility in acid.
Eudialyte was first described in 1819 for an occurrence in nepheline syenite of the Ilimaussaq intrusive complex of southwest Greenland.
Uses
Eudialyte is used as a minor ore of zirconium. Another use of eudialyte is as a minor gemstone, but this use is limited by its rarity, which is compounded by its poor crystal habit. These factors make eudialyte of primary interest as a collector's mineral. Eudialyte typically has a significant content of U, Pb, Nb, Ta, Zr, Hf, and rare earth elements (REE). Because of this, geoscientists use eudialyte as a geochronometer to date and investigate the genesis of the host rocks.
Associated minerals
Eudialyte is found associated with other alkalic igneous minerals, in addition to some minerals common to most igneous material in general.
Associate minerals include: microcline, nepheline, aegirine, lamprophyllite, lorenzenite, catapleiite, murmanite, arfvedsonite, sodalite, aenigmatite, rinkite, låvenite, titanite and titanian magnetite.
Alternative names
Alternative names of eudialyte include: almandine spar, eudalite, Saami blood. Eucolite is the name of an optically negative variety, more accurately the group member: ferrokentbrooksite.
Notes for identification
Eudialyte's rarity makes locality useful in its identification. Prominent localities of eudialyte include Mont Saint-Hilaire in Canada, Kola Peninsula in Russia and Poços de Caldas in Brazil, but it is also found in Greenland, Norway, and Arkansas. The lack of crystal habit, associated with color, is also useful for identification, as are associated minerals. A pink-red mineral with no good crystals associated with other alkaline igneous material, especially nepheline and aegirine, is a good indication a specimen is eudialyte. Iron (Fe2+) provides the color.
Eudialyte group
Microchemical (by electron microprobe) and structural analyses of different eudialyte (and related) samples have revealed the presence of many new eudialyte-like minerals. These minerals are structurally and chemically related and joined into the eudialyte group. The group includes Zr−, OH−, Cl−, F−, CO3− and possibly also SO4-bearing silicates of Na, K, H3O, Ca, Sr, REEs, Mn, Fe, Nb and W. Electron vacancies can be present in their structure, too.
References
Mineral Galleries
Further reading
Gemstones
Radioactive gemstones
Sodium minerals
Iron minerals
Zirconium minerals
Manganese minerals
Cyclosilicates
Trigonal minerals
Minerals in space group 166 | Eudialyte | [
"Physics"
] | 670 | [
"Materials",
"Gemstones",
"Matter"
] |
1,483,291 | https://en.wikipedia.org/wiki/Work%20%28electric%20field%29 | Electric field work is the work performed by an electric field on a charged particle in its vicinity. The particle located experiences an interaction with the electric field. The work per unit of charge is defined by moving a negligible test charge between two points, and is expressed as the difference in electric potential at those points. The work can be done, for example, by electrochemical devices (electrochemical cells) or different metals junctions generating an electromotive force.
Electric field work is formally equivalent to work by other force fields in physics, and the formalism for electrical work is identical to that of mechanical work.
Physical process
Particles that are free to move, if positively charged, normally tend towards regions of lower electric potential (net negative charge), while negatively charged particles tend to shift towards regions of higher potential (net positive charge).
Any movement of a positive charge into a region of higher potential requires external work to be done against the electric field, which is equal to the work that the electric field would do in moving that positive charge the same distance in the opposite direction. Similarly, it requires positive external work to transfer a negatively charged particle from a region of higher potential to a region of lower potential.
Kirchhoff's voltage law, one of the most fundamental laws governing electrical and electronic circuits, tells us that the voltage gains and the drops in any electrical circuit always sum to zero.
The formalism for electric work has an equivalent format to that of mechanical work. The work per unit of charge, when moving a negligible test charge between two points, is defined as the voltage between those points:

W = Q ∫ E · dr = ∫ FE · dr,

where
Q is the electric charge of the particle
E is the electric field, which at a location is the force at that location divided by a unit ('test') charge
FE is the Coulomb (electric) force
r is the displacement
· is the dot product operator
Mathematical description
Given a charged object in empty space, Q+. To move q+ closer to Q+ (starting from infinity, where the potential energy = 0, for convenience), we would have to apply an external force against the Coulomb field and positive work would be performed. Mathematically, using the definition of a conservative force, we know that we can relate this force to a potential energy gradient as

F = −∇U(r),

where U(r) is the potential energy of q+ at a distance r from the source Q. So, integrating and using Coulomb's law for the force:

U(r) = qQ / (4πε0 r).

Now, use the relationship Wext = ΔU = U(r) − U(∞)
to show that the external work done to move a point charge q+ from infinity to a distance r is:

Wext = qQ / (4πε0 r).
This could have been obtained equally by using the definition of W and integrating F with respect to r, which will prove the above relationship.
In the example both charges are positive; this equation is applicable to any charge configuration (as the product of the charges will be either positive or negative according to their (dis)similarity).
If one of the charges were to be negative in the earlier example, the work taken to wrench that charge away to infinity would be exactly the same as the work needed in the earlier example to push that charge back to that same position.
This is easy to see mathematically, as reversing the boundaries of integration reverses the sign.
Uniform electric field
Where the electric field is constant (i.e. not a function of the displacement, r), the work equation simplifies to

W = Q E · d = QEd cos θ,

or 'force times distance' (times the cosine of the angle θ between the field and the displacement).
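As a quick numerical illustration of the uniform-field case, the sketch below evaluates W = qE·d for an assumed field and displacement; the specific numbers are examples only.

```python
# Sketch: work done by a uniform electric field on a charge moved along a
# straight displacement, W = q * (E · d), i.e. q*E*d*cos(theta).
import numpy as np

def field_work(charge, e_field, displacement):
    return charge * np.dot(np.asarray(e_field), np.asarray(displacement))

# Illustrative values
q = 1.602e-19                      # charge of a proton, C
E = [0.0, 0.0, 5.0e4]              # uniform field, V/m
d = [0.0, 0.02, 0.01]              # displacement, m

print(f"W = {field_work(q, E, d):.3e} J")   # only the component along E matters
```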
Electric power
The electric power is the rate of energy transferred in an electric circuit. As a partial derivative, it is expressed as the change of work over time:

P = ∂W/∂t,

where V is the voltage. Work is defined by

W = QV,

and therefore

P = ∂W/∂t = V ∂Q/∂t = VI.
References
Electromagnetism | Work (electric field) | [
"Physics"
] | 760 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
1,483,407 | https://en.wikipedia.org/wiki/Host%20Identity%20Protocol | The Host Identity Protocol (HIP) is a host identification technology for use on Internet Protocol (IP) networks, such as the Internet. The Internet has two main name spaces, IP addresses and the Domain Name System. HIP separates the end-point identifier and locator roles of IP addresses. It introduces a Host Identity (HI) name space, based on a public key security infrastructure.
The Host Identity Protocol provides secure methods for IP multihoming and mobile computing.
In networks that implement the Host Identity Protocol, all occurrences of IP addresses in applications are eliminated and replaced with cryptographic host identifiers. The cryptographic keys are typically, but not necessarily, self-generated.
The effect of eliminating IP addresses in application and transport layers is a decoupling of the transport layer from the internetworking layer (Internet Layer) in TCP/IP.
HIP was specified in the IETF HIP working group. An Internet Research Task Force (IRTF) HIP research group looks at the broader impacts of HIP.
The working group is chartered to produce Requests for Comments on the "Experimental" track, but it is understood that their quality and security properties should match the standards track requirements. The main purpose for producing Experimental documents instead of standards track ones are the unknown effects that the mechanisms may have on applications and on the Internet in the large.
Version 2
Host Identity Protocol version 2 (HIPv2) is an update to the protocol that enhances security and support for mobile environments. HIP continues to separate the roles of identification and location in IP addressing by implementing a host identity namespace based on cryptography. This version introduces new features that allow devices to connect more securely and efficiently, even in scenarios involving mobility and multihoming (connecting to multiple networks).
Enhanced security
HIPv2 strengthens device authentication security and provides protection against spoofing and denial-of-service (DoS) attacks. Host Identifiers (HIs) are generated with cryptographic keys, giving each device a unique identity. The protocol also uses the Encapsulating Security Payload (ESP) format for encrypting data, which ensures the integrity and confidentiality of communications.
Mobility and multihoming
HIPv2's design enables devices to change networks without losing the session, a crucial advantage for mobile and IoT applications. This capability to switch networks seamlessly makes HIPv2 well-suited for devices that require constant and reliable connectivity, such as mobile phones and IoT sensors. Additionally, HIPv2 facilitates multihoming, allowing simultaneous connections to multiple networks, which improves connection resilience and availability.
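To convey the flavour of identifier/locator separation, the sketch below hashes placeholder public-key bytes and formats 128 bits of the digest in an IPv6-like notation. This is purely illustrative: the real Host Identity Tag is derived from the public key with the ORCHID construction specified in the HIP RFCs, not with the simplified scheme shown here.

```python
# Illustrative only: HIP identifies a host by its public key and uses a 128-bit
# hash-derived tag as a stand-in for an IP address. The real Host Identity Tag
# is built with the ORCHID construction defined in the HIP RFCs; the sketch
# below merely hashes placeholder public-key bytes and formats 128 bits in an
# IPv6-like notation to show the identifier/locator split in spirit.
import hashlib

def toy_host_identity_tag(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).digest()[:16]   # truncate to 128 bits
    groups = [digest[i:i + 2].hex() for i in range(0, 16, 2)]
    return ":".join(groups)

# Placeholder "public key" material -- in practice this would be the host's
# actual public key in a canonical encoding.
print(toy_host_identity_tag(b"example host public key"))
```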
RFC references
- Host Identity Protocol (HIP) Architecture (early "informational" snapshot, obsoleted by RFC 9063)
- Host Identity Protocol base (Obsoleted by RFC 7401)
- Using the Encapsulating Security Payload (ESP) Transport Format with the Host Identity Protocol (HIP) (Obsoleted by RFC 7402)
- Host Identity Protocol (HIP) Registration Extension (obsoleted by RFC 8003)
- Host Identity Protocol (HIP) Rendezvous Extension (obsoleted by RFC 8004)
- Host Identity Protocol (HIP) Domain Name System (DNS) Extension (obsoleted by RFC 8005)
- End-Host Mobility and Multihoming with the Host Identity Protocol
- NAT and Firewall Traversal Issues of Host Identity Protocol (HIP) Communication
- Basic Requirements for IPv6 Customer Edge Routers
- Host identity protocol version 2 (HIPv2) (updated by RFC 8002)
- Using the Encapsulating Security Payload (ESP) transport format with the Host Identity Protocol (HIP)
- Host Identity Protocol Certificates
- Host Identity Protocol (HIP) Registration Extension
- Host Identity Protocol (HIP) Rendezvous Extension
- Host Identity Protocol (HIP) Domain Name System (DNS) Extension
- Host Mobility with the Host Identity Protocol
- Host Multihoming with the Host Identity Protocol
- Native NAT Traversal Mode for the Host Identity Protocol
- Host Identity Protocol Architecture
See also
Identifier-Locator Network Protocol (ILNP)
IPsec
Locator/Identifier Separation Protocol (LISP)
Mobile IP (MIP)
Proxy Mobile IPv6 (PMIPv6)
References
External links
IETF HIP working group
IRTF HIP research group
OpenHIP project
How HIP works - InfraHIP project archive
HIP simulation framework for OMNeT++.
Internet protocols
Multihoming
Cryptographic protocols
Computer network security
IPsec | Host Identity Protocol | [
"Engineering"
] | 913 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
33,311,026 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2057 | In molecular biology, glycoside hydrolase family 57 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 57 CAZY GH_57 comprises enzymes with several known activities: alpha-amylase, 4-alpha-glucanotransferase, α-galactosidase, amylopullulanase, and branching enzyme. It includes a thermostable alpha-amylase with a broad substrate specificity from the archaebacterium Pyrococcus furiosus.
External links
GH57 in CAZypedia
References
EC 3.2.1
Glycoside hydrolase families
Protein domains | Glycoside hydrolase family 57 | [
"Biology"
] | 265 | [
"Protein domains",
"Protein classification"
] |
33,312,058 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2080 | In molecular biology, glycoside hydrolase family 80 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 80 CAZY GH_80 includes enzymes with chitosanase activity.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 80 | [
"Biology"
] | 179 | [
"Protein families",
"Protein classification"
] |
33,312,550 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2044 | In molecular biology, glycoside hydrolase family 44 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 44 CAZY GH_44, formerly known as cellulase family J, includes enzymes with endoglucanase and xyloglucanase activities. The overall structure of enzymes in this family consists of a TIM-like barrel domain, a beta-sandwich domain and an active site with two glutamic acid residues, all of which are conserved between the endoglucanases and xyloglucanases in the family, with only minor differences.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 44 | [
"Biology"
] | 260 | [
"Protein families",
"Protein classification"
] |
33,314,617 | https://en.wikipedia.org/wiki/Radiation%20sensitivity | Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation. Examples of radiation-sensitive materials are silver chloride, photoresists, and biomaterials. Pine trees are more susceptible to radiation than birch trees, owing to the greater complexity of pine DNA compared with birch. Examples of radiation-insensitive materials are metals and ionic crystals such as quartz and sapphire. The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical and chemical change. Physical radiation effects that depend on diffusion can be thermally annealed, whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered.
See also
Geochronometry – the quantitative measurement of geologic time
Fission track dating – the radiometric dating technique based on fission fragments
Radiosensitivity – the susceptibility of living cells, tissues, organs or organisms to the effects of ionizing radiation
References
Radiation effects | Radiation sensitivity | [
"Physics",
"Materials_science",
"Engineering"
] | 221 | [
"Physical phenomena",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects"
] |
33,316,056 | https://en.wikipedia.org/wiki/Icosahedrite | Icosahedrite is the first known naturally occurring quasicrystal phase. It has the composition Al63Cu24Fe13 and is a mineral approved by the International Mineralogical Association in 2010. Its discovery followed a 10-year-long systematic search by an international team of scientists led by Luca Bindi and Paul J. Steinhardt to find the first natural quasicrystal.
It occurs as tiny grains in a small sample labelled "khatyrkite" (catalog number 46407/G, housed in The Museum of Natural History, University of Florence, Italy), collected from an outcrop of weathered serpentinite in the Khatyrka ultramafic zone of the Koryak-Kamchatka area, Koryak Mountains, Russia. The rock sample also contains spinel, diopside, forsterite, nepheline, sodalite, corundum, stishovite, khatyrkite, cupalite and an unnamed AlCuFe alloy. Evidence shows that the sample is actually extraterrestrial in origin, delivered to the Earth by a CV3 carbonaceous chondrite asteroid that dates back 4.5 Gya. A geological expedition has identified the exact place of the original discovery and found more specimens of the meteorite.
The same Al-Cu-Fe quasicrystal phase had previously been created in the laboratory by Japanese experimental metallurgists in the late 1980s.
The concept of quasicrystals — along with the term — was first introduced in 1984 by Steinhardt and Dov Levine, both then at the University of Pennsylvania. The first synthetic quasicrystal, a combination of aluminium and manganese, was reported in 1984 by Israeli materials scientist Dan Shechtman and colleagues at the U.S. National Institute of Standards and Technology, a finding for which Shechtman won the 2011 Nobel Prize for Chemistry.
References
Native element minerals
Aluminium minerals
Copper minerals
Iron minerals
Geology of Russia
Quasicrystals
Copper alloys
Aluminium alloys | Icosahedrite | [
"Physics",
"Chemistry",
"Materials_science"
] | 415 | [
"Copper alloys",
"Tessellation",
"Aluminium alloys",
"Crystallography",
"Alloys",
"Quasicrystals",
"Symmetry"
] |
33,320,313 | https://en.wikipedia.org/wiki/Oligocrystalline%20material | Oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in ingots produced by electron beam melting (EBM).
References
Crystallography | Oligocrystalline material | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 60 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
33,320,954 | https://en.wikipedia.org/wiki/Lysophosphatidylethanolamine | A lysophosphatidylethanolamine (LPE) is a chemical compound derived from a phosphatidylethanolamine, which is typical of cell membranes. LPE results from partial hydrolysis of phosphatidylethanolamine, which removes one of the fatty acid groups. The hydrolysis is generally the result of the enzymatic action of phospholipase A2. LPE is also used in agriculture to regulate plant growth, for example to increase fruit color, sugar content, plant health, and storability, reportedly without side effects.
LPE is present as a minor phospholipid in the cell membrane. It has been detected in human serum at levels of about several hundred ng/mL. Available sources of LPE are egg yolk lecithin (≤1.5%), soybean lecithin (≤0.2%), and other lecithins.
Function
Lysophosphatidylethanolamine (LPE, also lisophos) is a naturally occurring minor constituent of cell membranes. LPE plays a role in cell-mediated cell signaling and in the activation of other enzymes. The physiological significance of plasma LPE remains unknown. However, LPE has antifungal and antibacterial activity in the housefly, and in certain mushrooms it stimulates the MAPK cascade.
Previous studies showed that LPE, a natural phospholipid, can accelerate ripening and prolong the shelf life of tomato fruit, and retard senescence in attached and detached leaves and fruit of tomato. In other studies, LPE inhibited the activity of phospholipase D (PLD), a membrane-degrading enzyme whose activity increases during senescence. More recently, it has been reported that LPE can also accelerate color development and extend the shelf life of cranberries, and improve fruit quality attributes of Thompson Seedless grapes, such as soluble solids content (SSC), titratable acidity (TA), firmness, and size. Together, these results show that LPE can accelerate fruit ripening and also has the potential to delay senescence.
Structure and chemistry
Lysophosphatidylethanolamine (LPE) is composed of an ethanolamine head group and glycerophosphoric acid with a fatty acid at the sn-1 position. The fatty acid may be a saturated or unsaturated acyl group.
Chemical name: 1-Acyl-sn-glycero-3-phospho(2-aminoethanol)
CAS number: 95046-40-5
Molecular weight: ≅479
Uses
Lysophosphatidylethanolamine (LPE) is a minor membrane glycerolipid; however, it has been reported to have useful physiological effects on fruit and vegetable crops. LPE was approved by the United States Environmental Protection Agency for use on agricultural crops. It is used on tomatoes, peppers, grapes, cranberries, and oranges to increase color, sugar content, and storage life. In addition, it is reported that LPE can delay senescence in leaves and fruits and mitigate the stress of ethylene-induced processes.
SignaFresh, a brand-name product based on LPE, is marketed to increase crop value; preharvest application is claimed to lead to desirable postharvest behavior.
See also
Phospholipid
Phospholipase A2
Phospholipase D
Phenylalanine ammonia lyase (PAL)
Acid invertase (Ac INV)
SignaFresh
References
Phospholipids | Lysophosphatidylethanolamine | [
"Chemistry"
] | 780 | [
"Phospholipids",
"Signal transduction"
] |
3,983,509 | https://en.wikipedia.org/wiki/Effect%20of%20Sun%20angle%20on%20climate | The amount of heat energy received at any location on the globe is a direct effect of Sun angle on climate, as the angle at which sunlight strikes Earth varies by location, time of day, and season due to Earth's orbit around the Sun and Earth's rotation around its tilted axis. Seasonal change in the angle of sunlight, caused by the tilt of Earth's axis, is the basic mechanism that results in warmer weather in summer than in winter. Change in day length is another factor (albeit lesser).
Geometry of Sun angle
Figure 1 presents a case when sunlight shines on Earth at a lower angle (Sun closer to the horizon), the energy of the sunlight is spread over a larger area, and is therefore weaker than if the Sun is higher overhead and the energy is concentrated on a smaller area.
Figure 2 depicts a sunbeam one mile wide falling on the ground from directly overhead, and another hitting the ground at a 30° angle. Trigonometry tells us that the sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the sunbeam hitting the ground at a 30° angle spreads the same amount of light over twice as much area (if we imagine the Sun shining from the south at noon, the north–south width doubles; the east–west width does not). Consequently, the amount of light falling on each square mile is only half as much.
Figure 3 shows the angle of sunlight striking Earth in the Northern and Southern Hemispheres when Earth's northern axis is tilted away from the Sun, when it is winter in the north and summer in the south.
Obliquity, seasonality, and climate
Differing sun angle results in differing temperatures between lower and higher latitudes, and between winter and summer at the same latitude (although "winter" and "summer" are more complicated in the tropics).
At fixed latitude, the size of the seasonal difference in sun angle (and thus the seasonal temperature variation) is equal to double the Earth's axial tilt. For example, with an axial tilt of 23° and at a latitude of 45°, the summer's peak sun angle is 68° (giving sin(68°) ≈ 93% insolation at the surface), while winter's least sun angle is 22° (giving sin(22°) ≈ 37% insolation at the surface). Thus, the greater the axial tilt, the stronger the seasons' variations at a given latitude.
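The noon-solstice geometry above can be checked with a short calculation. This is a minimal sketch, not from the article; it uses the simple model "noon elevation = 90° − |latitude ∓ tilt|" and ignores atmospheric absorption, refraction, and orbital eccentricity.

```python
import math

def noon_insolation_fraction(latitude_deg, tilt_deg):
    """Relative surface insolation (sine of noon Sun elevation) at the two solstices."""
    summer_elev = 90.0 - abs(latitude_deg - tilt_deg)   # Sun highest at summer solstice
    winter_elev = 90.0 - (latitude_deg + tilt_deg)      # Sun lowest at winter solstice
    to_frac = lambda e: max(0.0, math.sin(math.radians(e)))
    return to_frac(summer_elev), to_frac(winter_elev)

# Example from the text: tilt 23 degrees, latitude 45 degrees
summer, winter = noon_insolation_fraction(45, 23)
print(f"summer: {summer:.0%}, winter: {winter:.0%}")    # roughly 93% and 37%
```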
In addition to seasonal variation at fixed latitude, the total annual surface insolation as a function of latitude also depends on the axial tilt. At the equator (0° latitude), on the equinoxes, the sun angle is always 90° no matter the axial tilt, while on the solstices the minimum sun angle is equal to 90° minus the tilt. Therefore, greater tilt means a lower minimum for the same maximum: less total annual surface insolation at the equator. At the poles (90° latitude), on the equinoxes and during polar night, the sun angle is always 0° or less no matter the axial tilt, while on the summer solstice, the maximum angle is equal to the tilt. Therefore, greater tilt means a higher maximum for the same minimum: more total annual surface insolation at the poles. Therefore, lesser tilt means a wider annual temperature gap between equator and poles, while greater tilt means a smaller annual temperature gap between equator and poles. (At an extreme tilt, such as that of Uranus, the poles can receive similar annual surface insolation to the equator.) In particular, at Earth temperatures, and all else being equal, greater tilt warms the poles and thus reduces polar ice coverage, while lesser tilt cools the poles and thus increases polar ice coverage.
One of the first to publish on these effects was Milutin Milanković; the cyclic effects of axial tilt, eccentricity, and other orbital parameters upon global climate were named Milanković cycles. Although individual mechanisms (such as axial tilt and sun angle) are thought to be understood, the overall impact of orbital forcing on global climate remains poorly constrained.
See also
Sun path
Axial tilt
Solar irradiance (insolation)
Orbital forcing
Milanković cycles
Notes
References
Climate variability and change
Seasons
Atmospheric radiation | Effect of Sun angle on climate | [
"Physics"
] | 887 | [
"Physical phenomena",
"Earth phenomena",
"Seasons"
] |
3,984,093 | https://en.wikipedia.org/wiki/Ramelteon | Ramelteon, sold under the brand name Rozerem among others, is a melatonin agonist medication which is used in the treatment of insomnia. It is indicated specifically for the treatment of insomnia characterized by difficulties with sleep onset. It reduces the time taken to fall asleep, but the degree of clinical benefit is small. The medication is approved for long-term use. Ramelteon is taken by mouth.
Side effects of ramelteon include somnolence, dizziness, fatigue, nausea, exacerbated insomnia, and changes in hormone levels. Ramelteon is an analogue of melatonin and is a selective agonist of the melatonin MT1 and MT2 receptors. The half-life and duration of ramelteon are much longer than those of melatonin. Ramelteon is not a benzodiazepine or Z-drug and does not interact with GABA receptors, instead having a distinct mechanism of action.
Ramelteon was first described in 2002 and was approved for medical use in 2005. Unlike certain other sleep medications, ramelteon is not a controlled substance in nearly every country and has no known potential for misuse.
Medical uses
Insomnia
Ramelteon is approved for the treatment of insomnia characterized by difficulty with sleep onset in adults. In regulatory clinical trials, it was found to significantly reduce latency to persistent sleep (LPS). A 2009 pooled analysis of four clinical trials found that ramelteon at a dose of 8 mg reduced sleep onset by 13 minutes (30% decrease) relative to placebo on the first and second nights of use. Subsequent meta-analyses of longer-duration use have found that ramelteon decreases subjective sleep latency by about 4 to 7 minutes. Meta-analyses are mixed on whether ramelteon increases total sleep time. Ramelteon also improves sleep quality (SMD –0.074, 95% CI –0.13 to –0.02) and sleep efficiency. The clinical improvement in insomnia with ramelteon is small and of questionable benefit.
Ramelteon is approved in the United States but was not approved in the European Union owing to concerns that it lacked effectiveness. The Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) noted that ramelteon had only been found to improve sleep onset and not other sleep outcomes, only one of three clinical trials actually found that it improved sleep onset, and that the improvement in sleep onset was too small to be clinically meaningful. The CHMP also noted that the long-term effectiveness of ramelteon had not been demonstrated.
The American Academy of Sleep Medicine's 2017 clinical practice guidelines recommended the use of ramelteon in the treatment of sleep-onset insomnia. It rated the recommendation as weak and the quality of evidence as very low but concluded that the potential benefits outweighed the potential harms. The guidelines found that ramelteon reduces sleep latency by 9 minutes (95% CI 6–12 minutes) but does not improve sleep quality. In contrast to ramelteon, the guidelines did not recommend the use of melatonin.
Circadian rhythm sleep disorders
Melatonin receptor agonists like melatonin and tasimelteon are considered to be effective in regulating sleep–wake cycles and in the treatment of circadian rhythm sleep disorders like delayed sleep phase disorder. Ramelteon has been assessed in only a few studies in the treatment of circadian rhythm sleep disorders, including jet lag disorder, shift work disorder, and non-24-hour sleep–wake disorder. These studies have been of varying quality and their findings in terms of effectiveness have been mixed. Ramelteon is approved only for treatment of insomnia and is not approved for treatment of circadian rhythm sleep disorders. It was previously under development for treatment of circadian rhythm sleep disorders, but development for these indications was discontinued.
Other uses
Delirium
A systematic review, published in 2014, concluded "ramelteon was found to be beneficial in preventing delirium in medically ill individuals when compared to placebo." A 2022 systematic review and meta-analysis found that the combination of ramelteon and the orexin receptor antagonist suvorexant may reduce the incidence of delirium in hospitalized adult patients, whereas suvorexant alone was ineffective.
Bipolar disorder
Ramelteon has received attention in psychiatry as a possible add-on treatment for mania in bipolar disorder. However, to date, the scarce available evidence fails to support the clinical utility of ramelteon and other melatonin receptor agonists such as melatonin for mania.
Available forms
Ramelteon is available in the form of 8 mg oral film-coated tablets.
Contraindications
Ramelteon is not recommended for use in people with severe sleep apnea.
Adverse effects
Side effects of ramelteon include somnolence (3% vs. 2% for placebo), fatigue (3% vs. 2% for placebo), dizziness (4% vs. 3% for placebo), nausea (3% vs. 2% for placebo), and exacerbated insomnia (3% vs. 2% for placebo). Overall, side effects occurred in 6% with ramelteon and 2% with placebo in clinical trials. Side effects leading to discontinuation occurred in 1% or fewer people. Rarely, anaphylactic reactions, abnormal thinking, and worsening of depression or suicidal thinking in patients with pre-existing depression may occur with ramelteon. Ramelteon has been found to slightly increase prolactin levels in women (+34% vs. –4% with placebo) but not in men and to decrease free testosterone levels (by 3–6% in younger men and by 13–18% in older men).
Ramelteon has not been shown to produce dependence and has shown no potential for abuse. The withdrawal and rebound insomnia that is typical with GABAA receptor positive modulators like benzodiazepines and Z-drugs is not present in ramelteon.
Increased incidence of liver and testicular tumors have been observed with ramelteon in rodents but only at doses equivalent to at least 20 times greater than the recommended dose in humans.
Overdose
Ramelteon has been assessed at doses of up to 64 mg in clinical studies.
Interactions
Ramelteon has been evaluated for potential drug interactions with the following medications and showed no significant effects: omeprazole, theophylline, dextromethorphan, and midazolam, digoxin and warfarin. There were no clinically meaningful effects when ramelteon was coadministered with any of these drugs.
A drug interaction study showed that there were no clinically meaningful effects or an increase in adverse events when ramelteon and the SSRI Prozac (fluoxetine) were coadministered. When coadministered with ramelteon, fluvoxamine (strong CYP1A2 inhibitor) increased AUC approximately 190-fold, and the Cmax increased approximately 70-fold, compared to ramelteon administered alone. Ramelteon and fluvoxamine should not be coadministered.
Ramelteon has significant drug–drug interactions with strong CYP1A2 inhibitors such as fluvoxamine.
Ramelteon should be administered with caution in patients taking other CYP1A2 inhibitors, strong CYP3A4 inhibitors such as ketoconazole, and strong CYP2C9 inhibitors such as fluconazole.
Efficacy may be reduced when ramelteon is used in combination with potent CYP enzyme inducers such as rifampin, since ramelteon concentrations may be decreased.
Pharmacology
Pharmacodynamics
Ramelteon is a melatonin receptor agonist with both high affinity for melatonin MT1 and MT2 receptors and selectivity over the non-human MT3 receptor. Ramelteon demonstrates full agonist activity in vitro in cells expressing human MT1 or MT2 receptors, and high selectivity for human MT1 and MT2 receptors compared to the non-human MT3 receptor. The affinity of ramelteon for the MT1 and MT2 receptors is 3 to 16 times higher than that of melatonin. Ramelteon has 8-fold higher affinity for the MT1 receptor over the MT2 receptor. The binding profile of ramelteon distinguishes it from melatonin, tasimelteon, and agomelatine. Ramelteon has a clinically irrelevant affinity for the serotonin 5-HT1A receptor (Ki = 5.6 μM).
The major metabolite of ramelteon, M-II, is active and has approximately one-tenth and one-fifth the binding affinity of the parent molecule for the human MT1 and MT2 receptors, respectively, and is 17- to 25-fold less potent than ramelteon in in vitro functional assays. Although the potency of M-II at MT1 and MT2 receptors is lower than the parent drug, M-II circulates at higher concentrations than the parent producing 20- to 100-fold greater mean systemic exposure when compared to ramelteon. M-II has weak affinity for the serotonin 5-HT2B receptor, but no appreciable affinity for other receptors or enzymes. Similar to ramelteon, M-II does not interfere with the activity of a number of endogenous enzymes.
Ramelteon has no appreciable affinity for the GABA receptors or for receptors that bind neuropeptides, cytokines, serotonin, dopamine, noradrenaline, acetylcholine, and opioids. Ramelteon also does not interfere with the activity of a number of selected enzymes in a standard panel.
Mechanism of action
The activity of ramelteon at the MT1 and MT2 receptors in the suprachiasmatic nucleus of the hypothalamus is believed to contribute to its sleep-promoting properties, as these receptors, acted upon by endogenous melatonin, are thought to be involved in the maintenance of the circadian rhythm underlying the normal sleep–wake cycle.
Pharmacokinetics
Absorption
The total absorption of ramelteon is 84% while its oral bioavailability is 1.8%. The low bioavailability of ramelteon is due to extensive first-pass metabolism. Ramelteon has a higher lipophilicity than melatonin and thus permeates more easily into tissue. The absorption of ramelteon is rapid, with peak levels being reached after approximately 0.75 hours (range 0.5–1.5 hours). Food increases peak concentrations of ramelteon by 22% and overall exposure by 31% and delays the time to peak levels by approximately 0.75 hours. The pharmacokinetics of ramelteon are linear across a dose range of 4 to 64 mg. There is substantial interindividual variability in the peak concentrations and area-under-the-curve levels of ramelteon which is consistent with high first-pass metabolism.
Distribution
The volume of distribution of ramelteon is 73.6 L, which suggests substantial tissue distribution. The plasma protein binding of ramelteon is 82% independently of concentration. Ramelteon is primarily bound to albumin (70%). The medication is not selectively distributed to red blood cells.
Metabolism
Ramelteon is metabolized in the liver primarily by oxidation via hydroxylation and carbonylation. It is also secondarily metabolized to produce glucuronide conjugates. Ramelteon is metabolized mainly by CYP1A2 while CYP2C enzymes and CYP3A4 are involved to a minor extent. The metabolites of ramelteon include M-I, M-II, M-III, and M-IV. Exposure to M-II is approximately 20- to 100-fold higher than to ramelteon.
Elimination
Ramelteon is excreted 84% in urine and 4% in feces. Less than 0.1% of drug is excreted as unchanged ramelteon. Elimination of ramelteon is essentially complete by 96 hours following a single dose.
The elimination half-life of ramelteon is 1 to 2.6 hours while the half-life of M-II, the major active metabolite of ramelteon, is 2 to 5 hours. The half-lives of ramelteon and M-II are substantially longer than that of melatonin, which has a half-life in the range of 20 to 45 minutes. Levels of ramelteon and its metabolites are at or below the limit of detectability within 24 hours following a dose.
Special populations
Peak levels of ramelteon and overall exposure are about 86% and 97% higher, respectively, in elderly adults compared to younger adults. Conversely, peak levels of M-II are 13% and overall exposure 30% higher in elderly adults than in younger adults. The elimination half-life of ramelteon is 2.6 hours in elderly adults.
History
Ramelteon was first described in the medical literature in 2002. It was approved for use in the United States in July 2005.
Society and culture
Ramelteon has no potential for abuse or drug dependence and is not a controlled substance.
Research
Ramelteon, along with other melatonin receptor agonists like melatonin, has been repurposed in clinical trials as an adjunctive treatment for acute manic episodes in subjects with bipolar disorder. Nonetheless, meta-analytic evidence is based on very few trials and does not support the use of melatonin receptor agonists as adjunctive options for mania in clinical practice, although the small sample sizes do not allow ruling out a beneficial effect.
References
Benzofurans
Indanes
Melatonin receptor agonists
Propionamides
Sedatives
Drugs developed by Takeda Pharmaceutical Company | Ramelteon | [
"Chemistry"
] | 2,906 | [
"Melatonin receptor agonists",
"Drug discovery"
] |
3,984,582 | https://en.wikipedia.org/wiki/Generalized%20tree%20alignment | In computational phylogenetics, generalized tree alignment is the problem of producing a multiple sequence alignment and a phylogenetic tree on a set of sequences simultaneously, as opposed to separately.
Formally, Generalized tree alignment is the following optimization problem.
Input: A set $S$ of sequences and an edit distance function $d$ between sequences.
Output: A tree $T$ leaf-labeled by $S$ and labeled with sequences at the internal nodes, such that $\sum_{e \in E(T)} d(e)$ is minimized, where $d(e)$ is the edit distance between the endpoints of edge $e$.
Note that this is in contrast to tree alignment, where the tree is provided as input.
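As an illustration of the objective (a sketch of the scoring step only, not of an algorithm for solving the problem, which is computationally hard in general), the following code scores a fully labeled candidate tree by summing edit distances over its edges. The example tree, node names, and labels are hypothetical.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein (unit-cost) edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution or match
        prev = cur
    return prev[-1]

def tree_cost(labels: dict, edges: list) -> int:
    """Sum of edit distances over all edges of a fully labeled tree."""
    return sum(edit_distance(labels[u], labels[v]) for u, v in edges)

# Hypothetical example: three leaf sequences joined at one internal node "x".
labels = {"ACGT": "ACGT", "ACT": "ACT", "GCT": "GCT", "x": "ACT"}
edges = [("x", "ACGT"), ("x", "ACT"), ("x", "GCT")]
print(tree_cost(labels, edges))   # 1 + 0 + 1 = 2
```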
References
Computational phylogenetics | Generalized tree alignment | [
"Chemistry",
"Biology"
] | 114 | [
"Genetics techniques",
"Computational phylogenetics",
"Bioinformatics stubs",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"Phylogenetics"
] |
3,984,833 | https://en.wikipedia.org/wiki/Ripple%20%28electrical%29 | Ripple (specifically ripple voltage) in electronics is the residual periodic variation of the DC voltage within a power supply which has been derived from an alternating current (AC) source. This ripple is due to incomplete suppression of the alternating waveform after rectification. Ripple voltage originates as the output of a rectifier or from generation and commutation of DC power.
Ripple (specifically ripple current or surge current) may also refer to the pulsed current consumption of non-linear devices like capacitor-input rectifiers.
As well as these time-varying phenomena, there is a frequency domain ripple that arises in some classes of filter and other signal processing networks. In this case the periodic variation is a variation in the insertion loss of the network against increasing frequency. The variation may not be strictly linearly periodic. In this meaning also, ripple is usually to be considered an incidental effect, its existence being a compromise between the amount of ripple and other design parameters.
Ripple is wasted power, and has many undesirable effects in a DC circuit: it heats components, causes noise and distortion, and may cause digital circuits to operate improperly. Ripple may be reduced by an electronic filter, and eliminated by a voltage regulator.
Voltage ripple
A non-ideal DC voltage waveform can be viewed as a composite of a constant DC component (offset) with an alternating (AC) voltage—the ripple voltage—overlaid. The ripple component is often small in magnitude relative to the DC component, but in absolute terms, ripple (as in the case of HVDC transmission systems) may be thousands of volts. Ripple itself is a composite (non-sinusoidal) waveform consisting of harmonics of some fundamental frequency which is usually the original AC line frequency, but in the case of switched-mode power supplies, the fundamental frequency can be tens of kilohertz to megahertz. The characteristics and components of ripple depend on its source: there is single-phase half- and full-wave rectification, and three-phase half- and full-wave rectification. Rectification can be controlled (uses Silicon Controlled Rectifiers (SCRs)) or uncontrolled (uses diodes). There is in addition, active rectification which uses transistors.
Various properties of ripple voltage may be important depending on application: the equation of the ripple for Fourier analysis to determine the constituent harmonics; the peak (usually peak-to-peak) value of the voltage; the root mean square (RMS) value of the voltage which is a component of power transmitted; the ripple factor γ, the ratio of RMS value to DC voltage output; the conversion ratio (also called the rectification ratio or "efficiency") η, the ratio of DC output power to AC input power; and form-factor, the ratio of the RMS value of the output voltage to the average value of the output voltage. Analogous ratios for output ripple current may also be computed.
An electronic filter with high impedance at the ripple frequency may be used to reduce ripple voltage and increase or decrease DC output; such a filter is often called a smoothing filter.
The initial step in AC to DC conversion is to send the AC current through a rectifier. The ripple voltage output is very large in this situation; the peak-to-peak ripple voltage is equal to the peak AC voltage minus the forward voltage of the rectifier diodes. In the case of a solid-state silicon diode, the forward voltage is 0.7 V; for vacuum tube rectifiers, forward voltage usually ranges between 25 and 67 V (5R4). The output voltage is a sine wave with the negative half-cycles inverted. The equation is:
$$v(t) = V_\mathrm{peak}\left|\sin(\omega t)\right|$$
The Fourier expansion of the function is:
$$v(t) = V_\mathrm{peak}\left(\frac{2}{\pi} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\cos(2n\omega t)}{4n^2 - 1}\right)$$
Several relevant properties are apparent on inspection of the Fourier series:
the constant (largest) term must be the DC voltage
the fundamental (line frequency) is not present
the expansion consists of only even harmonics of the fundamental
the amplitude of the harmonics is proportional to $\frac{1}{4n^2 - 1}$, where $n$ indexes the harmonic in the expansion above
the term for the second-order harmonic is often used to represent the entire ripple voltage to simplify computation
The output voltages are:
$$V_\mathrm{dc} = V_\mathrm{av} = \frac{1}{T}\int_0^T v_L(t)\,\mathrm{d}t = \frac{2 V_\mathrm{peak}}{\pi} \qquad V_\mathrm{rms} = \sqrt{\frac{1}{T}\int_0^T v_L(t)^2\,\mathrm{d}t} = \frac{V_\mathrm{peak}}{\sqrt{2}}$$
where
$v_L(t)$ is the time-varying voltage across the load, for period 0 to T
$T$ is the period of $v_L(t)$; may be taken as $2\pi$ radians
The ripple factor is:
$$\gamma = \sqrt{\left(\frac{V_\mathrm{rms}}{V_\mathrm{dc}}\right)^2 - 1} \approx 0.483$$
The form factor is:
$$FF = \frac{V_\mathrm{rms}}{V_\mathrm{dc}} = \frac{\pi}{2\sqrt{2}} \approx 1.11$$
The peak factor is:
$$PF = \frac{V_\mathrm{peak}}{V_\mathrm{rms}} = \sqrt{2}$$
The conversion ratio is:
$$\eta = \frac{P_\mathrm{dc}}{P_\mathrm{ac}}$$
The transformer utilization factor is:
$$TUF = \frac{P_\mathrm{dc}}{\text{VA rating of the transformer}}$$
Filtering
Reducing ripple is only one of several principal considerations in power supply filter design. The filtering of ripple voltage is analogous to filtering other kinds of signals. However, in AC/DC power conversion as well as DC power generation, high voltages and currents or both may be output as ripple. Therefore, large discrete components like high ripple-current rated electrolytic capacitors, large iron-core chokes and wire-wound power resistors are best suited to reduce ripple to manageable proportions before passing the current to an IC component like a voltage regulator, or on to the load. The kind of filtering required depends on the amplitude of the various harmonics of the ripple and the demands of the load. For example, a moving coil (MC) input circuit of a phono preamplifier may require that ripple be reduced to no more than a few hundred nanovolts (10⁻⁹ V). In contrast, a battery charger, being a wholly resistive circuit, does not require any ripple filtering. Since the desired output is direct current (essentially 0 Hz), ripple filters are usually configured as low pass filters characterized by shunt capacitors and series chokes. Series resistors may replace chokes for reducing the output DC voltage, and shunt resistors may be used for voltage regulation.
Filtering in power supplies
Most power supplies are now switched mode designs. The filtering requirements for such power supplies are much easier to meet owing to the high frequency of the ripple waveform. The ripple frequency in switch-mode power supplies is not related to the line frequency, but is instead a multiple of the frequency of the chopper circuit, which is usually in the range of 50kHz to 1MHz.
Capacitor vs choke input filters
A capacitor input filter (in which the first component is a shunt capacitor) and choke input filter (which has a series choke as the first component) can both reduce ripple, but have opposing effects on voltage and current, and the choice between them depends on the characteristics of the load. Capacitor input filters have poor voltage regulation, so are preferred for use in circuits with stable loads and low currents (because low currents reduce ripple here). Choke input filters are preferred for circuits with variable loads and high currents (since a choke outputs a stable voltage and higher current means less ripple in this case).
The number of reactive components in a filter is called its order. Each reactive component reduces signal strength by 6dB/octave above (or below for a high-pass filter) the corner frequency of the filter, so that a 2nd-order low-pass filter for example, reduces signal strength by 12dB/octave above the corner frequency. Resistive components (including resistors and parasitic elements like the DCR of chokes and ESR of capacitors) also reduce signal strength, but their effect is linear, and does not vary with frequency.
A common arrangement is to allow the rectifier to work into a large smoothing capacitor which acts as a reservoir. After a peak in output voltage the capacitor supplies the current to the load and continues to do so until the capacitor voltage has fallen to the value of the now rising next half-cycle of rectified voltage. At that point the rectifier conducts again and delivers current to the reservoir until peak voltage is again reached.
As a function of load resistance
If the RC time constant is large in comparison to the period of the AC waveform, then a reasonably accurate approximation can be made by assuming that the capacitor voltage falls linearly. A further useful assumption can be made if the ripple is small compared to the DC voltage. In this case the phase angle through which the rectifier conducts will be small and it can be assumed that the capacitor is discharging all the way from one peak to the next with little loss of accuracy.
With the above assumptions the peak-to-peak ripple voltage can be calculated as follows.
The definitions of capacitance and current are
$$C = \frac{Q}{V} \qquad I = \frac{Q}{t}$$
where $Q$ is the amount of charge. The current and time are taken from the start of capacitor discharge until the minimum voltage on a full-wave rectified signal, as shown in the figure to the right. The time would then be equal to half the period of the full-wave input, $t = \frac{T}{2} = \frac{1}{2f}$.
Combining the three equations above to determine $V_{pp}$ gives
$$V_{pp} = \frac{I\,t}{C}$$
Thus, for a full-wave rectifier:
$$V_{pp} = \frac{I}{2fC}$$
where
$V_{pp}$ is the peak-to-peak ripple voltage
$I$ is the current in the circuit
$f$ is the source (line) frequency of the AC power
$C$ is the capacitance
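As a quick illustrative calculation with made-up values, the formula can be applied directly:

```python
# Peak-to-peak ripple of a full-wave rectifier with a reservoir capacitor,
# using V_pp = I / (2 f C).  All numbers below are arbitrary examples.
I = 1.0        # load current, amperes
f = 60.0       # AC line frequency, hertz
C = 4700e-6    # reservoir capacitance, farads

v_pp = I / (2 * f * C)
print(f"peak-to-peak ripple: {v_pp:.2f} V")   # about 1.77 V
```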
For the RMS value of the ripple voltage, the calculation is more involved, as the shape of the ripple waveform has a bearing on the result. Assuming a sawtooth waveform is a similar assumption to the ones above. The RMS value of a sawtooth wave is $\frac{V_p}{\sqrt{3}}$, where $V_p$ is its peak voltage (half the peak-to-peak ripple voltage). With the further approximation that the ripple is small compared with the DC voltage, this yields the result:
$$\gamma = \frac{1}{4\sqrt{3}\,f C R}$$
where
$\gamma$ is the ripple factor
$R$ is the resistance of the load
For the approximated formula, it is assumed that XC ≪ R; this is a little larger than the actual value because a sawtooth wave comprises odd harmonics that aren't present in the rectified voltage.
As a function of series choke
Another approach to reducing ripple is to use a series choke. A choke has a filtering action and consequently produces a smoother waveform with fewer high-order harmonics. Against this, the DC output is close to the average input voltage as opposed to the voltage with the reservoir capacitor which is close to the peak input voltage. Starting with the Fourier term for the second harmonic, and ignoring higher-order harmonics,
the ripple factor is given by:
$$\gamma = \frac{2}{3\sqrt{2}}\cdot\frac{R}{\sqrt{R^2 + (2\omega L)^2}}$$
For $R \gg 2\omega L$ (negligible filtering), this reduces to $\gamma = \frac{2}{3\sqrt{2}} \approx 0.471$.
This is a little less than 0.483 because higher-order harmonics were omitted from consideration. (See Inductance.)
There is a minimum inductance (which is relative to the resistance of the load) required in order for a series choke to continuously conduct current. If the inductance falls below that value, current will be intermittent and output DC voltage will rise from the average input voltage to the peak input voltage; in effect, the inductor will behave like a capacitor. That minimum inductance, called the critical inductance, is $L_\mathrm{crit} = \frac{R}{6\pi f}$, where R is the load resistance and f the line frequency. This gives values of L = R/1131 (often stated as R/1130) for 60Hz mains rectification, and L = R/942 for 50Hz mains rectification. Additionally, interrupting current to an inductor will cause its magnetic flux to collapse exponentially; as current falls, a voltage spike composed of very high harmonics results which can damage other components of the power supply or circuit. This phenomenon is called flyback voltage.
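The quoted divisors can be checked directly; this is just an illustrative calculation:

```python
# Critical inductance for continuous conduction in a choke-input filter,
# L_crit = R / (6 * pi * f); values below reproduce the R/1131 and R/942 figures.
import math

def critical_inductance(R_load, f_line):
    return R_load / (6 * math.pi * f_line)

print(round(1 / critical_inductance(1, 60)))   # ~1131, i.e. L = R/1131 at 60 Hz
print(round(1 / critical_inductance(1, 50)))   # ~942,  i.e. L = R/942  at 50 Hz
```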
The complex impedance of a series choke is effectively part of the load impedance, so that lightly loaded circuits have increased ripple (just the opposite of a capacitor input filter). For that reason, a choke input filter is almost always part of an LC filter section, whose ripple reduction is independent of load current. The ripple factor is:
$$\gamma = \frac{\sqrt{2}}{3}\cdot\frac{X_C}{X_L}$$
where $X_C$ and $X_L$ are the reactances of the filter capacitor and choke, evaluated at twice the line frequency.
In high voltage/low current circuits, a resistor may replace the series choke in an LC filter section (creating an RC filter section). This has the effect of reducing the DC output as well as ripple. The ripple factor is
$$\gamma = \frac{\sqrt{2}}{3}\cdot\frac{X_C}{R}$$
if RL >> R, which makes an RC filter section practically independent of load,
where
$R$ is the resistance of the filter resistor and $X_C$ is the reactance of the filter capacitor at twice the line frequency
Similarly because of the independence of LC filter sections with respect to load, a reservoir capacitor is also commonly followed by one resulting in a low-pass Π-filter. A Π-filter results in a much lower ripple factor than a capacitor or choke input filter alone. It may be followed by additional LC or RC filter sections to further reduce ripple to a level tolerable by the load. However, use of chokes is deprecated in contemporary designs for economic reasons.
Voltage regulation
A more common solution where good ripple rejection is required is to use a reservoir capacitor to reduce the ripple to something manageable and then pass the current through a voltage regulator circuit. The regulator circuit, as well as providing a stable output voltage, will incidentally filter out nearly all of the ripple as long as the minimum level of the ripple waveform does not go below the voltage being regulated to. Switched-mode power supplies usually include a voltage regulator as part of the circuit.
Voltage regulation is based on a different principle than filtering: it relies on the peak inverse voltage of a diode or series of diodes to set a maximum output voltage; it may also use one or more voltage amplification devices like transistors to boost voltage during sags. Because of the non-linear characteristics of these devices, the output of a regulator is free of ripple. A simple voltage regulator may be made with a series resistor to drop voltage followed by a shunt zener diode whose Peak Inverse Voltage (PIV) sets the maximum output voltage; if voltage rises, the diode shunts away current to maintain regulation.
Effects of ripple
Ripple is undesirable in many electronic applications for a variety of reasons:
ripple represents wasted power that cannot be utilized by a circuit that requires direct current
ripple will cause heating in DC circuit components due to current passing through parasitic elements like ESR of capacitors
in power supplies, ripple voltage requires peak voltage of components to be higher; ripple current requires parasitic elements of components to be lower and dissipation capacity to be higher (components will be bigger, and quality will have to be higher)
transformers that supply ripple current to capacitive input circuits will need to have VA ratings that exceed their load (watt) ratings
The ripple frequency and its harmonics are within the audio band and will therefore be audible on equipment such as radio receivers, equipment for playing recordings and professional studio equipment.
The ripple frequency is within television video bandwidth. Analogue TV receivers will exhibit a pattern of moving wavy lines if too much ripple is present.
The presence of ripple can reduce the resolution of electronic test and measurement instruments. On an oscilloscope it will manifest itself as a visible pattern on screen.
Within digital circuits, it reduces the threshold, as does any form of supply rail noise, at which logic circuits give incorrect outputs and data is corrupted.
Ripple current
Ripple current is a periodic non-sinusoidal waveform derived from an AC power source characterized by high amplitude narrow bandwidth pulses.
The pulses coincide with peak or near peak amplitude of an accompanying sinusoidal voltage waveform.
Ripple current results in increased dissipation in parasitic resistive portions of circuits like ESR of capacitors, DCR of transformers and inductors, internal resistance of storage batteries. The dissipation is proportional to the current squared times resistance (I2R). The RMS value of ripple current can be many times the RMS of the load current.
Frequency-domain ripple
Ripple in the context of the frequency domain refers to the periodic variation in insertion loss with frequency of a filter or some other two-port network. Not all filters exhibit ripple, some have monotonically increasing insertion loss with frequency such as the Butterworth filter. Common classes of filter which exhibit ripple are the Chebyshev filter, inverse Chebyshev filter and the Elliptical filter. The ripple is not usually strictly linearly periodic as can be seen from the example plot. Other examples of networks exhibiting ripple are impedance matching networks that have been designed using Chebyshev polynomials. The ripple of these networks, unlike regular filters, will never reach 0 dB at minimum loss if designed for optimum transmission across the passband as a whole.
The amount of ripple can be traded for other parameters in the filter design. For instance, the rate of roll-off from the passband to the stopband can be increased at the expense of increasing the ripple without increasing the order of the filter (that is, the number of components has stayed the same). On the other hand, the ripple can be reduced by increasing the order of the filter while at the same time maintaining the same rate of roll-off.
See also
Rectifier, a non-linear device that is a principal source of ripple
Dynamo, the instrument of DC power generation, whose output contains a large ripple component
Ringing (signal), the natural response time domain analog of frequency domain ripple
Notes
References
Ryder, J D, Electronic Fundamentals & Applications, Pitman Publishing, 1970.
Millman-Halkias, Integrated Electronics, McGraw-Hill Kogakusha, 1972.
Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures McGraw-Hill 1964.
Electric power
Filter theory | Ripple (electrical) | [
"Physics",
"Engineering"
] | 3,519 | [
"Telecommunications engineering",
"Physical quantities",
"Filter theory",
"Power (physics)",
"Electric power",
"Electrical engineering"
] |
3,985,322 | https://en.wikipedia.org/wiki/Wave%E2%80%93particle%20duality%20relation | The wave–particle duality relation, also called the Englert–Greenberger–Yasin duality relation, or the Englert–Greenberger relation, relates the visibility, $V$, of interference fringes with the definiteness, or distinguishability, $D$, of the photons' paths in quantum optics. As an inequality:
$$D^2 + V^2 \le 1$$
Although it is treated as a single relation, it actually involves two separate relations, which mathematically look very similar. The first relation, derived by Daniel Greenberger and Allaine Yasin in 1988, is expressed as $P^2 + V^2 \le 1$. It was later extended to $P^2 + V^2 = 1$, providing an equality for the case of pure quantum states, by Gregg Jaeger, Abner Shimony, and Lev Vaidman in 1995. This relation involves correctly guessing which of the two paths the particle would have taken, based on the initial preparation. Here $P$ can be called the predictability. A year later Berthold-Georg Englert, in 1996, derived a related relation dealing with experimentally acquiring knowledge of the two paths using an apparatus, as opposed to predicting the path based on initial preparation. This relation is $D^2 + V^2 \le 1$. Here $D$ is called the distinguishability.
The significance of the relations is that they express quantitatively the complementarity of wave and particle viewpoints in double-slit experiments. The complementarity principle in quantum mechanics, formulated by Niels Bohr, says that the wave and particle aspects of quantum objects cannot be observed at the same time. The wave–particle duality relations make Bohr's statement more quantitative: an experiment can yield partial information about the wave and particle aspects of a photon simultaneously, but the more information a particular experiment gives about one, the less it will give about the other. The predictability, which expresses the degree of probability with which the path of the particle can be correctly guessed, and the distinguishability, which is the degree to which one can experimentally acquire information about the path of the particle, are measures of the particle information, while the visibility of the fringes is a measure of the wave information. The relations show that they are inversely related: as one goes up, the other goes down. Fringes are visible over a wide range of distinguishability.
The mathematics of two-slit diffraction
This section reviews the mathematical formulation of the double-slit experiment. The formulation is in terms of the diffraction and interference of waves. The culmination of the development is a presentation of two numbers that characterizes the visibility of the interference fringes in the experiment, linked together as the Englert–Greenberger duality relation. The next section will discuss the orthodox quantum mechanical interpretation of the duality relation in terms of wave–particle duality.
The wave function in the Young double-aperture experiment can be written as
$$\Psi(x) = C_A \Psi_A(x) + C_B \Psi_B(x).$$
The function
$$\Psi_A(x) = \Psi_0(x - x_A)$$
is the wave function associated with the pinhole at A centered on $x_A$; a similar relation holds for pinhole B. The variable $x$ is a position in space downstream of the slits. The constants $C_A$ and $C_B$ are proportionality factors for the corresponding wave amplitudes, and $\Psi_0(x)$ is the single hole wave function for an aperture centered on the origin. The single-hole wave-function is taken to be that of Fraunhofer diffraction; the pinhole shape is irrelevant, and the pinholes are considered to be idealized. The wave is taken to have a fixed incident momentum $p_0 = h/\lambda$:
$$\Psi_0(x) \propto \frac{e^{i p_0 |x|/\hbar}}{|x|}$$
where $|x|$ is the radial distance from the pinhole.
To distinguish which pinhole a photon passed through, one needs some measure of the distinguishability between pinholes. Such a measure is given by
$$P = \left|P_A - P_B\right|,$$
where $P_A$ and $P_B$ are the probabilities of finding that the particle passed through aperture A and aperture B respectively.
Since the Born probability measure is given by
$$P_A = \frac{|C_A|^2}{|C_A|^2 + |C_B|^2}$$
and
$$P_B = \frac{|C_B|^2}{|C_A|^2 + |C_B|^2},$$
then we get:
$$P = \frac{\left|\,|C_A|^2 - |C_B|^2\,\right|}{|C_A|^2 + |C_B|^2}.$$
We have in particular $P = 0$ for two symmetric holes and $P = 1$ for a single aperture (perfect distinguishability). In the far-field of the two pinholes the two waves interfere and produce fringes. The intensity of the interference pattern at a point y in the focal plane is given by
$$I(y) \propto 1 + V\cos\!\left(\frac{p_y d}{\hbar} + \varphi\right)$$
where $p_y = p_0 \sin\alpha$ is the momentum of the particle along the y direction, $\varphi$ is a fixed phase shift, and $d$ is the separation between the two pinholes. The angle α from the horizontal is given by $\sin\alpha \simeq \tan\alpha = y/L$, where $L$ is the distance between the aperture screen and the far field analysis plane. If a lens is used to observe the fringes in the rear focal plane, the angle is given by $\tan\alpha = y/f$, where $f$ is the focal length of the lens.
The visibility of the fringes is defined by
$$V = \frac{I_\mathrm{max} - I_\mathrm{min}}{I_\mathrm{max} + I_\mathrm{min}},$$
where $I_\mathrm{max}$ and $I_\mathrm{min}$ denote the maximum and minimum intensity of the fringes respectively. By the rules of constructive and destructive interference we have
$$V = \frac{2|C_A|\,|C_B|}{|C_A|^2 + |C_B|^2}.$$
Equivalently, this can be written as
$$P^2 + V^2 = \left(\frac{|C_A|^2 - |C_B|^2}{|C_A|^2 + |C_B|^2}\right)^2 + \left(\frac{2|C_A|\,|C_B|}{|C_A|^2 + |C_B|^2}\right)^2 = 1.$$
And hence we get, for a single photon in a pure quantum state, the duality relation
$$P^2 + V^2 = 1.$$
There are two extremal cases with a straightforward intuitive interpretation: In a single hole experiment, the fringe visibility is zero (as there are no fringes). That is, $V = 0$, but $P = 1$ since we know (by definition) which hole the photon passed through. On the other hand, for a two slit configuration, where the two slits are indistinguishable with $|C_A| = |C_B|$, one has perfect visibility with $I_\mathrm{min} = 0$ and hence $V = 1$. Hence in both these extremal cases we also have $P^2 + V^2 = 1$.
The above presentation was limited to a pure quantum state. More generally, for a mixture of quantum states, one will have
$$P^2 + V^2 \le 1.$$
For the remainder of the development, we assume the light source is a laser, so that we can assume $P^2 + V^2 = 1$ holds, following from the coherence properties of laser light.
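As a quick numerical illustration (not from the article), the pure-state equality can be verified for arbitrary two-path amplitudes; the amplitude values below are arbitrary examples.

```python
# For a pure two-path state with amplitudes C_A and C_B, compute the
# predictability P and fringe visibility V and confirm P**2 + V**2 == 1.

def predictability_and_visibility(c_a: complex, c_b: complex):
    norm = abs(c_a) ** 2 + abs(c_b) ** 2
    P = abs(abs(c_a) ** 2 - abs(c_b) ** 2) / norm   # which-path predictability
    V = 2 * abs(c_a) * abs(c_b) / norm              # fringe visibility
    return P, V

for c_a, c_b in [(1, 1), (1, 0.3j), (1, 0)]:
    P, V = predictability_and_visibility(c_a, c_b)
    print(f"P={P:.3f}  V={V:.3f}  P^2+V^2={P**2 + V**2:.3f}")
```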
Complementarity
The mathematical discussion presented above does not require quantum mechanics at its heart. In particular, the derivation is essentially valid for waves of any sort. With slight modifications to account for the squaring of amplitudes, the derivation could be applied to, for example, sound waves or water waves in a ripple tank.
For the relation to be a precise formulation of Bohr complementarity, one must introduce wave–particle duality in the discussion. This means one must consider both wave and particle behavior of light on an equal footing. Wave–particle duality implies that one must A) use the unitary evolution of the wave before the observation and B) consider the particle aspect after the detection (this is called the Heisenberg–von Neumann collapse postulate). Indeed, since one could only observe the photon in one point of space (a photon can not be absorbed twice) this implies that the meaning of the wave function is essentially statistical and cannot be confused with a classical wave (such as those that occur in air or water).
In this context the direct observation of a photon in the aperture plane precludes the following recording of the same photon in the focal plane (F). Reciprocally the observation in (F) means that we did not absorb the photon before. If both holes are open this implies that we don't know where we would have detected the photon in the aperture plane. $P$ thus defines the predictability of the two holes A and B.
A maximal value of predictability $P = 1$ means that only one hole (say A) is open. If now we detect the photon at (F), we know that that photon would have been detected in A necessarily. Conversely, $P = 0$ means that both holes are open and play a symmetric role. If we detect the photon at (F), we don't know where the photon would have been detected in the aperture plane; $P = 0$ characterizes our ignorance.
Similarly, if $P = 0$ then $V = 1$, and this means that a statistical accumulation of photons at (F) builds up an interference pattern with maximal visibility. Conversely, $P = 1$ implies $V = 0$, and thus no fringes appear after a statistical recording of several photons.
The above treatment formalizes wave particle duality for the double-slit experiment.
See also
Afshar experiment
Quantum entanglement
Quantum indeterminacy
References and notes
Further reading
Demonstrates that quantum interference effects are destroyed by irreversible object-apparatus correlations ("measurement"), not by Heisenberg's uncertainty principle itself. See also
External links
Quantum optics | Wave–particle duality relation | [
"Physics"
] | 1,605 | [
"Quantum optics",
"Quantum mechanics"
] |
3,986,055 | https://en.wikipedia.org/wiki/Unimate | Unimate was the first industrial robot,
which worked on a General Motors assembly line at the Inland Fisher Guide Plant in Ewing Township, New Jersey, in 1961. There was in fact a family of Unimate robots.
History
It was invented by George Devol in the 1950s using his original patent filed in 1954 and granted in 1961 (). The patent is titled "Programmed Article Transfer" (PAT) and begins:
The present invention relates to the automatic operation of machinery, particularly the handling apparatus, and to automatic control apparatus suited for such machinery.
Devol, together with Joseph Engelberger, his business associate, started the world's first robot manufacturing company, Unimation. Devol's background wasn't in academia, but in engineering and mechanics, and previously worked on optical sound recording for film and high-speed printing using magnetic sensing and recording. Engelberger's ultimate goal was to create mechanical workers to replace humans in factories.
The machine weighed 4000 pounds and undertook the job of transporting die castings from an assembly line and welding these parts on auto bodies, a dangerous task for workers, who might be poisoned by toxic fumes or lose a limb if they were not careful.
The original Unimate consisted of a large computer-like box, joined to another box and was connected to an arm, with systematic tasks stored in a drum memory.
In 2003 the Unimate was inducted into the Robot Hall of Fame.
Technology
Unimate
The 1961 Unimate installed at a General Motors factory differed significantly from George Devol's 1954 patented design. The Unimate was a hydraulically actuated programmable manipulator arm with 5 degrees of freedom. This contrasted with the simpler three-prismatic-link pick-and-place arm described in Devol's "Programmed Article Transfer" (PAT) patent.
Devol's patent
Devol's earlier methodology, involving the conversion of analog information into electrical signals, formed the basis for subsequent patents. The patent proposed a cost-effective, general-purpose article-handling machine for diverse industrial tasks, with programmable motions, including gripping mechanisms. It would have a wheeled chassis on rails, a base unit housing the movement-recording program drum, an elevator for vertical arm translation, a telescoping arm with a transfer head and gripper, and a three-dimensional position-sensing system using encoders and sensing heads.
The position-sensing system (proprioception) had two versions: one using notched metal strips and ferrous material detection, the other, a magnetized plate with polarity-based sensing and recording. Similarly, the program drum had two forms: a malleable metal sheet with mechanically deformed bulges, and a solid magnetizable drum. Both drum types used corresponding reading mechanisms.
The robot could be programmed by manually moving the gripper and recording each location on the program drum; it could then repeat the same motion, an early form of imitation learning. This resulted in point-to-point movement, a standard feature in modern robotic arms. The magnetizable drum also allowed recording continuous movements along curved paths, synchronized with a timing reference for playback.
The fixed encoder array on the base unit served as a location index for recording, enabling deceleration near programmed positions and self-correction during operation.
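The point-to-point programming described above can be illustrated with a toy record-and-playback sketch (an illustrative model only, not Devol's actual electromechanical design; the class and positions below are hypothetical):

```python
class ProgramDrum:
    """Toy model of the patent's program drum: positions taught by
    hand-guiding the gripper are stored in order, then replayed point to
    point on playback."""

    def __init__(self):
        self.steps = []                 # recorded target positions

    def record(self, position):
        self.steps.append(tuple(position))

    def playback(self, move_to):
        for target in self.steps:
            move_to(target)             # drive the arm toward each stored point

drum = ProgramDrum()
drum.record((120, 45, 300))             # taught by moving the gripper by hand
drum.record((80, 10, 150))
drum.playback(lambda p: print("moving to", p))
```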
In popular culture
The Unimate appeared on The Tonight Show hosted by Johnny Carson on which it knocked a golf ball into a cup, poured a beer, waved the orchestra conductor's baton and grasped an accordion and waved it around.
Fictional robots called Unimate, designed by the character Alan von Neumann, Jr., appeared in comic books from DC Comics.
References
External links
Electronic robot 'Unimate' works in a building in Connecticut, United States. Newsreel footage
Industrial robots
Historical robots
1956 robots
Robotics at Unimation | Unimate | [
"Engineering"
] | 779 | [
"Industrial robots"
] |
3,986,613 | https://en.wikipedia.org/wiki/Perfect%20phylogeny | Perfect phylogeny is a term used in computational phylogenetics to denote a phylogenetic tree in which all internal nodes may be labeled such that all characters evolve down the tree without homoplasy. That is, the characters are not subject to evolutionary convergence and do not arise as analogous structures. Statistically, this can be represented as an ancestor having state "0" in all characteristics where 0 represents a lack of that characteristic. Each of these characteristics changes from 0 to 1 exactly once and never reverts to state 0. It is rare that actual data adheres to the concept of perfect phylogeny.
Building
In general there are two different data types that are used in the construction of a phylogenetic tree. In distance-based computations, a phylogenetic tree is created by analyzing the relationships between the pairwise distances among species and the edge lengths of a corresponding tree. A character-based approach instead employs character states across species as input in an attempt to find the most "perfect" phylogenetic tree.
The statistical components of a perfect phylogenetic tree can best be described as follows:
A perfect phylogeny for an n x m character state matrix M is a rooted tree T with n leaves satisfying:
i. Each row of M labels exactly one leaf of T
ii. Each column of M labels exactly one edge of T
iii. Every interior edge of T is labeled by at least one column of M
iv. The characters associated with the edges along the unique path from root to a leaf v exactly specify the character vector of v, i.e. the character vector has a 1 entry in all columns corresponding to characters associated to path edges and a 0 entry otherwise.
It is worth noting that it is very rare to find actual phylogenetic data that adheres to the concepts and limitations detailed here. Therefore, it is often the case that researchers are forced to compromise by developing trees that simply try to minimize homoplasy, finding a maximum-cardinality set of compatible characters, or constructing phylogenies that match as closely as possible to the partitions implied by the characters.
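For binary characters with an all-zero ancestral state, these conditions are equivalent to the classical three-gamete (splits equivalence) test: a matrix admits a perfect phylogeny exactly when no pair of columns contains all three of the row patterns (0,1), (1,0) and (1,1). A minimal sketch (the example matrices are hypothetical, not the matrices discussed in the example below):

```python
from itertools import combinations

def has_perfect_phylogeny(matrix):
    """Three-gamete test: a 0/1 character matrix (rows = taxa, columns =
    characters, ancestral state assumed all-zero) admits a perfect phylogeny
    iff no pair of columns exhibits all three gametes (0,1), (1,0), (1,1)."""
    n_cols = len(matrix[0])
    for i, j in combinations(range(n_cols), 2):
        gametes = {(row[i], row[j]) for row in matrix}
        if {(0, 1), (1, 0), (1, 1)} <= gametes:
            return False                # columns i and j are incompatible
    return True

# Hypothetical matrices: A admits a perfect phylogeny, B does not.
A = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
B = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(has_perfect_phylogeny(A), has_perfect_phylogeny(B))   # True False
```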
Example
Both of these data sets illustrate examples of character state matrices. Using matrix M1, one is able to observe that the resulting phylogenetic tree can be created such that each of the characters labels exactly one edge of the tree. In contrast, when observing matrix M2, one can see that there is no way to set up the phylogenetic tree such that each character labels only one edge. If the samples come from variant allelic frequency (VAF) data of a population of cells under study, the entries in the character matrix are frequencies of mutations, and take a value between 0 and 1. Namely, if p represents a position in the genome, then the entry corresponding to position p and sample s will hold the frequency of genomes in sample s with a mutation in position p.
Usage
Perfect phylogeny is a theoretical framework that can also be used in more practical methods. One such example is that of Incomplete Directed Perfect Phylogeny. This concept involves utilizing perfect phylogenies with real, and therefore incomplete and imperfect, datasets. Such a method utilizes SINEs to determine evolutionary similarity. These Short Interspersed Elements are present across many genomes and can be identified by their flanking sequences. SINEs provide information on the inheritance of certain traits across different species. Unfortunately, if a SINE is missing it is difficult to know whether those SINEs were present prior to the deletion. By utilizing algorithms derived from perfect phylogeny data we are able to attempt to reconstruct a phylogenetic tree in spite of these limitations.
Perfect phylogeny is also used in the construction of haplotype maps. By utilizing the concepts and algorithms described in perfect phylogeny one can determine information regarding missing and unavailable haplotype data. By assuming that the set of haplotypes that result from genotype mapping corresponds and adheres to the concept of perfect phylogeny (as well as other assumptions such as perfect Mendelian inheritance and the fact that there is only one mutation per SNP), one is able to infer missing haplotype data.
Inferring a phylogeny from noisy VAF data under the perfect phylogeny model (PPM) is a hard problem. Most inference tools include some heuristic step to make inference computationally tractable. Examples of tools that infer phylogenies from noisy VAF data include AncesTree, Canopy, CITUP, EXACT, and PhyloWGS. In particular, EXACT performs exact inference by using GPUs to compute a posterior probability on all possible trees for small size problems. Extensions to the PPM have been made with accompanying tools. For example, tools such as MEDICC, TuMult, and FISHtrees allow the number of copies of a given genetic element, or ploidy, to either increase or decrease, thus effectively allowing the removal of mutations.
See also
List of phylogenetics software
References
External links
One of several programs available for analysis and creation of phylogenetic trees
Another such program for phylogenetic tree analysis
Additional program for tree analysis
A paper detailing an example of how perfect phylogeny can be utilized outside of the field of genetics, as in language association
Github for "Algorithm for clonal tree reconstruction from multi-sample cancer sequencing data" (AncesTree)
Github for "Accessing Intra-Tumor Heterogeneity and Tracking Longitudinal and Spatial Clonal Evolutionary History by Next-Generation Sequencing" (Canopy)
Github for "Clonality Inference in Tumors Using Phylogeny" (CITUP)
Github for "Exact inference under the perfect phylogeny model" (EXACT)
Github for "Reconstructing subclonal composition and evolution from whole-genome sequencing of tumors" (PhyloWGS)
Computational phylogenetics | Perfect phylogeny | [
"Biology"
] | 1,184 | [
"Bioinformatics",
"Phylogenetics",
"Computational phylogenetics",
"Genetics techniques"
] |
3,988,708 | https://en.wikipedia.org/wiki/Arrhenotoky | Arrhenotoky (from Greek ἄρρην árrhēn "male" and τόκος tókos "birth"), also known as arrhenotokous parthenogenesis, is a form of parthenogenesis in which unfertilized eggs develop into males. In most cases, parthenogenesis produces exclusively female offspring, hence the distinction.
Overview
The set of processes included under the term arrhenotoky depends on the author: arrhenotoky may be restricted to the production of males that are haploid (haplodiploidy); may include diploid males that permanently inactivate one set of chromosomes (parahaploidy); or may be used to cover all cases of males being produced by parthenogenesis (including such cases as aphids, where the males are XO diploids). The form of parthenogenesis in which females develop from unfertilized eggs is known as thelytoky; when both males and females develop from unfertilized eggs, the term "deuterotoky" is used.
In the most commonly used sense of the term, arrhenotoky is synonymous with haploid arrhenotoky or haplodiploidy: the production of haploid males from unfertilized eggs in insects having a haplodiploid sex-determination system. Males are produced parthenogenetically, while diploid females are usually produced biparentally from fertilized eggs. In a similar phenomenon, parthenogenetic diploid eggs develop into males by converting one set of their chromosomes to heterochromatin, thereby inactivating those chromosomes. This is referred to as diploid arrhenotoky or parahaploidy.
Arrhenotoky occurs in members of the insect order Hymenoptera (bees, ants, and wasps) and the Thysanoptera (thrips). The system also occurs sporadically in some spider mites, Hemiptera, Coleoptera (bark beetles), Scorpiones (Tityus metuendus) and rotifers.
See also
Apomixis
Genomic imprinting
Pseudo-arrhenotoky
Thelytoky
Notes
References
Reproduction in animals
Asexual reproduction
Sex-determination systems | Arrhenotoky | [
"Biology"
] | 486 | [
"Reproduction in animals",
"Behavior",
"Reproduction",
"Sex-determination systems",
"Sex",
"Asexual reproduction"
] |
3,990,198 | https://en.wikipedia.org/wiki/Boiler%20feedwater%20pump | A boiler feedwater pump is a specific type of pump used to pump feedwater into a steam boiler. The water may be freshly supplied or returning condensate produced as a result of the condensation of the steam produced by the boiler. These pumps are normally high pressure units that take suction from a condensate return system and can be of the centrifugal pump type or positive displacement type.
Construction and operation
Feedwater pumps range in size up to many kilowatts and the electric motor is usually separated from the pump body by some form of mechanical coupling. Large industrial condensate pumps may also serve as the feedwater pump. In either case, to force the water into the boiler, the pump must generate sufficient pressure to overcome the steam pressure developed by the boiler. This is usually accomplished through the use of a centrifugal pump.
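The head the pump must develop can be estimated from the boiler pressure it has to overcome plus elevation and friction allowances; a minimal sketch with purely illustrative figures (the pressures, flow rate and losses below are assumptions, not values from this article):

```python
RHO = 958.0   # kg/m^3, water density near 100 degrees C (illustrative)
G = 9.81      # m/s^2

def required_head_m(boiler_pressure_pa, suction_pressure_pa,
                    static_lift_m=0.0, friction_loss_m=0.0):
    """Head the feed pump must develop: the pressure difference converted to
    metres of water column, plus elevation and pipe-friction allowances."""
    dp = boiler_pressure_pa - suction_pressure_pa
    return dp / (RHO * G) + static_lift_m + friction_loss_m

def hydraulic_power_w(flow_m3_s, head_m):
    """Ideal (loss-free) pumping power for the given flow and head."""
    return RHO * G * flow_m3_s * head_m

# Example: a 10 bar (1.0 MPa) boiler fed from an atmospheric deaerator at
# about 5 kg/s (~0.0052 m^3/s) -- roughly 100 m of head and a few kW.
head = required_head_m(1.0e6, 1.0e5, static_lift_m=5.0, friction_loss_m=3.0)
print(round(head, 1), "m head;",
      round(hydraulic_power_w(0.0052, head) / 1e3, 2), "kW")
```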
Another common form of feedwater pump runs constantly and is provided with a minimum flow device to stop overpressuring the pump on low flows. The minimum flow usually returns to the tank or deaerator.
Failure of mechanical seal
Mechanical seals of boiler feedwater pumps often show signs of electrical corrosion. The relative movement between the sliding ring and the stationary ring provokes static charging which is not dissipated due to the very low conductivity of the boiler water (below one microsiemens per cm [μS/cm]). Within short periods of operation – in some cases, only a few hundred operating hours – fingertip-sized pieces break off from the sliding and/or the stationary ring and cause rapid increases in leakage. Diamond-like carbon (DLC) coated mechanical seals avoid this problem and extend durability remarkably.
Steam-powered pumps
Steam locomotives and the steam engines used on ships and stationary applications such as power plants also require feedwater pumps. In this situation, though, the pump was often powered using a small steam engine that ran using the steam produced by the boiler. A means had to be provided, of course, to put the initial charge of water into the boiler (before steam power was available to operate the steam-powered feedwater pump). The pump was often a positive displacement pump that had steam valves and cylinders at one end and feedwater cylinders at the other end; no crankshaft was required.
Duplex steam pump
A duplex steam pump has two sets of steam and water cylinders. They are not physically connected but the steam valves on the first pump are operated by the movement of the second pump's piston rod, and vice versa. The result is that there are no "dead spots" and the pump is always self-starting.
Jet injector
An injector pump uses the Venturi effect and steam condensation to deliver water to a boiler.
See also
Boiler feedwater
References
Fluid dynamics
Pumps
Steam boiler components
Boilers | Boiler feedwater pump | [
"Physics",
"Chemistry",
"Engineering"
] | 573 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Piping",
"Boilers",
"Pressure vessels",
"Fluid dynamics"
] |
38,824,010 | https://en.wikipedia.org/wiki/Toxicological%20Sciences | Toxicological Sciences is a monthly peer-reviewed scientific journal which covers all aspects of research on toxicology. It is published by Oxford University Press on behalf of the Society of Toxicology. It was established in 1981 as Fundamental and Applied Toxicology and obtained its current name in 1998. The current editor-in-chief is Jeffrey M. Peters, a professor of molecular toxicology and carcinogenesis at The Pennsylvania State University, and the Managing Editor is Virginia F. Hawkins. The editorial staff also includes Associate Editors in subject areas and an editorial board of topic experts. While its ISO 4 abbreviation is Toxicol. Sci. it is commonly referred to as ToxSci.
Abstracting and indexing
The journal is abstracted and indexed in Biological Abstracts, BIOSIS, CAB International, Chemical Abstracts Service, Current Contents, EMBASE, Health & Safety Science Abstracts, Science Citation Index, and Toxicology Abstracts.
According to the Journal Citation Reports, the journal has a 2016 impact factor of 4.081, ranking it 11th out of 92 journals in the category "Toxicology".
References
External links
Monthly journals
Toxicology journals
Oxford University Press academic journals
English-language journals
Academic journals established in 1981
Hybrid open access journals | Toxicological Sciences | [
"Environmental_science"
] | 245 | [
"Toxicology journals",
"Toxicology"
] |
37,384,533 | https://en.wikipedia.org/wiki/IEEE%20P1906.1 | The IEEE P1906.1 - Recommended Practice for Nanoscale and Molecular Communication Framework is a standards working group sponsored by the IEEE Communications Society Standards Development Board whose goal is to develop a common framework for nanoscale and molecular communication. Because this is an emerging technology, the standard is designed to encourage innovation by reaching consensus on a common definition, terminology, framework, goals, metrics, and use-cases that encourage innovation and enable the technology to advance at a faster rate. The draft passed an initial sponsor balloting with comments on January 2, 2015. The comments were addressed by the working group and the resulting draft ballot passed again on August 17, 2015. Finally, additional material regarding SBML was contributed and the final draft passed again on October 15, 2015. The draft standard was approved by IEEE RevCom in the final quarter of 2015.
Membership
Working group membership includes experts in industry and academia with strong backgrounds in mathematical modeling, engineering, physics, economics and biological sciences.
Content
Electronic components such as transistors, or electrical/electromagnetic message carriers whose operation is similar at the macroscale and nanoscale are excluded from the definition. A human-engineered, synthetic component must form part of the system because it is important to avoid standardizing nature or physical processes. The definition of communication, particularly in the area of cell-surface interactions as viewed by biologists versus non-biologists has been a topic of debate. The interface is viewed as a communication channel, whereas the 'receptor-signaling-gene expression' events are the network.
The draft currently comprises: definition, terminology, framework, metrics, use-cases, and reference code (ns-3).
The standard provides a very broad foundation and encompasses all approaches to nanoscale communication. While there have been many superficial academic attempts to classify nanoscale communication approaches, the standard considers two fundamental approaches: waves and particles. This includes any hybrid of the two as well as quasiparticles.
A unique contribution of the standard is an ns-3 reference model that enables users to build upon the standard components.
Definition
A precise definition of nanoscale networking
Academic and industrial researchers have been exploring the concept of nanoscale communication networks, but without a common, well-defined, and precise definition. The IEEE P1906.1 working group has adopted a definitive specification for a nanoscale communication network. The draft standard sets the context of communication within length scales by defining communication length scales ranging from the Planck length scale to relativistic length scales. A focus is upon the progression of physical changes that impact communication as length scale is reduced.
Terminology
Common terminology for nanoscale networking
Nanoscale communication networking is a highly interdisciplinary endeavor. A clear, common language is required so that interdisciplinary researchers can work smoothly together and minimize cross-discipline misunderstanding arising from terms that are defined differently in different fields. The P1906.1 working group has reached consensus on common definitions unique to nanoscale communication networks.
Framework
A framework for ad hoc nanoscale networking
There is a pressing need for a conceptual model of nanoscale networks. A standardized platform for nanoscale communication network simulation is needed. Researchers are developing simulation models and packages for components related to nanoscale communication networks; however the simulation components are not interoperable, even at a conceptual level. The IEEE P1906.1 working group has adopted a nanoscale communication framework that addresses this need. The result of the framework is known as the standard model.
Metrics
Metrics and analytical model are in development
The working group is currently in the process of developing metrics to uniquely characterize nanoscale communication networks. Twenty metrics have been defined:
Message Deliverability
Message Lifetime
Information Density
Bandwidth-Delay Product
Information and Communication Energy
Collision Behavior
Mass Displacement
Positioning Accuracy of Message Carriers
Persistence Length
Diffusive Flux
Langevin Noise
Specificity
Affinity
Sensitivity
Angular (Angle-of-Arrival) Spectrum
Delay (Time-of-Arrival) Spectrum
Active Network Programmability
Perturbation Rate
Supersystem Degradation
Bandwidth-Volume Ratio
Use-Cases
Specific example applications of the standard
Specific use-cases of nanoscale communication implemented using the P1906.1 definition and framework are provided. A standard mapping between a use-case, or implementation, and the standard model of the framework allows a brief summary of the information required about a use-case to understand its relevance to a nanoscale communication network.
Reference model
Reference code to model the recommended practice is in development
ns-3 reference code implementing the developing IEEE P1906.1 recommended practice is currently under development. The communication framework conceived by the P1906.1 working group has been implemented. A simple example highlighting the interaction and the role of each component in electromagnetic-based, diffusion-based, and molecular motor-based communication at the nanoscale has been developed.
Applications
Applications are numerous; however, there appears to be a strong emphasis on medical and biological use-cases in nanomedicine.
Simulation software
The IEEE P1906.1 working group is developing ns-3 nanoscale simulation software that implements the IEEE 1906.1 standard and serves as a reference model and base for development of a wide-variety of interoperable small-scale communication physical layer models.
Literature review
The Best Readings on nanoscale communication networks provides good background information related to the standard. The Topics section breaks down the information using the standard approach.
Building on IEEE 1906.1
IEEE 1906.1 is the foundation for nanoscale communication. Additional standards are expected to build upon it.
IEEE 1906.1.1 Standard Data Model for Nanoscale Communication Systems
The Standard Data Model for Nanoscale Communication Systems defines a network management and configuration data model for nanoscale communication.
This data model has several goals:
Ensure compliance with IEEE 1906.1-2015
Describe the essence of nanoscale communication
Capture fundamental physics of IEEE 1906.1-2015
Define configuration and management of simulation and experimental systems
Provide a self-describing data structure for experimental data.
The data model is written in YANG and will enable remote configuration and operation of nanoscale communication over the Internet using NETCONF.
Notes
References
IEEE P1906.1 - Recommended Practice for Nanoscale and Molecular Communication Framework Project Application Request (PAR).
External links
IEEE P1906.1 - Recommended Practice for Nanoscale and Molecular Communication Framework
Nanotechnology
Computer networking
IEEE standards
Telecommunications engineering | IEEE P1906.1 | [
"Materials_science",
"Technology",
"Engineering"
] | 1,314 | [
"Computer networking",
"Telecommunications engineering",
"Computer engineering",
"Computer standards",
"Materials science",
"Computer science",
"Nanotechnology",
"Electrical engineering",
"IEEE standards"
] |
37,386,981 | https://en.wikipedia.org/wiki/Phyllis%20Clinch | Phyllis E. M. Clinch (12 September 1901 – 19 October 1984) was an Irish botanist most recognised for her work in the field of plant viruses.
Clinch attained her undergraduate degree from University College Dublin in 1923 with first class honours in botany and chemistry. She was then awarded a scholarship and continued to study under Joseph Doyle at University College Dublin, obtaining a master's in 1924. She went on to pursue a PhD in plant physiology, specializing in the biochemistry of Coniferales. In 1929 she became a research assistant for the investigation of plant virus diseases in the department of plant pathology, University College, Dublin. In 1942 she was elected to the Royal Dublin Society's scientific committee, and she later served on its council and as vice-president. She retired in 1971.
Early life
Phyllis Clinch was born on 12 September 1901 in Rathgar, Dublin, to James and Mary Clinch. She was their fourth daughter. She grew up in Rathmines, Dublin with her family.
Education
As a child she attended the Loreto school. She continued her studies at the University College Dublin (UCD) where she studied from 1919 to 1923 and graduated with first-class honors degrees in chemistry and botany. She was at the top of her class, and she received a post-grad scholarship. In 1924 she received her MSc degree through her thesis work in addition to being awarded a research fellowship from the Dublin City Council in 1925. Then, in 1928 she got her PhD by studying the metabolism of conifer leaves. She did the work for her PhD at the Imperial College in London while studying under V. H. Blackman. Throughout the completion of her research (from 1926 to 1928) she published five papers with Joseph Doyle, her supervisor. She then worked with Alexandre Guilliermond studying cytology (the function and structure of cells) from 1928 to 1929 at the Sorbonne in Paris.
Career
In 1929 she became a research assistant for the UCD department of plant pathology. Starting in 1929 she also joined a group at Albert Agricultural College doing research on plant viruses. She published this research, with her colleagues J. B. Loughnane and Paul A. Murphy, in Scientific Proceedings for the Royal Dublin Society, a scientific journal. They published a series of nine papers from 1932 to 1949. Clinch was also published in Nature, an internationally known scientific journal, and Éire, a journal from the Department of Agriculture in Ireland. Then in 1949 she was made a lecturer for the botany department at UCD, and in 1961 she took over for Joseph Doyle as a professor in the department. This made her the first woman professor of botany at the university. Clinch had worked with Doyle to plan the move of the college's botany department to the new campus, and she oversaw the move of the department in 1964. She retired from UCD in 1971.
In 1942 she was elected a member of the scientific committee of the RDS (Royal Dublin Society). She served on the council from 1973 to 1977, and she was the vice-president from 1975 to 1977. In 1949 she was one of the first women to be elected to the RIA (the Royal Irish Academy). Three other women were elected at the same time as her. She was elected to the RIA council in 1973, where she served until 1977, and she served as vice-president from 1975 to 1977.
Awards and recognition
In 1943, Clinch was awarded an honorary Doctorate of Science (D.Sc.), based on the strength of her published work.
On 30 March 1961 she became the first woman to receive the Boyle Medal from the RDS (Royal Dublin Society). The next woman to receive the award was Margaret Murnane, a physicist, who was awarded 50 years later. Clinch received the award in recognition of her publications and scientific work.
In 2016 portraits of the first four women elected to the RIA, including Phyllis Clinch, were hung at the RIA's Academy House. This is the first time that women's portraits had been hung at the Academy House since the RIA was founded 230 years earlier. The portraits were created by Vera Klute, and relatives of the women were present as the portraits were unveiled. The portraits were a result of the Women on the Walls campaign, created by Accenture, which works to promote the careers of women in STEM.
Notable work
In the 1930s Clinch gained international fame for the work that she did to reveal complex viruses in potatoes. She identified symptomless viruses and viruses that damaged potato stocks. Her findings about viruses were then used by the Department of Agriculture to develop potatoes free of diseases for farmers which was beneficial to the Irish potato industry. She later focused on pathogens in sugar beets in addition to identifying six viruses in tomatoes. The Department of Agriculture asked her to identify the agents of disease in tomatoes and sugar beets so that recommendations could be made on how to control the agents.
Miscellanea
According to nephew Paul Clinch, in a brief biosketch by consulting company Accenture, generations of students referred to Clinch by the respectful nickname "Auntie Phyll."
References
1901 births
1984 deaths
20th-century Irish botanists
20th-century Irish women scientists
Academics of University College Dublin
Alumni of University College Dublin
Irish biochemists
Scientists from Dublin (city)
Women biochemists
Irish women botanists | Phyllis Clinch | [
"Chemistry"
] | 1,085 | [
"Biochemists",
"Women biochemists"
] |
37,387,014 | https://en.wikipedia.org/wiki/Voclosporin | Voclosporin, sold under the brand name Lupkynis, is a calcineurin inhibitor used as an immunosuppressant medication for the treatment of lupus nephritis. It is an analog of ciclosporin that has enhanced action against calcineurin and greater metabolic stability.
It was approved for medical use in the United States in January 2021, and in the European Union in September 2022. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Chemistry
Voclosporin is derived chemically from cyclosporine, a frequently prescribed calcineurin inhibitor; structural changes have been made to voclosporin in order to increase its effectiveness, metabolic stability, and safety. Voclosporin and cyclophilin A combine to produce a heterodimeric complex that binds to and inhibits calcineurin, a calcium-dependent phosphatase implicated in cytokine generation and T-cell activation. According to X-ray crystallography, the sidechain modification at amino acid 1 in voclosporin changes how the cyclophilin-voclosporin complex binds to a surface composed of catalytic and regulatory subunits in calcineurin (the "latch region"); this change in binding results in potent inhibition of calcineurin. The major site for voclosporin metabolism is also moved to amino acid 9, where the resultant IM9 metabolite, which is nearly eight times less potent than voclosporin, accounts for 16.7% of all drug-related exposure. By contrast, cyclosporine undergoes extensive metabolism to produce a number of metabolites, including AM1 and AM19, the production of which is greater than that of IM9; in transplant patients, AM1 concentrations and total exposure levels are higher than or comparable to those of cyclosporine, and both AM1 and AM19 have been linked to nephropathy (nephrotoxicity). There should be little competitive inhibition of the parent drug by its less active metabolite due to the low metabolite load associated with voclosporin.
Medical uses
Lupus nephritis is a common form of glomerulonephritis occurring in patients with systemic lupus erythematosus. Lupus nephritis commonly progresses to chronic kidney failure, which places an emphasis on early intervention to improve treatment outcomes. It is a significant risk factor for morbidity and mortality in systemic lupus erythematosus. The management of lupus nephritis comprises immunosuppressive therapy to lessen inflammation and maintain renal function. Guidelines for managing lupus nephritis are provided by Kidney Disease: Improving Global Outcomes (KDIGO) and the European League Against Rheumatism/European Renal Association-European Dialysis and Transplant Association (EULAR/ERA-EDTA). Both sets of recommendations place a strong emphasis on the necessity of lowering proteinuria, a crucial indicator of long-term renal success, and on achieving complete renal response (CRR) as treatment objectives.
Early intervention with voclosporin in combination with kidney response is believed to lead to more positive clinical outcomes for lupus nephritis patients. Thus, voclosporin is used in combination with background immunosuppressive regimen for the treatment of lupus nephritis. Safety has not been established in combination with cyclophosphamide.
Since decreasing proteinuria in the first year of treatment of lupus nephritis is known to be associated with improved long-term outcomes, the available clinical data support the use of voclosporin as first-line therapy in combination with mycophenolate mofetil and low-dose glucocorticosteroid.
Contraindications
Voclosporin may cause fetal harm, and patients who are breastfeeding or plan to breastfeed should not take this medication. Voclosporin is not recommended in patients with a baseline eGFR less than or equal to 45 mL/min/1.73 m2 unless the benefit exceeds the risk. The dose should be reduced if the drug is used in this population, as well as in patients who are hepatically impaired. Avoid the use of live attenuated vaccines while patients are on this medication. Avoid co-administration of voclosporin with other moderate to strong CYP3A4 inhibitors; if co-administration is necessary, reduce the dose of voclosporin. Dosages of P-gp substrate drugs should be reduced if co-administered with voclosporin.
Adverse effects
Voclosporin has a boxed warning for malignancies and serious infections. Patients taking voclosporin along with other immunosuppressants have an increased risk of developing malignancies and serious infections that may lead to hospitalization or death. The most common adverse reactions (>3%) were decreased glomerular filtration rate, hypertension, diarrhea, headache, anemia, cough, urinary tract infection, upper abdominal pain, dyspepsia, alopecia, renal impairment, abdominal pain, mouth ulceration, fatigue, tremor, acute kidney injury, and decreased appetite.
Glomerular filtration rate decrease was the most frequently reported adverse reaction, reported in placebo (11.3 per 100 patient-years), LUPKYNIS 23.7 mg (37.1 per 100 patient-years), and voclosporin 39.5 mg BID (48.7 per 100 patient-years). With LUPKYNIS 23.7 mg BID, decreases in glomerular filtration rate occurred within the first 3 months in 71% of patients, with 78% of those resolved or improved following dose modification, and of those 64% resolved or improved within 1 month. Decreases in glomerular filtration rate resulted in permanent discontinuation of LUPKYNIS in 14% of patients and resolved in 40% within 3 months after treatment discontinuation.
Pharmacology
Voclosporin is a cyclosporin A analog with a modification to one amino acid in the region that allows the drug to bind to calcineurin. Voclosporin inhibits calcineurin, which then blocks the production of IL-2 and T-cell mediated immune responses. As a result of the calcineurin inhibition, podocytes (cells within the kidneys) are stabilized while inflammation is reduced. Reduction of inflammation within the kidneys prevents further renal damage.
Calcineurin inhibitors in lupus nephritis have two separate impacts on calcineurin activity: immunomodulatory effects on T-cells and stabilization of the podocyte. Inhibition of calcineurin in T cells prevents nuclear factor of activated T cells (NFAT) from moving to the nucleus, which reduces the transcription of genes encoding inflammatory cytokines. This decreases lymphocyte proliferation and T-cell mediated responses. By preventing the dephosphorylation of synaptopodin in the podocyte, calcineurin inhibition preserves the cytoskeleton's stabilizing function and lowers proteinuria. Up to 1 mg/kg, voclosporin inhibits calcineurin in a dose-dependent manner with little to no lag time from the time reaching the maximum drug concentration to the time reaching maximum calcineurin inhibition. Voclosporin has been demonstrated to be a strong inhibitor of numerous immunological processes, such as lymphocyte proliferation, T-cell cytokine generation, and T-cell surface antigen expression, in in vitro tests utilizing blood from nonhuman primates. Voclosporin similarly suppressed a variety of T-cell activities in non-human primates in vivo. In these nonclinical investigations, voclosporin was more effective than cyclosporine.
Pharmacokinetics and metabolism
When administered on an empty stomach, the median Tmax of voclosporin is 1.5 hours. The AUC is estimated to be 7693 ng·h/mL and the Cmax is estimated at 955 ng/mL. The volume of distribution is approximately 2,154 L, and the drug distributes into red blood cells. The distribution between the plasma and whole blood is affected by temperature and concentration. The protein binding of voclosporin is 97%. The mean apparent clearance of voclosporin is 63.6 L/h. The drug is mainly metabolized by the hepatic cytochrome enzyme CYP3A4. Pharmacologic activity is mainly attributed to the parent molecule itself, with the major metabolite being 8-fold less potent than the parent drug. Exposure is increased in individuals with severe renal impairment (creatinine clearance [CrCL] <30 mL/min) and in those with mild or moderate hepatic impairment. Population pharmacokinetic analysis in patients with lupus nephritis showed voclosporin to have predictable pharmacokinetics, with no clinically meaningful influence of sex, body weight, age, serum albumin, total bilirubin or eGFR (>45 mL/min/1.73 m2; patients with eGFR ≤45 mL/min/1.73 m2 were excluded from voclosporin clinical trials). Dose adjustment is not required based on these covariates.
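Under a simple one-compartment assumption (a simplification made here for illustration; the figures above come from population pharmacokinetic modelling), the quoted volume of distribution and clearance imply an elimination half-life on the order of a day:

```python
import math

V_D = 2154.0   # L, apparent volume of distribution (figure quoted above)
CL = 63.6      # L/h, apparent clearance (figure quoted above)

# One-compartment approximation: k_e = CL / V_d, t_half = ln(2) / k_e
k_e = CL / V_D
half_life_h = math.log(2) / k_e
print(f"elimination rate constant: {k_e:.4f} per hour")
print(f"estimated half-life: {half_life_h:.1f} hours")
```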
Safety
Since lupus nephritis is a serious, disabling, and possibly life-threatening illness, it is not surprising to see mortality in lupus nephritis clinical trials. Voclosporin safety information originates from a total of 267 patients who received 23.7 mg BID and an additional 88 patients who received 39.5 mg BID. The majority of adverse event categories were more prevalent in voclosporin-treated patients than in placebo-treated patients. Although similar to earlier lupus nephritis trials, the mortality rate (4.9%) was higher in the low-dose voclosporin group; it was noted that the majority of the deaths took place at a small number of locations, and that more patients were randomized to low-dose voclosporin at these locations. A total of 19 patients (3.0%) died across the three published clinical trials with voclosporin in a total of 631 patients with lupus nephritis, all of which had a treatment duration of about a year. This included any deaths that occurred after randomization up to study conclusion. Overall, patients receiving any dose of voclosporin, including 23.7 mg BID and 39.5 mg BID, and patients receiving placebo (6 of 266 patients, 2.3%) experienced a similar rate of death (13 of 365 patients, 3.6%).
Dosage
According to pharmacokinetic-pharmacodynamic modeling at doses up to 64 mg twice daily (BID), a significant association was found between voclosporin blood levels and estimated calcineurin inhibition. The therapeutic dose utilized in lupus nephritis was chosen based on an integrated safety examination of voclosporin in healthy individuals and patients with various diseases (psoriasis, uveitis, and renal transplant) at doses of 0.2-0.6 mg/kg BID. Weight-based dosing was not required in this dosage range, as shown by the linear pharmacokinetic profile, and there was a dose-dependent rise in the frequency and severity of adverse events.
History
Voclosporin was discovered by Isotechnika in the 1990s. Isotechnika was founded in 1993 and merged with Aurinia Pharmaceuticals in 2013. Huizinga led the clinical development program for voclosporin before and after the acquisition—including a shift in the synthesis from one that yielded racemates to one that yields a single trans isomer. Voclosporin is the first oral treatment for lupus nephritis to receive approval in the USA. In January 2021, Aurinia Pharmaceuticals received approval from the Food & Drug Administration to sell the drug Lupkynis.
Society and culture
Legal status
On 21 July 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Lupkynis, intended for the treatment of lupus nephritis. The applicant for this medicinal product is Otsuka Pharmaceutical Netherlands B.V. Voclosporin was approved for medical use in the European Union in September 2022.
References
Immunosuppressants
Peptides
Cyclic peptides
X | Voclosporin | [
"Chemistry"
] | 2,726 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
37,389,962 | https://en.wikipedia.org/wiki/Evaporation%20suppressing%20monolayers | Evaporation suppressing monolayers are materials that when applied to the air/water interface, will spread (or self-assemble) and form a thin film across the surface of the water. The purpose of these materials is to reduce evaporative water loss from dams and reservoirs.
Mechanism
The specific mechanism that underlies monolayer evaporation resistance has been attributed to the physical barrier formed by the presence of these materials on the surface of the water (see figure). The extent to which a material can resist the evaporation of water is best treated on a case-by-case basis, however, a number of empirically derived formulas have been reported. Before the advent of surface spectroscopic techniques such as Brewster Angle Microscopy (BAM) and Glancing Incidence X-Ray reflectrometry (GIXD), it was thought that the intermolecular spacing between monolayer molecules was the largest determinant factor governing evaporation suppression. When the surface density of the monolayer was sufficiently small, water molecules were presumed to pass through the space between molecules. Scattering and imaging results, however, revealed that the intermolecular distance between monolayer molecules was essentially constant, and that evaporation was more likely at domain boundaries.
The factors governing the efficacy of an evaporation suppressing monolayer are the ability to remain tightly packed despite changes in surface pressure, the ability to adhere to the surface of water, and to neighbouring molecules.
History
Irving Langmuir accurately described the geometric structure of a monolayer film on water in 1917, work for which he would later be awarded the Nobel prize in chemistry. The evaporation suppressing properties of these materials were first reported by Rideal in the 1920s. In the 1940s, Langmuir and Schaefer quantified the evaporation resistance and its dependence on temperature. This work was extended by Archer and La Mer in the following decade, who observed a dependence on surface pressure, chain length, monolayer phase, subphase composition and surface temperature. Large scale field trials were being conducted at this time in Australia by Mansfield. He reported that the results seen in the laboratory setting could not be replicated in real world conditions, with dust and wind being cited as adversely affecting evaporation suppressing performance.
In later decades, research efforts focussed on multicomponent monolayer materials such as hexadecanol + octadecanol, altering the number of carbons in the aliphatic chain, and later on, the addition of polymerised surfactants to increase monolayer stability.
Better monolayer materials are required, as are better methods of monolayer distribution, although no resolution of these difficulties presently exists.
Despite research in this area for most of the 20th century, no durable, effective and inexpensive product has come to market. Recently, advances in experimental and modelling techniques have increased the understanding of these materials.
Recent Developments
During a prolonged drought in Australia at the start of this century, scientists from a number of research institutions, including Prof. David Solomon, inventor of the polymer banknote, set about developing a product that is efficacious, resistant to the deleterious effects of wind, and affordable. In addition to small and large scale field trials, new techniques were utilised, including a novel evaporation tank with a controlled climate system to mimic the effects of wind and waves; computational modelling was also employed for the first time to relate dynamic and geometric properties at the atomistic level to evaporation suppressing performance at the macroscopic level.
The use of ethylene glycol monooctadecyl ether was found to substantially reduce evaporation even in the presence of wind, and the addition of a water-soluble polymer further enhanced its effectiveness.
Commercial Products
WaterGuard manufactured by Aquatain advertises a polymer based material that reduces water evaporation.
Other products include Solarpill and Water$aver. The efficacy of these products has not been shown.
See also
Self-assembled monolayer
Monolayers
References
Thin films
Physical chemistry
Articles containing video clips | Evaporation suppressing monolayers | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 828 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Nanotechnology",
"Planes (geometry)",
"Physical chemistry",
"Thin films"
] |
37,390,382 | https://en.wikipedia.org/wiki/Plectania%20rhytidia | Plectania rhytidia is a species of fungus in the family Sarcosomataceae. Originally described under the name Peziza rhytidia by Miles Joseph Berkeley in 1855, the species was transferred to Plectania by mycologists John Axel Nannfeldt and Richard Korf in 1957.
References
External links
Pezizales
Fungi described in 1855
Fungi of Australia
Fungi of Europe
Fungi of New Zealand
Taxa named by Miles Joseph Berkeley
Fungus species | Plectania rhytidia | [
"Biology"
] | 97 | [
"Fungi",
"Fungus species"
] |
23,207,677 | https://en.wikipedia.org/wiki/Elenolic%20acid | Elenolic acid is a component of olive oil, olive infusion and olive leaf extract. It can be considered as a marker for maturation of olives.
Oleuropein, a chemical compound found in olive leaf from the olive tree, together with other closely related compounds such as 10-hydroxyoleuropein, ligstroside and 10-hydroxyligstroside, are tyrosol esters of elenolic acid.
References
Carboxylic acids
Aldehydes
Dihydropyrans
Methyl esters
Olive oil
Olives
Aldehydic acids
Enones | Elenolic acid | [
"Chemistry"
] | 124 | [
"Carboxylic acids",
"Functional groups"
] |
23,208,230 | https://en.wikipedia.org/wiki/Meter%20data%20management | Meter data management (MDM) refers to software that performs long-term data storage and management for the vast quantities of data delivered by smart metering systems. This data consists primarily of usage data and events that are imported from the head-end servers managing the data collection in advanced metering infrastructure (AMI) or automatic meter reading (AMR) systems. MDM is a component in the smart grid infrastructure promoted by utility companies. This may also incorporate meter data analytics, the analysis of data emitted by electric smart meters that record consumption of electric energy.
MDM systems
An MDM system will typically import the data, then validate, cleanse and process it before making it available for billing and analysis.
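A minimal sketch of the validate-and-estimate step, assuming interval reads that may contain gaps; the interpolation rule is illustrative only, not any vendor's VEE algorithm:

```python
def validate_and_estimate(reads):
    """Flag missing interval reads (None) and estimate each interior gap as
    the average of the neighbouring good reads, returning the cleaned series
    and the indices that were estimated."""
    cleaned, estimated = list(reads), []
    for i, value in enumerate(reads):
        if value is None:
            prev = next(v for v in reversed(reads[:i]) if v is not None)
            nxt = next(v for v in reads[i + 1:] if v is not None)
            cleaned[i] = round((prev + nxt) / 2, 3)
            estimated.append(i)
    return cleaned, estimated

print(validate_and_estimate([1.1, 1.3, None, 1.5, 1.4]))
# ([1.1, 1.3, 1.4, 1.5, 1.4], [2]) -> the gap at index 2 was estimated
```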
Products for meter data include:
Smart meter deployment planning and management;
Meter and network asset monitoring and management;
Automated smart meter provisioning (i.e. addition, deletion and updating of meter information at utility and AMR side) and billing cutover;
Integration with meter-to-cash, workforce management, asset management and other systems.
Furthermore, an MDM may provide reporting capabilities for load and demand forecasting, management reports, and customer service metrics.
An MDM provides application programming interfaces (APIs) between the MDM and the multiple destinations that rely on meter data. This is the first step to ensure that consistent processes and 'understanding' get applied to the data. Besides this common functionality, an advanced MDM may provide facilities for remote connect/disconnect of meters, power status verification/power restoration verification, and on-demand reading of remote meters.
Data analysis
Smart meters send usage data to the central head end systems as often as every minute from each meter whether installed at a residential or a commercial or an industrial customer. Utility companies sometimes analyze this voluminous data as well as collect it. Some of the reasons for analysis are
to make efficient energy buying decisions based on the usage patterns,
launching energy efficiency or energy rebate programs,
energy theft detection,
comparing and correcting metering service provider performance, and
detecting and reducing unbilled energy.
This data not only helps utility companies make their businesses more efficient, but also helps consumers save money by using less energy at peak times, so it is both economical and green. Smart meter infrastructure is fairly new to the utilities industry. As utility companies collect more and more data over the years, they may uncover further uses for this detailed smart meter data. Similar analysis can be applied to water and gas as well as electric usage.
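As a small illustration of the usage-pattern analysis described above, the sketch below sums the interval reads that fall inside an assumed evening peak window, per meter; the data and the peak window are hypothetical:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative interval reads: (meter_id, timestamp, kWh consumed in interval)
reads = [
    ("m1", "2023-07-01T17:30", 1.25), ("m1", "2023-07-01T18:00", 1.5),
    ("m2", "2023-07-01T17:30", 0.5),  ("m2", "2023-07-01T18:00", 0.0),
]

PEAK_HOURS = range(17, 21)   # assumed 5 pm to 9 pm peak window

def peak_usage_by_meter(interval_reads):
    """Sum the consumption that falls inside the peak window, per meter."""
    totals = defaultdict(float)
    for meter_id, ts, kwh in interval_reads:
        if datetime.fromisoformat(ts).hour in PEAK_HOURS:
            totals[meter_id] += kwh
    return dict(totals)

print(peak_usage_by_meter(reads))   # {'m1': 2.75, 'm2': 0.5}
```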
According to a 2012 web posting, data that is required for complete meter data analytics may not reside in the same database. Instead, it might reside in disparate databases among various departments of utility companies.
See also
Automatic meter reading
Advanced metering infrastructure
Smart grid
Smart meter
References
Flow meters
Public services
Data analysis | Meter data management | [
"Chemistry",
"Technology",
"Engineering"
] | 568 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
23,209,317 | https://en.wikipedia.org/wiki/Warm%20core%20ring | A warm core ring is a type of mesoscale eddy which forms and breaks off from an ocean current, such as the Gulf Stream or the Kuroshio Current. The ring is an independent circulatory system of warm water that can persist for several months before losing its distinctive identity. Warm core rings can be detected using infrared satellites or by sea surface height anomalies resulting from the thermal expansion of their warm water, and are easily identifiable against the surrounding colder waters. In addition, warm core rings are also distinguished by their low levels of biological activity. This type of system is thought to have helped develop several hurricanes, most notably Hurricane Katrina, into significantly stronger storms due to the abundance of warmer ocean water reaching down to a significant depth, which in turn fuels and intensifies the hurricane. Warm core rings are also known for affecting wildlife, having the capacity to bring wildlife from typically warm waters to areas typically dominated by cold waters.
Formation and Movement
Formation
In general, warm core rings form as a meander of a strong oceanic current. They generally form when a strong meander on an oceanic current creates a "loop" by closing in on the meander, resulting in an independent system.
Movement
Rings will drift to the west-southwest at 3–5 km/day for several months up to a year. The rings always rotate clockwise due to the direction of the Gulf Stream and can reach rotational velocities of up to 1 m/s. Usually warm core rings cannot move onto the continental shelf because they reach deeper than the seafloor on the shelf by over 1000 meters, though they can approach the shelf.
Dissipation
Warm core rings are often reabsorbed by the Gulf Stream, but they can break apart on their own as well if they move onto the continental shelf. The survivability of warm core rings can depend on the region of formation within the Gulf Stream, the season of formation in a year, the latitude of formation, and their proximity to the New England Sea Mount chain.
Detection and Tracking
Warm core rings are easily observed in the Gulf of Mexico or elsewhere through the use of infrared imagery by weather satellites. Since the ocean water temperature of the ring is significantly higher than that of the surrounding waters, these rings show up easily in infrared images. This, coupled with models of ring movement, allows well-developed tracking of the rings. Because warm core rings include warm water to a significant depth, infrared satellites can differentiate the temperature, unlike cold core rings, whose surface waters warm rapidly and which therefore cannot be easily detected in the infrared. Warm core rings are also detected by sea surface height anomalies. Since warm water expands and takes up more space than cold water, the large volume of warm water raises the sea surface, which can be detected by buoys.
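A minimal sketch of the kind of two-signal screening described above, flagging grid cells that are both anomalously warm and anomalously high; the thresholds and toy anomaly fields are illustrative assumptions, not operational values:

```python
import numpy as np

def flag_warm_core_candidates(sst_anomaly, ssh_anomaly,
                              sst_thresh=1.0, ssh_thresh=0.15):
    """Mark grid cells whose sea surface temperature anomaly (degrees C) and
    sea surface height anomaly (m) both exceed the given thresholds --
    warm core rings are warmer and stand higher than the surrounding water."""
    return (sst_anomaly > sst_thresh) & (ssh_anomaly > ssh_thresh)

# Toy 3x3 anomaly fields; the centre cell mimics a ring core.
sst = np.array([[0.2, 0.3, 0.1], [0.4, 2.5, 0.3], [0.1, 0.2, 0.2]])
ssh = np.array([[0.02, 0.04, 0.01], [0.05, 0.30, 0.03], [0.01, 0.02, 0.02]])
print(flag_warm_core_candidates(sst, ssh))   # only the centre cell is flagged
```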
Adverse Effects
Intensification of Hurricanes
Warm core rings have been linked to the intensification of several hurricanes passing over their location. Because high sea surface temperature as well as warmer water at greater depth is the primary intensifier of a hurricane, warm core rings account for tremendous strengthening of these storms.
Notably, Hurricane Opal passed over a ring and had sudden increases of wind speed from 110 miles per hour to 135 miles per hour shortly before landfall, a trend also seen in Hurricane Allen and Hurricane Camille. There is evidence that Hurricane Katrina and Hurricane Rita, both notable storms which reached Category 5 intensity, as well as Hurricane Ivan, were also strengthened by warm core rings.
Effects on Wildlife
Warm core rings typically contain far fewer organisms than the surrounding ocean. When the rings approach continental shelves, coastal currents are affected, which can cause organisms that ordinarily would not be there to drift onto the shelf. In fact, there are accounts of sea turtles and tropical fish that normally live in much warmer waters coming near the coastal shelf due to the deep, warm waters of a warm core ring.
Damages to Offshore Drilling
Due to currents around warm core rings of up to nearly 5 miles per hour, warm core rings can damage offshore oil platforms and increase the risk of accidents.
Larval Transport
Many fish species' life cycles involve two distinct habitats. The adults live in warmer temperate waters south of Cape Hatteras, NC, while the juveniles are found in estuaries of cooler waters north of Cape Hatteras. Warm core rings play an important role in the transport of larvae between the two habitats. Species like the bluefish (Pomatomus saltatrix) and pearly razorfish (Xyrichtys novacula) spawn near the western edge of the Gulf Stream just south of Cape Hatteras. Because of the convergence of the Gulf Stream from the south and a cooler coastal current from the north, most water around Cape Hatteras flows into the Gulf Stream. The larvae released near this convergence are swept into the Gulf Stream and flow north. Since the larvae are planktonic, they don't swim into the center of the Gulf Stream but stay near the western edge. This is beneficial when warm core rings form. Warm core rings are formed when the crest of a meander breaks off from the Gulf Stream. Any larvae in the crest of the meander are then entrapped in the warm core ring. Once the warm core ring breaks away, it takes a southwesterly path towards the coast. The interaction between warm core rings and the continental shelf weakens the ring and enables the larvae to escape and continue their journey to nearby estuaries. The warm core rings formed along the northeastern states can last between 4 and 5 months. During this time the larvae grow, so that by the time they reach the estuaries they are able to swim away from the warm core ring into the estuaries.
See also
References
Weather events
Ocean currents | Warm core ring | [
"Physics",
"Chemistry"
] | 1,157 | [
"Ocean currents",
"Physical phenomena",
"Weather",
"Weather events",
"Fluid dynamics"
] |
23,210,252 | https://en.wikipedia.org/wiki/Nuclear%20power%20proposed%20as%20renewable%20energy | Whether nuclear power should be considered a form of renewable energy is an ongoing subject of debate. Statutory definitions of renewable energy usually exclude many present nuclear energy technologies, with the notable exception of the state of Utah. Dictionary-sourced definitions of renewable energy technologies often omit or explicitly exclude mention of nuclear energy sources, with an exception made for the natural nuclear decay heat generated within the Earth.
The most common fuel used in conventional nuclear fission power stations, uranium-235, is "non-renewable" according to the Energy Information Administration; the organization is, however, silent on recycled MOX fuel. The National Renewable Energy Laboratory does not mention nuclear power in its "energy basics" definition.
In 1987, the Brundtland Commission (WCED) classified fission reactors that produce more fissile nuclear fuel than they consume (breeder reactors, and if developed, fusion power) among conventional renewable energy sources, such as solar power and hydropower. The monitoring and storage of radioactive waste products is also required upon the use of other renewable energy sources, such as geothermal energy.
Definitions of renewable energy
Renewable energy flows involve natural phenomena, which with the exception of tidal power, ultimately derive their energy from the sun (a natural fusion reactor) or from geothermal energy, which is heat derived in greatest part from that which is generated in the earth from the decay of radioactive isotopes. Renewable energy resources exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries.
In ISO 13602-1:2002, a renewable resource is defined as "a natural resource for which the ratio of the creation of the natural resource to the output of that resource from nature to the technosphere is equal to or greater than one".
Conventional fission, breeder reactors as renewable
Nuclear fission reactors are a natural energy phenomenon, having formed naturally on Earth in times past; for example, a natural nuclear fission reactor in present-day Oklo, Gabon, was discovered in the 1970s. It ran for a few hundred thousand years, averaging 100 kW of thermal power during that time.
Conventional, human manufactured, nuclear fission power stations largely use uranium, a common metal found in seawater, and in rocks all over the world, as its primary source of fuel. Uranium-235 "burnt" in conventional reactors, without fuel recycling, is a non-renewable resource, and if used at present rates would eventually be exhausted.
This is somewhat similar to the situation with a commonly classified renewable source, geothermal energy, a form of energy derived from the natural nuclear decay of the large, but nonetheless finite, supply of uranium, thorium and potassium-40 present within the Earth's crust; because of the nuclear decay process, this renewable energy source will also eventually run out of fuel, as, eventually, will the Sun.
Nuclear fission involving breeder reactors, reactors which breed more fissile fuel than they consume and thereby have a breeding ratio for fissile fuel higher than 1, thus has a stronger case for being considered a renewable resource than conventional fission reactors. Breeder reactors would constantly replenish the available supply of nuclear fuel by converting fertile materials, such as uranium-238 and thorium, into fissile isotopes of plutonium or uranium-233, respectively. Fertile materials are also nonrenewable, but their supply on Earth is extremely large, with a supply timeline greater than that of geothermal energy. In a closed nuclear fuel cycle utilizing breeder reactors, nuclear fuel could therefore be considered renewable.
In 1983, physicist Bernard Cohen claimed that fast breeder reactors, fueled exclusively by natural uranium extracted from seawater, could supply energy at least as long as the sun's expected remaining lifespan of five billion years. This was based on calculations involving the geological cycles of erosion, subduction, and uplift, leading to humans consuming half of the total uranium in the Earth's crust at an annual usage rate of 6500 tonne/yr, which was enough to produce approximately 10 times the world's 1983 electricity consumption, and would reduce the concentration of uranium in the seas by 25%, resulting in an increase in the price of uranium of less than 25%.
Advancements at Oak Ridge National Laboratory and the University of Alabama, published in 2012 in a journal of the American Chemical Society, towards the extraction of uranium from seawater have focused on increasing the biodegradability of the materials used and reducing the projected cost of the metal if it were extracted from the sea on an industrial scale. The researchers' improvements include using electrospun shrimp-shell chitin mats that are more effective at absorbing uranium than the prior record-setting Japanese method of using plastic amidoxime nets. As of 2013 only a few kilograms of uranium have been extracted from the ocean in pilot programs, and it is also believed that uranium extracted on an industrial scale from seawater would constantly be replenished by uranium leached from the ocean floor, maintaining the seawater concentration at a stable level. In 2014, with the advances made in the efficiency of seawater uranium extraction, a paper in the Journal of Marine Science & Engineering suggested that, with light water reactors as its target, the process would be economically competitive if implemented on a large scale. In 2016 the global research effort in this field was the subject of a special issue of the journal Industrial & Engineering Chemistry Research.
In 1987, the World Commission on Environment and Development (WCED), an organization independent from, but created by, the United Nations, published Our Common Future, in which a particular subset of presently operating nuclear fission technologies, and nuclear fusion were both classified as renewable. That is, fission reactors that produce more fissile fuel than they consume - breeder reactors, and when it is developed, fusion power, are both classified within the same category as conventional renewable energy sources, such as solar and falling water.
As of 2022, only two breeder reactors are producing industrial quantities of electricity, the BN-600 and BN-800. The retired French Phénix reactor also demonstrated a breeding ratio greater than one and operated for about 30 years; it was producing power when Our Common Future was published in 1987.
To fulfill the conditions required for a nuclear renewable energy concept, one has to explore a combination of processes going from the front end of the nuclear fuel cycle to the fuel production and the energy conversion using specific fluid fuels and reactors, as reported by Degueldre et al. (2019). Extraction of uranium from a diluted fluid ore such as seawater has been studied in various countries worldwide. This extraction should be carried out parsimoniously, as suggested by Degueldre (2017). An extraction rate of kilotons of U per year over centuries would not modify significantly the equilibrium concentration of uranium in the oceans (3.3 ppb). This equilibrium results from the input of 10 kilotons of U per year by river waters and its scavenging on the sea floor from the 1.37 exatons of water in the oceans. For a renewable uranium extraction, the use of a specific biomass material is suggested to adsorb uranium and subsequently other transition metals. The uranium loading on the biomass would be around 100 mg per kg. After the contact time, the loaded material would be dried and burned (CO2 neutral) with heat conversion into electricity. The uranium 'burning' in a molten salt fast reactor helps to optimize the energy conversion by burning all actinide isotopes with an excellent yield, producing a maximum amount of thermal energy from fission and converting it into electricity. This optimisation can be reached by reducing the moderation and the fission product concentration in the liquid fuel/coolant. These effects can be achieved by using a maximum amount of actinides and a minimum amount of alkaline/earth alkaline elements, yielding a harder neutron spectrum. Under these optimal conditions the consumption of natural uranium would be 7 tons per year and per gigawatt (GW) of produced electricity. The coupling of uranium extraction from the sea and its optimal utilisation in a molten salt fast reactor should allow nuclear energy to gain the label renewable. In addition, the amount of seawater used by a nuclear power plant to cool the last coolant fluid and the turbine would be about 2.1 gigatons per year for a fast molten salt reactor, corresponding to 7 tons of natural uranium extractable per year. This practice justifies the label renewable.
Fusion fuel supply
If it is developed, fusion power would provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel itself (primarily deuterium) exists abundantly in the Earth's ocean: about 1 in 6500 hydrogen (H) atoms in seawater (H2O) is deuterium, in the form of HDO (semi-heavy water). Although this may seem a low proportion (about 0.015%), because nuclear fusion reactions are so much more energetic than chemical combustion, and seawater is easier to access and more plentiful than fossil fuels, fusion could potentially supply the world's energy needs for millions of years.
In the deuterium + lithium fusion fuel cycle, the estimated supply lifespan of fusion power is 60 million years, if it is possible to extract all the lithium from seawater, assuming current (2004) world energy consumption. In the second easiest fusion fuel cycle, the deuterium + deuterium burn, if all of the deuterium in seawater were extracted and used, there would be an estimated 150 billion years of fuel, again assuming current (2004) world energy consumption.
Legislation in the United States
If nuclear power were classified as renewable energy (or as low-carbon energy), additional government support would be available in more jurisdictions, and utilities could include nuclear power in their effort to comply with Renewable portfolio standard (RES).
In 2009, the State of Utah passed the "Renewable Energy Development Act", which in part defined nuclear power as a form of renewable energy. In North Carolina, Senate Bill 678 was passed, adding fusion energy as a renewable source of clean energy.
See also
Life-cycle greenhouse gas emissions of energy sources
Non-renewable resource#Nuclear fuels
Uranium mining
Nuclear power debate
Nuclear fusion
Pro-nuclear movement
100% renewable energy
References
Nuclear power
Renewable energy | Nuclear power proposed as renewable energy | [
"Physics"
] | 2,118 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
23,214,113 | https://en.wikipedia.org/wiki/Christian%20M%C3%B8ller | Christian Møller (22 December 1904, Hundslev, Als – 14 January 1980, Ordrup) was a Danish chemist and physicist who made fundamental contributions to the theory of relativity, the theory of gravitation and quantum chemistry. He is known for Møller–Plesset perturbation theory and Møller scattering.
His suggestion in 1938 to Otto Frisch that the newly discovered process of nuclear fission might create surplus energy, led Frisch to conceive of the concept of the nuclear chain reaction, leading to the Frisch–Peierls memorandum, which kick-started the development of nuclear energy through the MAUD Committee and the Manhattan Project.
Møller was the director of the European Organization for Nuclear Research (CERN)'s Theoretical Study Group between 1954 and 1957 and later a member of the same organization's Scientific Policy Committee (1959-1972).
Møller tetrad theory of gravitation
In 1961, Møller showed that a tetrad description of gravitational fields allows a more rational treatment of the energy–momentum complex than in a theory based on the metric tensor alone. The advantage of using tetrads as gravitational variables was connected with the fact that this allowed to construct expressions for the energy-momentum complex which had more satisfactory transformation properties than in a purely metric formulation.
Supporting Chandrasekhar
Subrahmanyan Chandrasekhar came up with his theory of the Chandrasekhar limit for the maximum stable mass of a star in 1930. This calculation was in opposition to Sir Arthur Eddington's theories about stars. Eddington mocked Chandrasekhar on various occasions and frequently campaigned against him at conferences.
In 1935, Møller was the first to write a paper in collaboration with Chandrasekhar criticising Eddington's theory. They wrote "we are quite unable to follow [Eddington's] arguments." They also proceeded to refute Eddington's follow-up paper, showing contradictions in his theory.
Books
The world and the atom, London, 1940.
The theory of relativity, Clarendon Press, Oxford, 1972.
A study in gravitational collapse, Kobenhavn : Munksgaard, 1975.
On the crisis in the theory of gravitation and a possible solution, Kobenhavn : Munksgaard, 1978.
Evidence for gravitational theories (ed.), Academic Press, 1963.
Interview with Dr. Christian Moller by Thomas S. Kuhn at Copenhagen July 29, 1963 Oral History Transcript — Dr. Christian Moller
See also
Born rigidity
Proper reference frame (flat spacetime)
References
20th-century Danish physicists
Danish chemists
Quantum physicists
1904 births
1980 deaths
Relativity theorists
People associated with CERN
People from Gentofte Municipality
Members of the Royal Swedish Academy of Sciences | Christian Møller | [
"Physics"
] | 567 | [
"Quantum mechanics",
"Quantum physicists",
"Relativity theorists",
"Theory of relativity"
] |
23,215,561 | https://en.wikipedia.org/wiki/Gram%20per%20cubic%20centimetre | The gram per cubic centimetre is a unit of density in the CGS system, and is commonly used in chemistry. It is defined by dividing the CGS unit of mass, the gram, by the CGS unit of volume, the cubic centimetre. The official SI symbols are g/cm3, g·cm−3, or g cm−3. It is equivalent to the units gram per millilitre (g/mL) and kilogram per litre (kg/L). The density of water is about 1 g/cm3, since the gram was originally defined as the mass of one cubic centimetre of water at its maximum density, at about 4 °C.
Conversions
1 g/cm3 is equivalent to:
= 1000 g/L (exactly)
= 1000 kg/m3 (exactly)
≈ 62.43 lb/cu ft (approximately)
≈ 0.0361 lb/cu in (approximately)
1 kg/m3 = 0.001 g/cm3 (exactly)
1 lb/cu ft ≈ 0.0160 g/cm3 (approximately)
1 oz/US gal ≈ 0.0075 g/cm3 (approximately)
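Since all of these factors follow from 1 g/cm3 = 1000 kg/m3 together with the exact definitions of the pound and the foot, they are easy to reproduce. The following short Python sketch (illustrative only; the constant names are ad hoc) derives the pound-per-cubic-foot equivalence:

KG_PER_LB = 0.45359237          # exact definition of the pound
M_PER_FT = 0.3048               # exact definition of the foot
LB_PER_CU_FT_IN_KG_PER_M3 = KG_PER_LB / M_PER_FT**3   # about 16.0185
def g_per_cm3_to_lb_per_cu_ft(rho):
    return rho * 1000.0 / LB_PER_CU_FT_IN_KG_PER_M3
print(g_per_cm3_to_lb_per_cu_ft(1.0))   # about 62.43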
See also
Kilogram per cubic metre
References
Units of chemical measurement
Units of density
Centimetre–gram–second system of units | Gram per cubic centimetre | [
"Physics",
"Chemistry",
"Mathematics"
] | 232 | [
"Physical quantities",
"Units of density",
"Quantity",
"Chemical quantities",
"Density",
"Units of chemical measurement",
"Units of measurement"
] |
23,216,258 | https://en.wikipedia.org/wiki/Log%20sum%20inequality | The log sum inequality is used for proving theorems in information theory.
Statement
Let $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$ be nonnegative numbers. Denote the sum of all $a_i$ by $a$ and the sum of all $b_i$ by $b$. The log sum inequality states that
$$\sum_{i=1}^n a_i\log\frac{a_i}{b_i}\geq a\log\frac{a}{b},$$
with equality if and only if $\frac{a_i}{b_i}$ are equal for all $i$, in other words $a_i = c\,b_i$ for all $i$.
(Take $a_i\log\frac{a_i}{b_i}$ to be $0$ if $a_i=0$ and $\infty$ if $a_i>0,\ b_i=0$. These are the limiting values obtained as the relevant number tends to $0$.)
Proof
Notice that after setting $f(x)=x\log x$ we have
$$\sum_{i=1}^n a_i\log\frac{a_i}{b_i} = \sum_{i=1}^n b_i f\left(\frac{a_i}{b_i}\right) = b\sum_{i=1}^n \frac{b_i}{b} f\left(\frac{a_i}{b_i}\right) \geq b\, f\left(\sum_{i=1}^n \frac{b_i}{b}\,\frac{a_i}{b_i}\right) = b\, f\left(\frac{a}{b}\right) = a\log\frac{a}{b},$$
where the inequality follows from Jensen's inequality since $\frac{b_i}{b}\geq 0$, $\sum_i\frac{b_i}{b}=1$, and $f$ is convex.
Generalizations
The inequality remains valid for $n=\infty$, provided that $a<\infty$ and $b<\infty$.
The proof above holds for any function $g$ such that $f(x)=xg(x)$ is convex, such as all continuous non-decreasing functions. Generalizations to non-decreasing functions other than the logarithm are given in Csiszár, 2004.
Another generalization is due to Dannan, Neff and Thiel, who showed that if and are positive real numbers with and , and , then .
Applications
The log sum inequality can be used to prove inequalities in information theory. Gibbs' inequality states that the Kullback-Leibler divergence is non-negative, and equal to zero precisely if its arguments are equal. One proof uses the log sum inequality.
Proof: Let $p(x)$ and $q(x)$ be pmfs. In the log sum inequality, substitute $a_i=p(x_i)$ and $b_i=q(x_i)$ to get
$$\sum_i p(x_i)\log\frac{p(x_i)}{q(x_i)} \geq \left(\sum_i p(x_i)\right)\log\frac{\sum_i p(x_i)}{\sum_i q(x_i)} = 1\cdot\log\frac{1}{1} = 0,$$
with equality if and only if $p(x_i)=q(x_i)$ for all $i$ (as both $p$ and $q$ sum to 1).
The inequality can also prove convexity of Kullback-Leibler divergence.
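As a quick sanity check, the inequality can also be verified numerically. The following Python sketch (illustrative only; it simply samples random nonnegative vectors) checks both the log sum inequality and the resulting non-negativity of the Kullback-Leibler divergence:

import numpy as np
def lhs(a, b):
    # sum_i a_i log(a_i / b_i), with the convention 0 * log 0 = 0
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(a > 0, a * np.log(np.where(a > 0, a / b, 1.0)), 0.0).sum()
def rhs(a, b):
    return np.sum(a) * np.log(np.sum(a) / np.sum(b))
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.random(5), rng.random(5)
    assert lhs(a, b) >= rhs(a, b) - 1e-12
    p, q = a / a.sum(), b / b.sum()
    assert lhs(p, q) >= -1e-12   # Kullback-Leibler divergence D(p||q) >= 0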
Notes
References
T.S. Han, K. Kobayashi, Mathematics of information and coding. American Mathematical Society, 2001.
Information Theory course materials, Utah State University. Retrieved on 2009-06-14.
Inequalities
Information theory
Articles containing proofs | Log sum inequality | [
"Mathematics",
"Technology",
"Engineering"
] | 391 | [
"Telecommunications engineering",
"Applied mathematics",
"Binary relations",
"Computer science",
"Information theory",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
29,320,146 | https://en.wikipedia.org/wiki/Event%20horizon | In astrophysics, an event horizon is a boundary beyond which events cannot affect an outside observer. Wolfgang Rindler coined the term in the 1950s.
In 1784, John Michell proposed that gravity can be strong enough in the vicinity of massive compact objects that even light cannot escape. At that time, the Newtonian theory of gravitation and the so-called corpuscular theory of light were dominant. In these theories, if the escape velocity of the gravitational influence of a massive object exceeds the speed of light, then light originating inside or from it can escape temporarily but will return. In 1958, David Finkelstein used general relativity to introduce a stricter definition of a local black hole event horizon as a boundary beyond which events of any kind cannot affect an outside observer, leading to information and firewall paradoxes, encouraging the re-examination of the concept of local event horizons and the notion of black holes. Several theories were subsequently developed, some with and some without event horizons. One of the leading developers of theories to describe black holes, Stephen Hawking, suggested that an apparent horizon should be used instead of an event horizon, saying, "Gravitational collapse produces apparent horizons but no event horizons." He eventually concluded that "the absence of event horizons means that there are no black holes – in the sense of regimes from which light can't escape to infinity."
Any object approaching the horizon from the observer's side appears to slow down, never quite crossing the horizon. Due to gravitational redshift, its image reddens over time as the object moves closer to the horizon.
In an expanding universe, the speed of expansion reaches — and even exceeds — the speed of light, preventing signals from traveling to some regions. A cosmic event horizon is a real event horizon because it affects all kinds of signals, including gravitational waves, which travel at the speed of light.
More specific horizon types include the related but distinct absolute and apparent horizons found around a black hole. Other distinct types include:
The Cauchy and Killing horizons.
The photon spheres and ergospheres of the Kerr solution.
Particle and cosmological horizons relevant to cosmology.
Isolated and dynamical horizons, which are important in current black hole research.
Cosmic event horizon
In cosmology, the event horizon of the observable universe is the largest comoving distance from which light emitted now can ever reach the observer in the future. This differs from the concept of the particle horizon, which represents the largest comoving distance from which light emitted in the past could reach the observer at a given time. For events that occur beyond that distance, light has not had enough time to reach our location, even if it was emitted at the time the universe began. The evolution of the particle horizon with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, parts of the universe will never be observable, no matter how long the observer waits for the light from those regions to arrive. The boundary beyond which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.
The criterion for determining whether a particle horizon for the universe exists is as follows. Define a comoving distance dp as
$$d_p = \int_{t_0}^{t_{\max}} \frac{c}{a(t)}\,dt.$$
In this equation, a is the scale factor, c is the speed of light, and t0 is the age of the Universe. If $d_p\to\infty$ as $t_{\max}\to\infty$ (i.e., points arbitrarily far away can be observed), then no event horizon exists. If $d_p$ remains finite in this limit, a horizon is present.
Examples of cosmological models without an event horizon are universes dominated by matter or by radiation. An example of a cosmological model with an event horizon is a universe dominated by the cosmological constant (a de Sitter universe).
A calculation of the speeds of the cosmological event and particle horizons was given in a paper on the FLRW cosmological model, approximating the Universe as composed of non-interacting constituents, each one being a perfect fluid.
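The convergence (or not) of the integral above is easy to check numerically for simple scale factors. The following Python sketch (illustrative only, with arbitrary units c = 1 and t0 = 1) contrasts a matter-dominated model, where the integral grows without bound and no event horizon exists, with a de Sitter model, where it converges:

import numpy as np
from scipy.integrate import quad
c, t0 = 1.0, 1.0
d_matter = lambda tmax: quad(lambda t: c / t**(2.0 / 3.0), t0, tmax)[0]    # a(t) ~ t^(2/3)
H = 1.0
d_desitter = lambda tmax: quad(lambda t: c * np.exp(-H * t), t0, tmax)[0]  # a(t) ~ exp(H t)
for tmax in (10.0, 100.0, 1000.0):
    print(tmax, d_matter(tmax), d_desitter(tmax))
# d_matter keeps growing with tmax; d_desitter approaches (c/H) exp(-H t0), about 0.368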
Apparent horizon of an accelerated particle
If a particle is moving at a constant velocity in a non-expanding universe free of gravitational fields, any event that occurs in that Universe will eventually be observable by the particle, because the forward light cones from these events intersect the particle's world line. On the other hand, if the particle is accelerating, in some situations light cones from some events never intersect the particle's world line. Under these conditions, an apparent horizon is present in the particle's (accelerating) reference frame, representing a boundary beyond which events are unobservable.
For example, this occurs with a uniformly accelerated particle. A spacetime diagram of this situation is shown in the figure to the right. As the particle accelerates, it approaches, but never reaches, the speed of light with respect to its original reference frame. On the spacetime diagram, its path is a hyperbola, which asymptotically approaches a 45-degree line (the path of a light ray). An event whose light cone's edge is this asymptote or is farther away than this asymptote can never be observed by the accelerating particle. In the particle's reference frame, there is a boundary behind it from which no signals can escape (an apparent horizon). The distance to this boundary is given by $c^2/\alpha$, where $\alpha$ is the constant proper acceleration of the particle.
While approximations of this type of situation can occur in the real world (in particle accelerators, for example), a true event horizon is never present, as this requires the particle to be accelerated indefinitely (requiring arbitrarily large amounts of energy and an arbitrarily large apparatus).
Interacting with a cosmic horizon
In the case of a horizon perceived by a uniformly accelerating observer in empty space, the horizon seems to remain a fixed distance from the observer no matter how its surroundings move. Varying the observer's acceleration may cause the horizon to appear to move over time or may prevent an event horizon from existing, depending on the acceleration function chosen. The observer never touches the horizon and never passes a location where it appeared to be.
In the case of a horizon perceived by an occupant of a de Sitter universe, the horizon always appears to be a fixed distance away for a non-accelerating observer. It is never contacted, even by an accelerating observer.
Event horizon of a black hole
One of the best-known examples of an event horizon derives from general relativity's description of a black hole, a celestial object so dense that no nearby matter or radiation can escape its gravitational field. Often, this is described as the boundary within which the black hole's escape velocity is greater than the speed of light. However, a more detailed description is that within this horizon, all lightlike paths (paths that light could take) (and hence all paths in the forward light cones of particles within the horizon) are warped so as to fall farther into the hole. Once a particle is inside the horizon, moving into the hole is as inevitable as moving forward in time – no matter in what direction the particle is travelling – and can be thought of as equivalent to doing so, depending on the spacetime coordinate system used.
The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body that fits inside this radius (although a rotating black hole operates slightly differently). The Schwarzschild radius of an object is proportional to its mass. Theoretically, any amount of matter will become a black hole if compressed into a space that fits within its corresponding Schwarzschild radius. For the mass of the Sun, this radius is approximately 3 km; for Earth, it is about 9 mm. In practice, however, neither Earth nor the Sun have the necessary mass (and, therefore, the necessary gravitational force) to overcome electron and neutron degeneracy pressure. The minimal mass required for a star to collapse beyond these pressures is the Tolman–Oppenheimer–Volkoff limit, which is approximately three solar masses.
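The proportionality is r_s = 2GM/c^2, so the quoted figures are straightforward to reproduce. A short Python sketch (illustrative only, using rounded values of the physical constants):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
r_s = lambda M: 2 * G * M / c**2
print(r_s(1.989e30))   # Sun:   about 2.95e3 m, roughly 3 km
print(r_s(5.972e24))   # Earth: about 8.9e-3 m, roughly 9 mm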
According to the fundamental gravitational collapse models, an event horizon forms before the singularity of a black hole. If all the stars in the Milky Way would gradually aggregate towards the galactic center while keeping their proportionate distances from each other, they will all fall within their joint Schwarzschild radius long before they are forced to collide. Up to the collapse in the far future, observers in a galaxy surrounded by an event horizon would proceed with their lives normally.
Black hole event horizons are widely misunderstood. Common, although erroneous, is the notion that black holes "vacuum up" material in their neighborhood, where in fact they are no more capable of seeking out material to consume than any other gravitational attractor. As with any mass in the universe, matter must come within its gravitational scope for the possibility to exist of capture or consolidation with any other mass. Equally common is the idea that matter can be observed falling into a black hole. This is not possible. Astronomers can detect only accretion disks around black holes, where material moves with such speed that friction creates high-energy radiation that can be detected (similarly, some matter from these accretion disks is forced out along the axis of spin of the black hole, creating visible jets when these streams interact with matter such as interstellar gas or when they happen to be aimed directly at Earth). Furthermore, a distant observer will never actually see something reach the horizon. Instead, while approaching the hole, the object will seem to go ever more slowly, while any light it emits will be further and further redshifted.
Topologically, the event horizon is defined from the causal structure as the past null cone of future conformal timelike infinity. A black hole event horizon is teleological in nature, meaning that it is determined by future causes. More precisely, one would need to know the entire history of the universe and all the way into the infinite future to determine the presence of an event horizon, which is not possible for quasilocal observers (not even in principle). In other words, there is no experiment and/or measurement that can be performed within a finite-size region of spacetime and within a finite time interval that answers the question of whether or not an event horizon exists. Because of the purely theoretical nature of the event horizon, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculated boundary in a finite amount of its proper time.
Interacting with black hole horizons
A misconception concerning event horizons, especially black hole event horizons, is that they represent an immutable surface that destroys objects that approach them. In practice, all event horizons appear to be some distance away from any observer, and objects sent towards an event horizon never appear to cross it from the sending observer's point of view (as the horizon-crossing event's light cone never intersects the observer's world line). Attempting to make an object near the horizon remain stationary with respect to an observer requires applying a force whose magnitude increases unboundedly (becoming infinite) the closer it gets.
In the case of the horizon around a black hole, observers stationary with respect to a distant object will all agree on where the horizon is. While this seems to allow an observer lowered towards the hole on a rope (or rod) to contact the horizon, in practice this cannot be done. The proper distance to the horizon is finite, so the length of rope needed would be finite as well, but if the rope were lowered slowly (so that each point on the rope was approximately at rest in Schwarzschild coordinates), the proper acceleration (G-force) experienced by points on the rope closer and closer to the horizon would approach infinity, so the rope would be torn apart. If the rope is lowered quickly (perhaps even in freefall), then indeed the observer at the bottom of the rope can touch and even cross the event horizon. But once this happens it is impossible to pull the bottom of rope back out of the event horizon, since if the rope is pulled taut, the forces along the rope increase without bound as they approach the event horizon and at some point the rope must break. Furthermore, the break must occur not at the event horizon, but at a point where the second observer can observe it.
Assuming that the possible apparent horizon is far inside the event horizon, or there is none, observers crossing a black hole event horizon would not actually see or feel anything special happen at that moment. In terms of visual appearance, observers who fall into the hole perceive the eventual apparent horizon as a black impermeable area enclosing the singularity. Other objects that had entered the horizon area along the same radial path but at an earlier time would appear below the observer, as long as they have not yet entered the apparent horizon, and the two could exchange messages. Increasing tidal forces are also locally noticeable, their strength depending on the mass of the black hole. In realistic stellar black holes, spaghettification occurs early: tidal forces tear materials apart well before the event horizon. However, in supermassive black holes, which are found in centers of galaxies, spaghettification occurs inside the event horizon. A human astronaut would survive the fall through an event horizon only in a black hole with a mass of approximately 10,000 solar masses or greater.
Beyond general relativity
A cosmic event horizon is commonly accepted as a real event horizon, whereas the description of a local black hole event horizon given by general relativity is found to be incomplete and controversial. When the conditions under which local event horizons occur are modeled using a more comprehensive picture of the way the Universe works, that includes both relativity and quantum mechanics, local event horizons are expected to have properties that are different from those predicted using general relativity alone.
At present, it is expected by the Hawking radiation mechanism that the primary impact of quantum effects is for event horizons to possess a temperature and so emit radiation. For black holes, this manifests as Hawking radiation, and the larger question of how the black hole possesses a temperature is part of the topic of black hole thermodynamics. For accelerating particles, this manifests as the Unruh effect, which causes space around the particle to appear to be filled with matter and radiation.
According to the controversial black hole firewall hypothesis, matter falling into a black hole would be burned to a crisp by a high energy "firewall" at the event horizon.
An alternative is provided by the complementarity principle, according to which, in the chart of the far observer, infalling matter is thermalized at the horizon and reemitted as Hawking radiation, while in the chart of an infalling observer matter continues undisturbed through the inner region and is destroyed at the singularity. This hypothesis does not violate the no-cloning theorem as there is a single copy of the information according to any given observer. Black hole complementarity is actually suggested by the scaling laws of strings approaching the event horizon, suggesting that in the Schwarzschild chart they stretch to cover the horizon and thermalize into a Planck length-thick membrane.
A complete description of local event horizons generated by gravity is expected to, at minimum, require a theory of quantum gravity. One such candidate theory is M-theory. Another such candidate theory is loop quantum gravity.
See also
Abraham–Lorentz force
Acoustic metric
Beyond black holes
Black hole electron
Black hole starship
Cosmic censorship hypothesis
Dynamical horizon
Event Horizon Telescope
Hawking radiation
Kugelblitz (astrophysics)
Micro black hole
Rindler coordinates
Notes
References
Further reading
Concepts in astrophysics
Black holes
General relativity
Physical phenomena | Event horizon | [
"Physics",
"Astronomy"
] | 3,170 | [
"Black holes",
"Physical phenomena",
"Concepts in astrophysics",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"General relativity",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
29,321,988 | https://en.wikipedia.org/wiki/Short%20supermultiplet | In theoretical physics, a short supermultiplet is a supermultiplet, i.e. a representation of the supersymmetry algebra, whose dimension is smaller than $2^{N/2}$, where $N$ is the number of real supercharges. The representations that saturate the bound are known as the long supermultiplets.
The states in a long supermultiplet may be produced from a representative by the action of the lowering and raising operators, assuming that for any basis vector, either the lowering operator or its conjugate raising operator produce a new nonzero state. This is the reason for the dimension indicated above. On the other hand, the short supermultiplets admit a subset of supercharges that annihilate the whole representation. That is why the short supermultiplets contain the BPS states, another description of the same concept.
The BPS states are only possible for objects that are either massless or massive extremal, i.e. carrying a maximum allowed value of some central charges.
Supersymmetry | Short supermultiplet | [
"Physics"
] | 220 | [
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum physics stubs",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
24,693,669 | https://en.wikipedia.org/wiki/Lanthanum%20barium%20copper%20oxide | Lanthanum barium copper oxide, or LBCO, is an inorganic compound with the formula CuBa0.15La1.85O4. It is a black solid produced by heating an intimate mixture of barium oxide, copper(II) oxide, and lanthanum oxide in the presence of oxygen. The material was discovered in 1986 and was the first high temperature superconductor. Johannes Georg Bednorz and K. Alex Müller shared the 1987 Nobel Prize in physics for the discovery that this material exhibits superconductivity at a then unusually high temperature of about 35 K. This finding led to intense and fruitful efforts to generate other cuprate superconductors.
Lanthanum barium copper oxide is related to the far simpler compound lanthanum cuprate, which has a similar structure. In lanthanum barium copper oxide, some of the La(III) centers are replaced by Ba(II), which has a similar ionic radius. This Ba-for-La replacement causes removal of some electrons (hole doping) from the d-band associated with the sheets of copper oxide. As a function of such chemical doping, the system changes its ground state from Mott insulating to superconducting. Before reaching optimal doping (around x = 0.16) in La2−xBaxCuO4, superconductivity is partially suppressed at the specific doping x = 1/8 due to the emergence of charge stripe order.
References
High-temperature superconductors
Lanthanum compounds
Barium compounds
Copper compounds
Oxides | Lanthanum barium copper oxide | [
"Physics",
"Chemistry",
"Materials_science"
] | 317 | [
"Materials science stubs",
"Oxides",
"Salts",
"Condensed matter physics",
"Condensed matter stubs"
] |
24,698,826 | https://en.wikipedia.org/wiki/Reprocessed%20uranium | Reprocessed uranium (RepU) is the uranium recovered from nuclear reprocessing, as done commercially in France, the UK and Japan and by nuclear weapons states' military plutonium production programs. This uranium makes up the bulk of the material separated during reprocessing.
Commercial LWR spent nuclear fuel contains on average (excluding cladding) only four percent plutonium, minor actinides and fission products by weight. Despite it often containing more fissile material than natural uranium, reuse of reprocessed uranium has not been common because of low prices in the uranium market of recent decades, and because it contains undesirable isotopes of uranium.
Given sufficiently high uranium prices, it is feasible for reprocessed uranium to be re-enriched and reused. It requires a higher enrichment level than natural uranium to compensate for its higher levels of 236U, which is lighter than 238U and therefore concentrates in the enriched product. As enrichment concentrates lighter isotopes on the "enriched" side and heavier isotopes on the "depleted" side, 234U will inevitably be enriched slightly more strongly than 235U, which is a negligible effect in a once-through fuel cycle due to the low (55 ppm) share of 234U in natural uranium but can become relevant after successive passes through an enrichment-burnup-reprocessing-enrichment cycle, depending on enrichment and burnup characteristics. 234U readily absorbs thermal neutrons and converts to fissile 235U, which needs to be taken into account if it reaches significant proportions of the fuel material. If 235U interacts with a fast neutron there is a chance of a (n,2n) "knockout" reaction producing 234U. Depending on the characteristics of the reactor and burnup, this can be a larger source of 234U in spent fuel than enrichment.
If fast breeder reactors ever come into widespread commercial use, reprocessed uranium, like depleted uranium, will be usable in their breeding blankets.
There have been some studies involving the use of reprocessed uranium in CANDU reactors. CANDU is designed to use natural uranium as fuel; the 235U content remaining in spent PWR/BWR fuel is typically greater than that found in natural uranium, which is about 0.72% 235U, allowing the re-enrichment step to be skipped. Fuel cycle tests also have included the DUPIC (Direct Use of spent PWR fuel In CANDU) fuel cycle, where used fuel from a pressurized water reactor (PWR) is packaged into a CANDU fuel bundle with only physical reprocessing (cut into pieces) but no chemical reprocessing. Opening the cladding inevitably releases volatile fission products like xenon, tritium or krypton-85. Some variations of the DUPIC fuel cycle make deliberate use of this by including a voloxidation step whereby the fuel is heated to drive off semi-volatile fission products or subjected to one or more reduction / oxidation cycles to transform nonvolatile oxides into volatile native elements and vice versa.
The direct use of recovered uranium to fuel a CANDU reactor was first demonstrated at Qinshan Nuclear Power Plant in China. The first use of re-enriched uranium in a commercial LWR was in 1994 at the Cruas Nuclear Power Plant in France.
In 2020, France, one of the countries with the biggest reprocessing capacity, held a stock of of reprocessed uranium, up from in 2010. Every year France processes of spent fuel into reactor grade plutonium (for immediate further processing into MOX fuel) and of reprocessed uranium which is largely stockpiled. There are provisions in place for the storage of this reprocessed uranium for up to 250 years for potential future use. Given France's domestic uranium enrichment capabilities, this stockpile constitutes a strategic reserve for the case of a major disruption of uranium supply as France does not have domestic uranium mining.
References
Further reading
Advanced Fuel Cycle Cost Basis - Idaho National Laboratory
Module K2 Aqueously Reprocessed Uranium Conversion and Disposition
Module K3 Pyrochemically/Pyrometallurgically Reprocessed Uranium Conversion and Disposition
Nuclear materials
Nuclear reprocessing
Uranium | Reprocessed uranium | [
"Physics"
] | 854 | [
"Materials",
"Nuclear materials",
"Matter"
] |
44,529,150 | https://en.wikipedia.org/wiki/Multirate%20filter%20bank%20and%20multidimensional%20directional%20filter%20banks | This article provides a short survey of the concepts, principles and applications of Multirate filter banks and Multidimensional Directional filter banks.
Multirate systems
Linear time-invariant systems typically operate at a single sampling rate, which means that we have the same sampling rate at input and output. In other words, in an LTI system, the sampling rate would not change in the system.
Systems that use different sampling rates at different stages are called multirate systems. A multirate system can use whatever sampling rates are required at each stage.
Multirate systems can also change the sampling rate without destroying the signal components of interest.
In Figure 1, you can see a block diagram of a two channel multirate system.
Multirate filter bank
A multirate filter bank divides a signal into a number of subbands, which can be analysed at different rates corresponding to the bandwidth of the frequency bands. One important fact in multirate filtering is that the signal should be filtered before decimation, otherwise aliasing and frequency folding would occur.
Multirate filter designs
Multirate filter design makes use of properties of decimation and interpolation (or expansion) in the design implementation of the filter.
Decimation or downsampling by a factor of M essentially means keeping every Mth sample of a given sequence.
Decimation, interpolation, and modulation
Generally speaking, using decimation is very common in multirate filter designs.
In the second step, after using decimation, interpolation will be used to restore the sampling rate.
The advantage of using decimators and interpolator is that they can reduce the computations when resulting in a lower sampling rate.
Decimation by a factor of M can be mathematically defined as:
$$y[n] = x[nM],$$
or equivalently, in the frequency domain,
$$Y(e^{j\omega}) = \frac{1}{M}\sum_{k=0}^{M-1} X\!\left(e^{j(\omega-2\pi k)/M}\right).$$
Expansion or upsampling by a factor of M means that we insert M-1 zeros between each sample of a given signal or a sequence.
The expansion by a factor of M can be mathematically explained as:
$$y[n] = \begin{cases} x[n/M], & n \text{ a multiple of } M\\ 0, & \text{otherwise,}\end{cases}$$
or equivalently, $Y(z) = X(z^M)$.
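Both operations are one-liners in practice. The following Python sketch (illustrative only; note that decimation without a preceding anti-aliasing filter will alias, as discussed above) shows the two building blocks:

import numpy as np
def decimate(x, M):
    # keep every M-th sample: y[n] = x[nM]
    return np.asarray(x)[::M]
def expand(x, M):
    # insert M-1 zeros between samples: y[n] = x[n/M] if M divides n, else 0
    x = np.asarray(x)
    y = np.zeros(len(x) * M, dtype=x.dtype)
    y[::M] = x
    return y
x = np.arange(8)
print(decimate(x, 2))              # [0 2 4 6]
print(expand(decimate(x, 2), 2))   # [0 0 2 0 4 0 6 0]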
Modulation is needed for different kinds of filter designs.
For instance, in many communication applications we need to shift the signal of interest down to baseband.
After lowpass filtering the baseband signal, we use modulation to shift the baseband signal back up to the center frequency of the bandpass filter.
Here we provide two examples of designing multirate narrow lowpass and narrow bandpass filters.
Narrow lowpass filter
We can define a narrow lowpass filter as a lowpass filter with a narrow passband.
In order to create a multirate narrow lowpass FIR filter, we need to replace the time invariant FIR filter with a lowpass antialiasing filter and use a decimator along with an interpolator and lowpass anti-imaging filter.
In this way the resulting multirate system would be a time varying linear phase filter via the decimator and interpolator.
This process is explained in block diagram form where Figure 2 (a) is replaced by Figure 2(b).
The lowpass filter consists of two polyphase filters, one for the decimator and one for the interpolator.
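A minimal sketch of this decimate-filter-interpolate structure is shown below in Python, using SciPy's firwin and upfirdn helpers (the factor M, filter lengths and cutoffs are assumed for illustration and are not taken from the text):

from scipy.signal import firwin, upfirdn
M = 8                              # decimation/interpolation factor (assumed)
h_dec = firwin(129, 1.0 / M)       # anti-aliasing lowpass, cutoff near pi/M
h_int = M * firwin(129, 1.0 / M)   # anti-imaging lowpass (gain M restores amplitude)
def narrow_lowpass(x):
    low_rate = upfirdn(h_dec, x, up=1, down=M)     # filter, then decimate by M
    return upfirdn(h_int, low_rate, up=M, down=1)  # interpolate by M, then filter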
Multirate filter bank
Filter banks are used in many areas, such as signal and image compression and processing.
Their main use is to divide a signal or system into several separate frequency bands.
A filter bank divides the input signal $x(n)$ into a set of signals $x_1(n), x_2(n), \ldots, x_K(n)$. In this way each of the generated signals corresponds to a different region in the spectrum of $x(n)$.
These regions may or may not overlap, depending on the application.
Figure 4 shows an example of a three-band filter bank.
The signals $x_i(n)$ can be generated by a collection of bandpass filters with bandwidths $BW_i$ and center frequencies $f_{c_i}$, respectively.
A multirate filter bank uses a single input signal and produces multiple output signals by filtering and subsampling.
In order to split the input signal into two or more signals (see Figure 5) an analysis-synthesis system can be used .
In Figure 5, only 4 sub-signals are used.
The signal is split with the help of four analysis filters $H_k(z)$ for k = 0, 1, 2, 3 into 4 bands of the same bandwidth (in the analysis bank), and each sub-signal is then decimated by a factor of 4.
Each band thus carries different signal characteristics.
In the synthesis section the filter bank reconstructs the original signal:
first, the 4 sub-signals at the output of the processing unit are upsampled by a factor of 4 and then filtered by 4 synthesis filters $F_k(z)$ for k = 0, 1, 2, 3.
Finally, the outputs of these four filters are added.
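To make the analysis-synthesis idea concrete, here is a minimal Python sketch using a two-channel Haar quadrature mirror pair (illustrative only; a four-band bank like the one in Figure 5 can be obtained by cascading such two-channel stages in a tree). It verifies perfect reconstruction numerically:

import numpy as np
def haar_analysis(x):
    # split into lowpass/highpass subbands, each decimated by 2
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi
def haar_synthesis(lo, hi):
    # upsample, filter and add; exactly reconstructs the input
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x
x = np.random.randn(64)
lo, hi = haar_analysis(x)
assert np.allclose(haar_synthesis(lo, hi), x)   # perfect reconstruction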
Multidimensional filter banks
Multidimensional Filtering, downsampling, and upsampling are the main parts of multidimensional multirate systems and filter banks.
A complete filter bank consists of the analysis and synthesis sides. The analysis filter bank divides an input signal to different subbands with different frequency spectra. The synthesis part reassembles the different subband signals and generates a reconstructed signal.
Two of the basic building blocks are the decimator and the expander. As illustrated in Figure 6, the input is divided into four directional subbands, each covering one of the wedge-shaped frequency regions. In 1D systems, M-fold decimators keep only those samples that are at multiples of M and discard the rest, while in multidimensional systems the decimator is a D × D nonsingular integer matrix; it keeps only those samples that lie on the lattice generated by the decimator. A commonly used decimator is the quincunx decimator, whose lattice is generated by the quincunx matrix, defined for example by Q = [[1, 1], [1, −1]]. A quincunx lattice generated by this matrix is shown in the figure.
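The following Python sketch (illustrative only) shows which samples of a 2D signal the quincunx decimator retains, namely the points whose index sum is even, which is exactly the lattice generated by Q = [[1, 1], [1, −1]]:

import numpy as np
def quincunx_mask(shape):
    # True at samples on the quincunx lattice (n1 + n2 even)
    n1, n2 = np.indices(shape)
    return (n1 + n2) % 2 == 0
x = np.arange(36.0).reshape(6, 6)
kept = np.where(quincunx_mask(x.shape), x, 0.0)   # off-lattice samples zeroed
# re-indexing the kept samples by m = Q^{-1} n maps them onto a rectangular grid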
It is important to analyze filter banks from a frequency domain perspective in terms of subband decomposition and reconstruction. However, equally important is a Hilbert space interpretation of filter banks, which plays a key role in geometrical signal representations.
For a generic K-channel filter bank, with analysis filters $h_k[n]$, synthesis filters $g_k[n]$, and sampling matrices $M_k$:
In the analysis side, we can define vectors in $\ell^2(\mathbb{Z}^d)$ as
$$\varphi_{k,m}[n] = h_k[M_k m - n],$$ each indexed by two parameters: $1\leq k\leq K$ and $m\in\mathbb{Z}^d$.
Similarly, for the synthesis filters we can define $\psi_{k,m}[n] = g_k[n - M_k m]$.
Considering the definition of the analysis/synthesis sides we can verify that
$$c_k[m] = \langle x[\,\cdot\,],\ \varphi_{k,m}[\,\cdot\,]\rangle$$ and for the reconstruction part:
$$\hat{x}[n] = \sum_{k=1}^{K}\sum_{m\in\mathbb{Z}^d} c_k[m]\,\psi_{k,m}[n].$$
In other words, the analysis filter bank calculates the inner products of the input signal with the vectors from the analysis set. Moreover, the reconstructed signal is a combination of the vectors from the synthesis set, with the computed inner products as combination coefficients.
If there is no loss in the decomposition and the subsequent reconstruction, the filter bank is called perfect reconstruction (in that case we would have $x[n]=\hat{x}[n]$).
Multidimensional filter banks design
1-D filter banks have been well developed to date. However, many signals, such as images, video, 3D sound, radar and sonar, are multidimensional, and require the design of multidimensional filter banks.
With the fast development of communication technology, signal processing systems need more room to store data during processing, transmission and reception. In order to reduce the amount of data to be processed, save storage and lower the complexity, multirate sampling techniques were introduced to achieve these goals. Filter banks can be used in various areas, such as image coding, voice coding and radar.
Many 1D filter issues have been well studied, and researchers have proposed many 1D filter bank design approaches. But there are still many multidimensional filter bank design problems that need to be solved. Some methods may not reconstruct the signal well; others are complex and hard to implement.
Design of separable filter bank
The simplest approach to design a multi-dimensional filter banks is to cascade 1D filter banks in the form of a tree structure where the decimation matrix is diagonal and data is processed in each dimension separately. Such systems are referred to as separable systems.
Design of non-separable multidimensional filter banks
Below are several approaches on the design of multidimensional filter banks.
2-channel multidimensional perfect reconstruction (PR) filter banks
In real life, we always want to reconstruct the divided signal back to the original one, which makes PR filter banks very important.
Let H(z) be the transfer function of a filter. The size of the filter is defined as the order of corresponding polynomial in every dimension. The symmetry or anti-symmetry of a polynomial determines the linear phase property of the corresponding filter and is related to its size.
Like the 1D case, the aliasing term A(z) and transfer function T(z) for a 2 channel filter bank are:
A(z) = 1/2 (H0(-z) F0(z) + H1(-z) F1(z));
T(z) = 1/2 (H0(z) F0(z) + H1(z) F1(z)),
where H0 and H1 are decomposition filters, and F0 and F1 are reconstruction filters.
The input signal can be perfectly reconstructed if the aliasing term A(z) is cancelled and T(z) is equal to a monomial. So the necessary condition is that T(z) is generally symmetric and of an odd-by-odd size.
Linear phase PR filters are very useful for image processing. This 2-Channel filter bank is relatively easy to implement. But 2 channels sometimes are not enough for use. 2-channel filter banks can be cascaded to generate multi-channel filter banks.
To understand the working of 2-channel multidimensional filter banks we must first understand the design process of a simple 2D two-channel filter bank. In particular, the diamond filter banks are of special interest in some image coding applications. The decimation matrix M for the diamond filter bank is usually the quincunx matrix which is briefly discussed in the above sections. For a two-channel system, there are only four filters, two analysis filters and two synthesis filters. So in some designs, two or three filters are chosen so that there is no aliasing, and the remaining filters are then optimized to achieve approximate reconstruction. The design of 2D filters is more complex than that of 1D filters, so we usually use appropriate mapping techniques to achieve perfect reconstruction. A polyphase mapping method has been proposed to design an IIR analysis filter. For filter banks with FIR filter types, several 1D to 2D transformations have been considered. For example, the McClellan transformation is used to obtain FIR filter banks.
There has also been some interest in quadrant filters as shown. The decimation matrix for a quadrant filter bank is given by D =
The fan filters are shifted versions of the diamond filters, and hence the diamond filter banks can be designed and then shifted by (π, 0) in the frequency domain to obtain a fan filter. Filter banks in which the filters have parallelogram support are also of some importance. Several parallelogram supports for analysis and synthesis filters are also shown. These filters can be derived from the diamond filters by using a unimodular transformation.
Tree-structured filter banks
For any given subband analysis filter bank, we can split it into further subbands as shown in figure 8. By repeating this operation we can actually build a tree-structured analysis bank. Example of a 1D tree structured filter bank is the one that results in an octave stacking of the passbands. In the 2D case, tree structures based on simple two-channel modules can offer sophisticated band-splitting schemes, especially if we combine the various configurations shown above. The directional filter bank which will be discussed below is one such example.
Multidimensional directional filter banks
M-dimensional directional filter banks (MDFB) are a family of filter banks that can achieve the directional decomposition of arbitrary M-dimensional signals with a simple and efficient tree-structured construction. They have many distinctive properties like: directional decomposition, efficient tree construction, angular resolution and perfect reconstruction.
In the general M-dimensional case, the ideal frequency supports of the MDFB are hypercube-based hyperpyramids. The first level of decomposition for the MDFB is achieved by an N-channel undecimated filter bank, whose component filters are M-D "hourglass"-shaped filters aligned with the ω1, ..., ωM axes, respectively. After that, the input signal is further decomposed by a series of 2-D iteratively resampled checkerboard filter banks IRCli(Li) (i = 2, 3, ..., M), where IRCli(Li) operates on 2-D slices of the input signal represented by the dimension pair (n1, ni) and the superscript (Li) means the level of decomposition for the ith level filter bank. Note that, starting from the second level, we attach an IRC filter bank to each output channel from the previous level, and hence the entire filter bank has a total of 2(L1+...+LN) output channels.
Multidimensional oversampled filter banks
Oversampled filter banks are multirate filter banks where the number of output samples at the analysis stage is larger than the number of input samples. It is proposed for robust applications. One particular class of oversampled filter banks is nonsubsampled filter banks without downsampling or upsampling. The perfect reconstruction condition for an oversampled filter bank can be stated as a matrix inverse problem in the polyphase domain.
For IIR oversampled filter banks, perfect reconstruction has been studied by Wolovich and Kailath in the context of control theory. For FIR oversampled filter banks we have to use different strategies for the 1-D and M-D cases; FIR filters are more popular since they are easier to implement. For 1-D oversampled FIR filter banks, the Euclidean algorithm plays a key role in the matrix inverse problem. However, the Euclidean algorithm fails for multidimensional (MD) filters. For MD filters, we can convert the FIR representation into a polynomial representation and then use algebraic geometry and Gröbner bases to get the framework and the reconstruction conditions for multidimensional oversampled filter banks.
Multidimensional filter banks using Grobner bases
The general multidimensional filter bank (Figure 7) can be represented by a pair of analysis and synthesis polyphase matrices $H(z)$ and $G(z)$ of size $N\times M$ and $M\times N$, where N is the number of channels and $M=|\det(\mathbf{D})|$ is the absolute value of the determinant of the sampling matrix. Also $H(z)$ and $G(z)$ are the z-transforms of the polyphase components of the analysis and synthesis filters. Therefore, they are multivariate Laurent polynomials, which have the general form:
$$F(z_1,\ldots,z_d)=\sum_{k\in\mathbb{Z}^d} f[k]\, z_1^{k_1}\cdots z_d^{k_d},$$ with finitely many nonzero coefficients and possibly negative exponents.
The Laurent polynomial matrix equation that needs to be solved to design perfect reconstruction filter banks is:
$$G(z)\,H(z)=I_{M},$$ where $I_{M}$ is the $M\times M$ identity matrix.
In the multidimensional case with multivariate polynomials we need to use the theory and algorithms of Grobner bases (developed by Buchberger)
"Grobner bases" can be used to characterizing perfect reconstruction multidimensional filter banks, but it first need to extend from polynomial matrices to Laurent polynomial matrices.
The Grobner basis computation can be considered equivalently as Gaussian elimination for solving the polynomial matrix equation .
If we have a set of polynomial vectors $h_1(z),\ldots,h_N(z)$, they generate the module
$$\mathrm{Module}\{h_1(z),\ldots,h_N(z)\}=\{c_1(z)h_1(z)+\cdots+c_N(z)h_N(z)\},$$
where $c_1(z),\ldots,c_N(z)$ are polynomials.
The module is analogous to the span of a set of vectors in linear algebra. The theory of Grobner bases implies that the module has a unique reduced Grobner basis for a
given order of power products in polynomials.
If we define the Grobner basis as $\{b_1(z),\ldots,b_K(z)\}$, it can be
obtained from $\{h_1(z),\ldots,h_N(z)\}$ by a finite sequence of reduction
(division) steps.
Using reverse engineering, we can compute
the basis vectors $b_i(z)$ in terms of the original vectors $h_j(z)$ through a transformation matrix $W_{ij}(z)$ as
$$b_i(z)=\sum_{j=1}^{N}W_{ij}(z)\,h_j(z),\qquad i=1,\ldots,K.$$
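For intuition, Gröbner bases of ordinary polynomial ideals can be computed with SymPy; the following is only a toy illustration (the polynomials are made up, and the filter bank setting above actually involves Laurent polynomial modules, which need extra care):

from sympy import symbols, groebner
z1, z2 = symbols('z1 z2')
F = [z1**2 * z2 - 1, z1 * z2**2 - z1]     # hypothetical generators
G = groebner(F, z1, z2, order='lex')      # reduced Groebner basis of the ideal
print(G)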
Mapping-based multidimensional filter banks
Designing filters with good frequency responses is challenging via the Grobner-bases approach.
Mapping-based design is popularly used to design nonseparable multidimensional filter banks with good frequency responses.
The mapping approaches have certain restrictions on the kind of filters; however, they bring many important advantages, such as efficient implementation via lifting/ladder structures. Here we provide an example of two-channel filter banks in 2D with a quincunx sampling matrix.
We would have several possible choices of ideal frequency responses of the channel filters $H_0(\omega)$ and $G_0(\omega)$. (Note that the other two filters, $H_1(\omega)$ and $G_1(\omega)$, are supported on complementary regions.)
All the frequency regions in Figure can be critically sampled by the rectangular lattice spanned by .
Suppose the filter bank achieves perfect reconstruction
with FIR filters. Then from the polyphase-domain characterization it follows that the filters H1(z) and G1(z) are completely
specified by H0(z) and G0(z), respectively. Therefore, we need to design H0(z) and G0(z), which should have the desired frequency responses and satisfy the polyphase-domain conditions.
There are different mapping techniques that can be used to obtain the above result.
Filter banks design in the frequency domain
If perfect reconstruction with FIR filters is not required, the design problem can be simplified by working in the frequency domain instead of designing FIR filters directly.
Note that the frequency-domain method is not limited to the design of nonsubsampled filter banks.
Directional filter banks
Bamberger and Smith proposed a 2D directional filter bank (DFB).
The DFB is efficiently implemented via an l-level tree-structured decomposition that leads to subbands with wedge-shaped frequency partition (see Figure ).
The original construction of the DFB involves modulating the input signal and using diamond-shaped filters.
Moreover, in order to obtain the desired frequency partition, a complicated tree expanding rule has to be followed. As a result, the frequency regions
for the resulting subbands do not follow a simple ordering as shown in Figure 9 based on the channel indices.
The first advantage of the DFB is that it is not a redundant transform and it offers perfect reconstruction.
Another advantage of the DFB is its directional selectivity and efficient structure.
These advantages make the DFB an appropriate approach for many signal and image processing applications.
Directional filter banks can be extended to higher dimensions. They can be used in 3-D to achieve frequency sectioning.
These kinds of filters can be used for selective filtering, to capture and preserve signal information and features.
Some other advantages of the NDFB can be summarized as follows:
directional decomposition, efficient construction, angular resolution, perfect reconstruction, and small redundancy.
Multidimensional directional filter banks
N-dimensional directional filter banks (NDFB) can be used to capture signal features and information.
There are a number of studies on capturing signal information in 2-D (e.g., the steerable pyramid, the directional filter bank, 2-D directional wavelets, curvelets, complex (dual-tree) wavelets, contourlets, and bandelets), with reviews for instance in.
Conclusion and application
Filter banks play an important role in many aspects of modern signal processing.
They are used in many areas, such as signal and image compression and processing.
Their main use is to divide a signal or system into several separate frequency components.
Depending on the purpose, different methods can be chosen to design the filters.
This article has provided information regarding filter banks, multidimensional filter banks, and different methods for designing multidimensional filters.
We also discussed the MDFB, which is built upon an efficient tree-structured construction that leads to a low redundancy ratio and refinable angular resolution.
By combining the MDFB with a new multiscale pyramid, we can construct the surfacelet transform, which has potential for efficiently capturing and representing surface-like singularities in multidimensional signals.
MDFB and surfacelet transform have applications in various areas that involve the processing of multidimensional volumetric data, including video processing, seismic image processing, and medical image analysis.
Some other advantages of MDFB include: directional decomposition, construction, angular resolution, perfect reconstruction, and small redundancy.
References
Filter theory
Multidimensional signal processing | Multirate filter bank and multidimensional directional filter banks | [
"Engineering"
] | 4,135 | [
"Telecommunications engineering",
"Filter theory"
] |
44,532,469 | https://en.wikipedia.org/wiki/Finite%20volume%20method%20for%20two%20dimensional%20diffusion%20problem | The methods used for solving two dimensional Diffusion problems are similar to those used for one dimensional problems. The general equation for steady diffusion can be easily derived from the general transport equation for property Φ by deleting transient and convective terms
where,
is the Diffusion coefficient and is the Source term.
A portion of the two dimensional grid used for Discretization is shown below:
In addition to the east (E) and west (W) neighbors, a general grid node P now also has north (N) and south (S) neighbors. The same notation is used
here for all faces and cell dimensions as in the one dimensional analysis. When the above equation is formally integrated over the Control volume, we obtain
Using the divergence theorem, the equation can be rewritten as:
This equation represents the balance of generation of the property φ in a Control volume and the fluxes through its cell faces. The derivatives can be represented as follows by using a Taylor series approximation:
Flux across the east face =
Flux across the south face =
Flux across the north face =
Substituting these expressions in equation (2) we obtain
When the source term is represented in linearized form ,
this equation can be rearranged as,
=
This equation can now be expressed in a general discretized equation form for internal nodes, i.e.,
Where,
The face areas in the two dimensional case are:
and
.
We obtain the distribution of the property φ in a given two dimensional situation by writing discretized equations of the form of equation (3) at each grid node of the subdivided domain. At the boundaries where the temperatures or fluxes are known, the discretized equations are modified to incorporate the boundary conditions. The boundary-side coefficient is set to zero (cutting the link with the boundary) and the flux crossing the boundary is introduced as a source, which is appended to any existing and terms. Subsequently, the resulting set of equations is solved to obtain the two dimensional distribution of the property
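The following minimal sketch illustrates the resulting five-point discretization on a uniform grid with a constant diffusion coefficient and zero source term. The grid size and the boundary value of 100 are illustrative, and the boundary handling is simplified to fixed boundary-node values rather than the source-term modification described above.

```python
# A minimal sketch of the five-point discretization described above: steady
# diffusion with constant Gamma, zero source term and fixed boundary values
# on a uniform grid. Values here are illustrative.
import numpy as np

nx, ny = 20, 20            # number of interior nodes in x and y
dx = dy = 1.0              # uniform cell sizes
gamma = 1.0                # diffusion coefficient

# Neighbour coefficients a_E, a_W, a_N, a_S = Gamma * A / d (unit depth).
aE = aW = gamma * dy / dx
aN = aS = gamma * dx / dy
aP = aE + aW + aN + aS     # central coefficient with Sp = 0

phi = np.zeros((ny + 2, nx + 2))   # array includes boundary nodes
phi[0, :] = 100.0                  # one boundary held at phi = 100, the others at 0

# Gauss-Seidel iteration on a_P*phi_P = a_E*phi_E + a_W*phi_W + a_N*phi_N + a_S*phi_S
for _ in range(2000):
    for j in range(1, ny + 1):
        for i in range(1, nx + 1):
            phi[j, i] = (aE * phi[j, i + 1] + aW * phi[j, i - 1] +
                         aN * phi[j - 1, i] + aS * phi[j + 1, i]) / aP

print(round(phi[ny // 2, nx // 2], 3))   # value of phi near the centre of the domain
```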
References
Patankar, Suhas V. (1980), Numerical Heat Transfer and Fluid Flow, Hemisphere.
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
Laney, Culbert B. (1998), Computational Gas Dynamics, Cambridge University Press.
LeVeque, Randall (1990), Numerical Methods for Conservation Laws, ETH Lectures in Mathematics Series, Birkhäuser-Verlag.
Tannehill, John C., et al. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
Wesseling, Pieter (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
Carslaw, H. S. and Jaeger, J. C. (1959), Conduction of Heat in Solids, Oxford: Clarendon Press.
Crank, J. (1956), The Mathematics of Diffusion, Oxford: Clarendon Press.
Thambynayagam, R. K. M. (2011), The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill.
External links
http://opencourses.emu.edu.tr/course/view.php?id=27&lang=en
https://web.archive.org/web/20120303230200/http://nptel.iitm.ac.in/courses/112105045/
http://ingforum.haninge.kth.se/armin/CFD/dirCFD.htm
Finite volume method, Cheng Long
Finite volume method, Robert Eymard et al. (2010), Scholarpedia,5(6):9835
See also
Computational fluid dynamics
Finite difference
Heat equation
Fokker–Planck equation
Fick's laws of diffusion
Maxwell–Stefan equation
Diffusion equation
Convection–diffusion equation
Computational fluid dynamics | Finite volume method for two dimensional diffusion problem | [
"Physics",
"Chemistry"
] | 796 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
44,535,093 | https://en.wikipedia.org/wiki/Vladimir%20Kadyshevsky | Vladimir Kadyshevsky (5 May 1937 – 24 September 2014) was a Russian theoretical physicist.
Biography
Kadyshevsky was born on 5 May 1937 in Moscow. He studied at the Suvorov Military School in Sverdlovsk from 1946 to 1954, before entering the physics department of the Lomonosov Moscow State University (MSU).
He graduated in 1960 and continued his studies as a postgraduate under Nikolay Bogolyubov. He successfully defended his PhD thesis in 1962, before starting work at the Laboratory of Theoretical Physics of JINR.
In 1977-78 he headed the group of Soviet physicists working at Fermilab.
In 1983-85 he was leader of JINR’s programme for the DELPHI experiment at CERN’s LEP collider.
He was director of the JINR Laboratory of Theoretical Physics from 1987 to 1992, and director of JINR from
1992 to 2005.
External links
Vladimir Georgievich Kadyshevsky
Vladimir Kadyshevsky Celebrates His 70th in Style
Vladimir Georgievich Kadyshevsky 1938–2014 (obituary)
Scientific publications of Vladimir Kadyshevsky on INSPIRE-HEP
1937 births
2014 deaths
Moscow State University alumni
Academic staff of Moscow State University
Russian physicists
Russian theoretical physicists
Particle physicists
Full Members of the Russian Academy of Sciences
Foreign members of the Bulgarian Academy of Sciences
Foreign fellows of the Indian National Science Academy
People associated with CERN
Recipients of the Order of Honour (Russia) | Vladimir Kadyshevsky | [
"Physics"
] | 296 | [
"Particle physicists",
"Particle physics"
] |
34,868,725 | https://en.wikipedia.org/wiki/EXOC3L2 | Exocyst complex component 3-like 2 is a protein that in humans is encoded by the EXOC3L2 gene.
The EXOC3L2 protein has been shown to interact with EXOC4, which is a component of the exocyst complex involved in exocytosis and, more specifically, in the targeting of exocytic vesicles to the cell membrane.
The exocyst complex is important for several biological processes, such as the establishment of cell polarity and regulation of cell migration. The structure and functions of the exocyst complex are conserved from yeast to higher eukaryotes. Endothelial cells in blood vessels express high levels of EXOC3L2 that is required for proper VEGFR-2 signaling so that the endothelial cells can migrate towards the growth factor VEGF-A.
References
Protein complexes
Proteins | EXOC3L2 | [
"Chemistry"
] | 173 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
34,869,196 | https://en.wikipedia.org/wiki/Porod%27s%20law | In X-ray or neutron small-angle scattering (SAS), Porod's law, discovered by Günther Porod, describes the asymptote of the scattering intensity I(q) for large scattering wavenumbers q.
Context
Porod's law is concerned with wave numbers q that are small compared to the scale of usual Bragg diffraction; typically . In this range, the sample must not be described at an atomistic level; one rather uses a continuum description in terms of an electron density or a neutron scattering length density. In a system composed of distinct mesoscopic particles, all small-angle scattering can be understood as arising from surfaces or interfaces. Normally, SAS is measured in order to detect correlations between different interfaces, and in particular, between remote surface segments of one and the same particle. This allows conclusions about the size and shape of the particles, and their correlations.
Porod's q is relatively large on the usual scale of SAS. In this regime, correlations between remote surface segments and inter-particle correlations are so random that they average out. Therefore one sees only the local interface roughness.
Standard form
If the interface is flat, then Porod's law predicts the scattering intensity
where S is the surface area of the particles, which can in this way be experimentally determined. The power law q⁻⁴ corresponds to the factor 1/sin⁴θ in the Fresnel equations of reflection.
Generalized form
Since the advent of fractal mathematics it has become clear that Porod's law requires adaptation for rough interfaces because the value of the surface S may be a function of q (the yardstick by which it is measured). In the case of a fractally rough surface with a dimensionality d between 2 and 3, Porod's law becomes:
Thus, if plotted logarithmically, the slope of ln(I) versus ln(q) would vary between −4 and −3 for such a surface fractal. Slopes less negative than −3 are also possible in fractal theory; they are described using a volume fractal model, in which the whole system is mathematically self-similar, although this is not usually truly the case in nature.
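As a numerical illustration of this log-log slope analysis, the following minimal sketch fits the exponent of synthetic, artificially generated intensity data; the prefactor, q-range and noise level are arbitrary choices, not measured values.

```python
# A minimal numerical sketch: recovering the Porod exponent from the slope of
# ln(I) versus ln(q) for synthetic intensity data.
import numpy as np

rng = np.random.default_rng(0)
q = np.logspace(-1, 0, 50)                       # wavenumbers in the Porod regime (arbitrary units)
I = 2.5 * q**-4.0 * (1 + 0.02 * rng.standard_normal(q.size))

slope, intercept = np.polyfit(np.log(q), np.log(I), 1)
print(f"fitted slope = {slope:.2f}")             # close to -4 for a smooth interface

# For a surface fractal of dimension d (2 < d < 3) the expected slope is -(6 - d),
# i.e. between -4 (smooth, d = 2) and -3 (d = 3), as stated above.
```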
Derivation
as Form factor asymptote
For a specific model system, e.g. a dispersion of uncorrelated spherical particles, one can derive Porod's law by computing the scattering function S(q) exactly, averaging over slightly different particle radii, and taking the limit .
by considering just an interface
Alternatively, one can express S(q) as a double surface integral, using Ostrogradsky's theorem. For a flat surface in the xy-plane, one obtains
Taking the spherical average over possible directions of the vector q, one obtains Porod's law in the form
Notes
References
Small-angle scattering
X-ray scattering
Neutron scattering | Porod's law | [
"Chemistry"
] | 601 | [
"X-ray scattering",
"Scattering",
"Neutron scattering"
] |
34,870,612 | https://en.wikipedia.org/wiki/GcMAF | GcMAF (or Gc protein-derived macrophage activating factor) is a protein produced by modification of vitamin D-binding protein. It has been falsely promoted as a treatment for various medical conditions, but claims of its benefits are not supported by evidence.
Biochemically, GcMAF results from sequential deglycosylation of the vitamin D-binding protein (the Gc protein), which is naturally promoted by lymphocytes (B and T cells). The resulting protein may be a macrophage activating factor (MAF). MAFs are lymphokines that control the expression of antigens on the surface of macrophages, and one of their functions is to make macrophages become cytotoxic to tumors.
False claims
Since around 2008, GcMAF has been promoted as a cure for cancer, HIV, autism and other conditions.
Three of the four original studies authored by Yamamoto (published between 2007 and 2009) were retracted in 2014 by the scientific journals in which they were published, officially due to irregularities in the way ethical approval was granted. Retraction reasons also included methodological errors in the studies. The integrity of the research, conducted by Nobuto Yamamoto and colleagues, that originally prompted claims regarding cancer and HIV has been questioned.
The UK Medicines and Healthcare products Regulatory Agency and Cancer Research UK have warned the public about spurious claims of clinical benefits, misleadingly based on reduced levels of the alpha-N-acetylgalactosaminidase enzyme (also known as nagalase), whose production might be increased in many cancers.
In 2014 the Belgian Anticancer Fund communicated serious concerns about published studies on GcMAF by Yamamoto and colleagues.
In 2015 the UK Medicines and Healthcare products Regulatory Agency (MHRA) closed a factory in Milton, Cambridgeshire owned by David Noakes' company Immuno Biotech that manufactured GcMAF for cancer treatment.
In September 2018 Noakes pleaded guilty in the UK to manufacturing a medicinal product without a manufacturer's licence, selling or supplying medicinal products without market authorisation, and money laundering, and was sentenced to 15 months in jail.
In April 2021 Noakes pleaded guilty in France to manufacturing and selling fake medicinal products and cosmetics over the Internet and was sentenced to 4 years in jail.
A 2019 Business Insider report detailed the activities of Amanda Mary Jewell, who sold GcMAF for years as an unlicensed cure for several medical conditions, including cancer and autism. Jewell is not a medical doctor.
First Generation GcMAF
Gc protein-derived macrophage-activating factor (GcMAF), initially conceptualized by Nobuto Yamamoto in 1991, has been researched as a possible cancer treatment. Previous research efforts involved the isolation of Gc protein (1f1f subtype) from human serum through an affinity column modified with 25-hydroxyvitamin D3. GcMAF was enzymatically derived from the isolated Gc protein.
New Generation of GcMAF from Japan
The 2nd and 3rd generation GcMAF were developed by Japanese organizations which hold the related patents in the USA (2014, 2016, 2017), Japan (2015), the EU (2016), Australia (2016), and Israel (2018).
See also
List of ineffective cancer treatments
References
Alternative cancer treatments
Cytokines
Human proteins
Vitamin D
Health fraud products | GcMAF | [
"Chemistry"
] | 695 | [
"Cytokines",
"Signal transduction"
] |
34,870,634 | https://en.wikipedia.org/wiki/Slotted%20angle | Slotted angle (also sometimes referred to as slotted angle iron) is a system of reusable metal strips used to construct shelving, frames, work benches, equipment stands and other structures. The name derives, first, from the use of elongated slots punched into the metal at uniform intervals to enable assembly of structures fixed with nuts and bolts, and second, from the longitudinal folding of the metal strips to form a right angle.
Invention
Prototype slotted angle strips were developed by London-based engineer Demetrius Comino in the late 1930s, as he sought alternatives to conventional wooden shelving in his printing works. Comino owned an engineering business, Dexion Ltd, which began production in 1947 and the steel slotted angle strips eventually became known as Dexion.
The prior existence of Meccano prevented a generic patent so Dexion patents were restricted to particular slot and hole configurations, and, seeking to emulate Dexion's success, other UK and European companies began offering different sizes, hole patterns and metal strip thicknesses.
Production
Steel remains the most commonly used slotted angle material, although aluminium alternatives are also available. The product is generally manufactured from sheet metal using machine presses to form the angle and to punch holes through the metal. The strips are normally produced in a variety of standard lengths, and steel versions are often painted or galvanized to protect them from rust.
Construction
To construct items from slotted angle, items can be cut to size (some versions are marked to show the optimum points at which to cut the metal) using special slotted angle cutters or shears, and then fixed with nuts and bolts. Tension plates and other metal strips are also available to add strength to the finished structure.
References
Warehouses
Building materials
Metalworking | Slotted angle | [
"Physics",
"Engineering"
] | 352 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
34,871,920 | https://en.wikipedia.org/wiki/Labelled%20enumeration%20theorem | In combinatorial mathematics, the labelled enumeration theorem is the counterpart of the Pólya enumeration theorem for the labelled case, where we have a set of labelled objects given by an exponential generating function (EGF) g(z) which are being distributed into n slots and a permutation group G which permutes the slots, thus creating equivalence classes of configurations. There is a special re-labelling operation that re-labels the objects in the slots, assigning labels from 1 to k, where k is the total number of nodes, i.e. the sum of the number of nodes of the individual objects. The EGF of the number of different configurations under this re-labelling process is given by
In particular, if G is the symmetric group of order n (hence, |G| = n!), the functions can be further combined into a single generating function:
which is exponential w.r.t. the variable z and ordinary w.r.t. the variable t.
The re-labelling process
We assume that an object of size represented by contains labelled internal nodes, with the labels going from 1 to m. The action of G on the slots is greatly simplified compared to the unlabelled case, because the labels distinguish the objects in the slots, and the orbits under G all have the same size . (The EGF g(z) may not include objects of size zero. This is because they are not distinguished by labels and therefore the presence of two or more of such objects creates orbits whose size is less than .) As mentioned, the nodes of the objects are re-labelled when they are distributed into the slots. Say an object of size goes into the first slot, an object of size into the second slot, and so on, and the total size of the configuration is k, so that
The re-labelling process works as follows: choose one of
partitions of the set of k labels into subsets of size
Now re-label the internal nodes of each object using the labels from the respective subset, preserving the order of the labels. E.g. if the first object contains four nodes labelled from 1 to 4 and the set of labels chosen for this object is {2, 5, 6, 10}, then node 1 receives the label 2, node 2, the label 5, node 3, the label 6 and node 4, the label 10. In this way the labels on the objects induce a unique labelling using the labels from the subset of chosen for the object.
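The re-labelling step can be expressed in a few lines of code; the minimal sketch below reproduces the example given above (an object with four nodes and chosen label set {2, 5, 6, 10}).

```python
# A minimal sketch of the re-labelling step: nodes keep their relative order but
# receive the labels from the subset chosen for their object.
def relabel(num_nodes, chosen_labels):
    """Map internal node i (1-based) to the i-th smallest label of the chosen subset."""
    ordered = sorted(chosen_labels)
    return {node: ordered[node - 1] for node in range(1, num_nodes + 1)}

print(relabel(4, {2, 5, 6, 10}))   # {1: 2, 2: 5, 3: 6, 4: 10}
```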
Proof of the theorem
It follows from the re-labelling construction that there are
or
different configurations of total size k. The formula evaluates to an integer because is zero for k < n (remember that g does not include objects of size zero) and when we have and the order of G divides the order of , which is , by Lagrange's theorem. The conclusion is that the EGF of the labelled configurations is given by
This formula could also be obtained by enumerating sequences, i.e. the case when the slots are not being permuted, and by using the above argument without the -factor to show that their generating function under re-labelling is given by . Finally note that every sequence belongs to an orbit of size , hence the generating function of the orbits is given by
References
François Bergeron, Gilbert Labelle, Pierre Leroux, Théorie des espèces et combinatoire des structures arborescentes, LaCIM, Montréal (1994). English version: Combinatorial Species and Tree-like Structures, Cambridge University Press (1998).
Enumerative combinatorics
Theorems in combinatorics
Articles containing proofs | Labelled enumeration theorem | [
"Mathematics"
] | 760 | [
"Theorems in combinatorics",
"Enumerative combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Articles containing proofs"
] |
34,873,594 | https://en.wikipedia.org/wiki/Exsphere%20%28polyhedra%29 | In geometry, the exsphere of a face of a regular polyhedron is the sphere outside the polyhedron which touches the face and the planes defined by extending the adjacent faces outwards. It is tangent to the face externally and tangent to the adjacent faces internally.
It is the 3-dimensional equivalent of the excircle.
The sphere is more generally well-defined for any face which is a regular
polygon and delimited by faces with the same dihedral angles
at the shared edges. Semi-regular polyhedra often
have different types of faces, which define exspheres of different sizes, one for each type of face.
Parameters
The exsphere touches the face of the regular polyhedron at the center
of the incircle of that face. If the exsphere radius is denoted , the radius of this incircle
and the dihedral angle between the face and the extension of the
adjacent face , the center of the exsphere
is located from the viewpoint at the middle of one edge of the
face by bisecting the dihedral angle. Therefore
is the 180-degree complement of the
internal face-to-face angle.
Tetrahedron
Applied to the geometry of the Tetrahedron of edge length ,
we have an incircle radius (derived by dividing twice the face area by the
perimeter ), a dihedral angle , and in consequence .
Cube
The radius of the exspheres of the 6 faces of the Cube
is the same as the radius of the inscribed
sphere, since and its complement are the same, 90 degrees.
Icosahedron
The dihedral angle applicable to the Icosahedron is derived by
considering the coordinates of two triangles with a common edge,
for example one face with vertices
at
the other at
where is the golden ratio. Subtracting vertex coordinates
defines edge vectors,
of the first face and
of the other. Cross products of the edges of the first face and second
face yield (not normalized) face normal vectors
of the first and
of the second face, using .
The dot product between these two face normals yields the cosine
of the dihedral angle,
For an icosahedron of edge length , the incircle radius of the triangular faces is , and finally the radius of the 20 exspheres
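A short numerical sketch of these three cases is given below. It assumes the relation r_ex = r_in · tan(β/2) implied by the bisection construction described above, where β is the 180-degree complement of the internal dihedral angle and r_in is the incircle radius of the face; edge length is taken as a = 1.

```python
# A minimal numerical sketch, assuming r_ex = r_in * tan(beta / 2) with
# beta = 180 degrees minus the internal dihedral angle.
import math

def exsphere_radius(r_in, dihedral):
    """Exsphere radius from the face incircle radius and the internal dihedral angle."""
    beta = math.pi - dihedral
    return r_in * math.tan(beta / 2)

a = 1.0
r_in_triangle = a / (2 * math.sqrt(3))   # incircle radius of an equilateral triangular face
r_in_square = a / 2                      # incircle radius of a square face

print(exsphere_radius(r_in_triangle, math.acos(1 / 3)))              # tetrahedron, ~0.4082 (= a / sqrt(6))
print(exsphere_radius(r_in_square, math.pi / 2))                     # cube, 0.5 (equals the insphere radius)
print(exsphere_radius(r_in_triangle, math.acos(-math.sqrt(5) / 3)))  # icosahedron faces, ~0.1103
```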
See also
Insphere
External links
Geometry | Exsphere (polyhedra) | [
"Mathematics"
] | 465 | [
"Geometry"
] |
34,874,358 | https://en.wikipedia.org/wiki/Isolation%20valve | An isolation valve is a valve in a fluid handling system that stops the flow of process media to a given location, usually for maintenance or safety purposes. They can also be used to provide flow logic (selecting one flow path versus another), and to connect external equipment to a system. A valve is classified as an isolation valve because of its intended function in a system, not because of the type of the valve itself. Therefore, many different types of valves can be classified as isolation valves.
To easily understand the concept of an isolation valve, one can think of the valves under a kitchen or bathroom sink in a typical household. These valves are normally left open so that the user can control the flow of water with the spigot above the sink, and does not need to reach under the counter to start or stop the water flow. However, if the spigot needs to be replaced (i.e. maintenance needs to take place on the system), the isolation valves are shut to stop the flow of water when the spigot is removed. In this system, the isolation valves and the spigot may even be the same type of valve. However, due to their function they are classified as the isolation valves and, in the case of the spigot, the control valves. As the isolation valve is intended to be operated infrequently and only in the fully on or fully off positions, they are often inferior quality globe valves. These less expensive styles lack a bonnet and stem seal in favor of threading the stem directly into the body. The stem is covered with a rubber washer and metal cap similar in appearance to a gland nut. Because they lack a stem seal they will leak unless fully closed and installed in the correct direction or fully open, causing the disk to compress the top washer against the stem.
Process plant practice
Isolation valves can be in the normally open position (NO) or normally closed (NC). Normally open valves are located between pressure vessels, pumps, compressors, tanks, pressure sensors, liquid level measurement instrumentation and other components and allow fluids to flow between components, or to be connected to sensors. The controlled closure of open valves enables the isolation of plant components for testing or maintenance of equipment, or allows flow of fluid to specific flow paths. Normally closed valves are used to connect fluids and process components to other systems only when required. Vent and drain valves are examples of normally closed valves which are only opened when required to depressurise (vent) or drain fluids from a system.
Isolation valves must effectively stop the passage of fluids. Gate valves, ball valves and plug valves are generally considered to provide tight and effective shut-off. Globe valves and Butterfly valves may not be tight shut-off due to wear on the plug or the seat, or due to their design, and may not be appropriate to provide effective isolation.
Some valves are in a safety critical service and are secured, or otherwise locked, in an open or closed position. Plant shutdown instrumentation must be effectively connected to the plant at all times, therefore the isolation valves associated with such equipment must be secured in the open position to prevent inadvertent movement or closure. Securing mechanisms include car-seals, chain and padlocks and proprietary securing devices. Isolation valves in a flare, relief or vent system must ensure that a flow path is always available to the flare or vent. These valves are secured in the open position (LO). Drain valves that connect a high pressure system to a low pressure drain system are locked in the closed position (LC) to prevent potential over-pressurisation of the drain system. Removal of locks from secured valves is only undertaken in specified and controlled conditions such as under a ‘permit to work’ system. Some relief or pressure relief valves are ‘paired’ to provide a duty and a standby valve, the associated isolation valves are interlocked such that at least one relief valve is connected to the system being protected at all times.
A single valve may provide effective isolation between the live plant and the system being maintained. However, for hazardous systems a more effective means of isolation is required. This may comprise a ‘double block’ consisting of two valves in series. Still more effective is a ‘double block and bleed’ comprising two isolation valves in series plus a bleed valve between them. The bleed valve enables the integrity of the valve on the hazardous side to be monitored.
Common applications
Firewater control
Pipeline safety systems
Residential plumbing systems (both water and gas)
Nuclear reactors
Oil and gas wells
Chemical plant
Oil production plant
Power plant
See also
Valves
Shutdown valve
Globe valve
Ball valve
Gate valve
Plug valve
Butterfly valves
Piping & Instrumentation Diagrams
References
Piping
Valves
Plumbing
Water industry | Isolation valve | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 952 | [
"Hydrology",
"Building engineering",
"Chemical engineering",
"Plumbing",
"Physical systems",
"Construction",
"Valves",
"Hydraulics",
"Water industry",
"Mechanical engineering",
"Piping"
] |
34,874,546 | https://en.wikipedia.org/wiki/C20H21NO |
The molecular formula C20H21NO (molar mass: 291.39 g/mol, exact mass: 291.1623 u) may refer to:
Butinoline (Azulone)
Cotriptyline (SD-2203-01)
Danitracen
JWH-030
PRC200-SS
Molecular formulas | C20H21NO | [
"Physics",
"Chemistry"
] | 83 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
40,143,821 | https://en.wikipedia.org/wiki/Mesozoic%E2%80%93Cenozoic%20radiation | The Mesozoic–Cenozoic Radiation is the third major extended increase of biodiversity in the Phanerozoic, after the Cambrian Explosion and the Great Ordovician Biodiversification Event, and it appears to have exceeded the equilibrium reached after the Ordovician radiation. Made known by its identification in marine invertebrates, this evolutionary radiation began in the Mesozoic, after the Permian extinctions, and continues to this date. This spectacular radiation affected both terrestrial and marine flora and fauna, during which the "modern" fauna came to replace much of the Paleozoic fauna. Notably, this radiation event was marked by the rise of angiosperms during the mid-Cretaceous, and the K-Pg extinction, which initiated the rapid increase in mammalian biodiversity.
Causes and significance
The exact causes of this extended increase in biodiversity are still being debated, however, the Mesozoic-Cenozoic radiation has often been related to large-scale paleogeographical changes. The fragmentation of the supercontinent Pangaea has been related to an increase in both marine and terrestrial biodiversity. The link between the fragmentation of supercontinents and biodiversity was first proposed by Valentine and Moores in 1972. They hypothesized that the isolation of terrestrial environments and the partitioning of oceanic water masses, as a result of the breaking up of Pangaea, resulted in an increase in allopatric speciation, which led to an increased biodiversity. These smaller landmasses, while individually being less diverse than a supercontinent, contain a high degree of endemic species, resulting in an overall higher biodiversity than a single landmass of equivalent size. It is therefore argued that, similarly to the Ordovician bio-diversification, the differentiation of biotas along environmental gradients caused by the fragmentation of a supercontinent, was a driving force behind the Mesozoic-Cenozoic radiation.
Part of the dramatic increase in biodiversity during this time was caused by the evolutionary radiation of flowering plants, or angiosperms, during the mid-Cretaceous. Characteristics of this clade associated with reproduction served as a key innovation, and led to a burst of evolution known as the Cretaceous Terrestrial Revolution. The angiosperms later diversified further and co-radiated with pollinating insects, increasing biodiversity.
A third factor which played a role in the Mesozoic-Cenozoic radiation was the K-Pg extinction, which marked the end of the dinosaurs and, surprisingly, resulted in a massive increase in biodiversity of terrestrial tetrapods, which can almost entirely be attributed to the radiation of mammals. There are multiple things which could have caused this deviation from the equilibrium, one of which is that before the K-Pg extinction an equilibrium had been reached which limited biodiversity. The extinction event reorganized the fundamental ecology, on which diversity is built and maintained. After these reorganized ecosystems stabilized, a new, higher equilibrium was reached, which was maintained during the Cenozoic. Cenozoic biodiversity reached a peak twice as high as the biodiversity peak during the Palaeozoic.
Pull of the recent
One effect which has to be taken into account when estimating past biodiversity levels is the pull of the recent, which describes a phenomenon in the fossil record which causes biodiversity estimates to be skewed towards modern taxa. This bias towards recent taxa is caused by a better availability of more recent fossil records. In mammals it has also been argued that the complexity of teeth, allowing for precise taxonomic identification of fragmentary fossils, increases their perceived diversity when compared to other clades at the time. The contribution of this effect to the apparent increase in biodiversity is still unclear and heavily debated.
See also
Evolutionary faunas, the third of which corresponds to the Mesozoic–Cenozoic radiation
References
Evolutionary biology
Mesozoic events
Cenozoic events
Biogeography | Mesozoic–Cenozoic radiation | [
"Biology"
] | 794 | [
"Evolutionary biology",
"Biogeography"
] |
40,145,070 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28bit%20rate%29 | An order of magnitude is generally a factor of ten. A quantity growing by four orders of magnitude implies it has grown by a factor of 10000 or 10⁴. However, because computers are binary, orders of magnitude are sometimes given as powers of two.
This article presents a list of multiples, sorted by orders of magnitude, for bit rates measured in bits per second. Since some bit rates may be measured in other quantities of data or time (like MB/s), information to assist with converting to and from these formats is provided. This article assumes the following:
A group of 8 bits (8 bit) constitutes one byte (1 B). The byte is the most common unit of measurement of information (megabyte, mebibyte, gigabyte, gibibyte, etc.).
The decimal SI prefixes kilo, mega, etc. are powers of 10. Their power-of-two equivalents are the binary prefixes kibi, mebi, etc.
Accordingly:
1 kB (kilobyte) = 1000 bytes = 8000 bits
1 KiB (kibibyte) = 2¹⁰ bytes = 1024 bytes = 8192 bits
1 kbit (kilobit) = 125 bytes = 1000 bits
1 Kibit (kibibit) = 2¹⁰ bits = 1024 bits = 128 bytes
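The conversions listed above can be checked with a few lines of code; the 3 MB/s transfer rate in the example is an arbitrary illustrative value.

```python
# A quick check of the conversions above (all values in bits).
KILOBYTE_BITS = 1000 * 8    # 1 kB    = 8000 bits
KIBIBYTE_BITS = 2**10 * 8   # 1 KiB   = 8192 bits
KILOBIT_BITS = 1000         # 1 kbit  = 1000 bits = 125 bytes
KIBIBIT_BITS = 2**10        # 1 Kibit = 1024 bits = 128 bytes

rate_bytes_per_s = 3_000_000          # 3 MB/s
rate_bits_per_s = rate_bytes_per_s * 8
print(rate_bits_per_s)                # 24000000 bit/s
print(rate_bits_per_s / 1_000_000)    # 24.0 Mbit/s
```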
See also
Data-rate units
List of interface bit rates
Spectral efficiency
Orders of magnitude (data)
Orders of magnitude (time)
References
Bit rate | Orders of magnitude (bit rate) | [
"Mathematics"
] | 297 | [
"Quantity",
"Orders of magnitude",
"Units of measurement"
] |
40,147,217 | https://en.wikipedia.org/wiki/Bipolar%20orientation | In graph theory, a bipolar orientation or st-orientation of an undirected graph is an assignment of a direction to each edge (an orientation) that causes the graph to become a directed acyclic graph with a single source s and a single sink t, and an st-numbering of the graph is a topological ordering of the resulting directed acyclic graph.
Definitions and existence
Let G = (V,E) be an undirected graph with n = |V| vertices. An orientation of G is an assignment of a direction to each edge of G, making it into a directed graph. It is an acyclic orientation if the resulting directed graph has no directed cycles. Every acyclically oriented graph has at least one source (a vertex with no incoming edges) and at least one sink (a vertex with no outgoing edges); it is a bipolar orientation if it has exactly one source and exactly one sink. In some situations, G may be given together with two designated vertices s and t; in this case, a bipolar orientation for s and t must have s as its unique source and t as its unique sink.
An st-numbering of G (again, with two designated vertices s and t) is an assignment of the integers from 1 to n to the vertices of G, such that
each vertex is assigned a distinct number,
s is assigned the number 1,
t is assigned the number n, and
if a vertex v is assigned the number i with 1 < i < n, then at least one neighbor of v is assigned a smaller number than i and at least one neighbor of v is assigned a larger number than i.
A graph has a bipolar orientation if and only if it has an st-numbering. For, if it has a bipolar orientation, then an st-numbering may be constructed by finding a topological ordering of the directed acyclic graph given by the orientation, and numbering each vertex by its position in the ordering. In the other direction, every st-numbering gives rise to a topological ordering, in which each edge of G is oriented from its lower-numbered endpoint to its higher-numbered endpoint. In a graph containing edge st, an orientation is bipolar if and only if it is acyclic and the orientation formed by reversing edge st is totally cyclic.
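A minimal sketch of this correspondence is given below: every edge is oriented from its lower-numbered endpoint to its higher-numbered endpoint, and the result is checked for a single source (s) and a single sink (t). The small example graph and its st-numbering are illustrative choices.

```python
# A minimal sketch: derive a bipolar orientation from an st-numbering and
# verify that it has exactly one source and one sink.
def orient_from_st_numbering(edges, number):
    """Direct each undirected edge from the lower-numbered to the higher-numbered endpoint."""
    return [(u, v) if number[u] < number[v] else (v, u) for u, v in edges]

def sources_and_sinks(vertices, directed_edges):
    has_in = {v: False for v in vertices}
    has_out = {v: False for v in vertices}
    for u, v in directed_edges:
        has_out[u] = True
        has_in[v] = True
    return ([v for v in vertices if not has_in[v]],
            [v for v in vertices if not has_out[v]])

vertices = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
numbering = {"s": 1, "a": 2, "b": 3, "t": 4}   # an st-numbering of this graph

oriented = orient_from_st_numbering(edges, numbering)
print(oriented)
print(sources_and_sinks(vertices, oriented))   # (['s'], ['t'])
```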
A connected graph G, with designated vertices s and t, has a bipolar orientation and an st-numbering if and only if the graph formed from G by adding an edge from s to t is 2-vertex-connected. In one direction, if this graph is 2-vertex-connected, then a bipolar orientation may be obtained by consistently orienting each ear in an ear decomposition of the graph. In the other direction, if the graph is not 2-vertex-connected, then it has an articulation vertex v separating some biconnected component of G from s and t. If this component contains a vertex with a lower number than v, then the lowest-numbered vertex in the component cannot have a lower-numbered neighbor, and symmetrically if it contains a vertex with a higher number than v then the highest-numbered vertex in the component cannot have a higher-numbered neighbor.
Applications to planarity
formulated st-numberings as part of a planarity testing algorithm, and formulated bipolar orientations as part of an algorithm for constructing tessellation representations of planar graphs.
A bipolar orientation of a planar graph results in an st-planar graph, a directed acyclic planar graph with one source and one sink. These graphs are of some importance in lattice theory as well as in graph drawing: the Hasse diagram of a two-dimensional lattice is necessarily st-planar, and every transitively reduced st-planar graph represents a two-dimensional lattice in this way. A directed acyclic graph G has an upward planar drawing if and only if G is a subgraph of an st-planar graph.
Algorithms
It is possible to find an st-numbering, and a bipolar orientation, of a given graph with designated vertices s and t, in linear time using depth-first search. The algorithm of uses a depth-first search that starts at vertex s and first traverses edge st. As in the depth-first-search based algorithm for testing whether a graph is biconnected, this algorithm defines pre(v), for a vertex v, to be the preorder number of v in the depth-first traversal, and low(v) to be the smallest preorder number that can be reached by following a single edge from a descendant of v in the depth-first search tree. Both of these numbers may be computed in linear time as part of the depth-first search. The given graph will be biconnected (and will have a bipolar orientation) if and only if t is the only child of s in the depth-first search tree and low(v) < pre(v) for all vertices v other than s. Once these numbers have been computed, Tarjan's algorithm performs a second traversal of the depth-first search tree, maintaining a number sign(v) for each vertex v and a linked list of vertices that will eventually list all vertices of the graph in the order given by an st-numbering. Initially, the list contains s and t, and sign(s) = −1. When each vertex v is first encountered by this second traversal, v is inserted into the list, either before or after its parent p(v) in the depth-first search tree according to whether sign(low(v)) is negative or positive respectively; then sign(p(v)) is set to −sign(low(v)). As Tarjan shows, the vertex ordering resulting from this procedure gives an st-numbering of the given graph.
Alternatively, efficient sequential and parallel algorithms may be based on ear decomposition. While the DFS-based algorithms above depend inherently on the special open ear decomposition caused by the underlying DFS-tree, the open ear decomposition here may be arbitrary. This more general approach is actually used by several applications, e.g. for computing (edge-)independent spanning trees. An open ear decomposition exists if and only if the graph formed from the given graph by adding an edge st is biconnected (the same condition as the existence of a bipolar orientation), and it can be found in linear time. An st-orientation (and thus also an st-numbering) may be obtained easily by directing each ear in a consistent direction, taking care that if there already exists a directed path connecting the same two endpoints among the edges of previous ears then the new ear must be oriented in the same direction. However, despite the simplicity of this folklore approach, obtaining a linear running time is more involved. Whenever an ear is added, the endpoints of this ear must be checked on reachability, or, equivalently for the st-numbering, which vertex comes first in the preliminary st-numbering before. This obstacle can be solved in worst-case constant time by using the (somewhat involved) order data structure, or by more direct methods. provide a complicated but localized search procedure for determining an appropriate orientation for each ear that (unlike the approach using depth-first search) is suitable for parallel computation.
A modern and simple algorithm that computes st-numberings and -orientations in linear time is given in. The idea of this algorithm is to replace the order data structure by an easy numbering scheme, in which vertices carry intervals instead of st-numbers.
report on algorithms for controlling the lengths of the directed paths in a bipolar orientation of a given graph, which in turn leads to some control over the width and height of certain types of graph drawing.
The space of all orientations
For 3-vertex-connected graphs, with designated vertices s and t, any two bipolar orientations may be connected to each other by a sequence of operations that reverse one edge at a time, at each step maintaining a bipolar orientation. More strongly, for planar 3-connected graphs, the set of bipolar orientations can be given the structure of a finite distributive lattice, with the edge-reversal operation corresponding to the covering relation of the lattice. For any graph with designated source and sink, the set of all bipolar orientations may be listed in polynomial time per orientation.
st-edge-numberings and -orientations
One may construct an ordering that is similar to st-numberings by numbering edges instead of vertices. This is equivalent to st-numbering the line graph of the input graph. Although constructing the line-graph explicitly would take quadratic time, linear-time algorithms for computing an st-edge-numbering and st-edge-orientation of a graph are known.
See also
Convex embedding, a higher-dimensional generalization of bipolar orientations
References
Graph theory objects | Bipolar orientation | [
"Mathematics"
] | 1,816 | [
"Mathematical relations",
"Graph theory objects",
"Graph theory"
] |
40,150,317 | https://en.wikipedia.org/wiki/Codon%20degeneracy | Degeneracy or redundancy of codons is the redundancy of the genetic code, exhibited as the multiplicity of three-base pair codon combinations that specify an amino acid. The degeneracy of the genetic code is what accounts for the existence of synonymous mutations.
Background
Degeneracy of the genetic code was identified by Lagerkvist. For instance, codons GAA and GAG both specify glutamic acid and exhibit redundancy; but neither specifies any other amino acid, and thus they demonstrate no ambiguity.
The codons encoding one amino acid may differ in any of their three positions; however, more often than not, this difference is in the second or third position. For instance, the amino acid glutamic acid is specified by GAA and GAG codons (difference in the third position); the amino acid leucine is specified by UUA, UUG, CUU, CUC, CUA, CUG codons (difference in the first or third position); and the amino acid serine is specified by UCA, UCG, UCC, UCU, AGU, AGC (difference in the first, second, or third position).
Degeneracy results because there are more codons than encodable amino acids. For example, if there were two bases per codon, then only 16 amino acids could be coded for (4²=16). Because at least 21 codes are required (20 amino acids plus stop) and the next largest number of bases is three, then 4³ gives 64 possible codons, meaning that some degeneracy must exist.
Terminology
A position of a codon is said to be an n-fold degenerate site if only n of the four possible nucleotides (A, C, G, T) at this position specify the same amino acid. A nucleotide substitution at a 4-fold degenerate site is always a synonymous mutation with no change to the amino acid.
A less degenerate site would produce a nonsynonymous mutation for some of the substitutions. The only example of a 3-fold degenerate site is the third position of an isoleucine codon: AUU, AUC, or AUA all encode isoleucine, but AUG encodes methionine. In computation, this position is often treated as a twofold degenerate site.
A position is said to be non-degenerate if any mutation at this position changes the amino acid. For example, all three positions of methionine's AUG are non-degenerate, because the only codon coding for methionine is AUG. The same goes for tryptophan's UGG.
There are three amino acids encoded by six different codons: serine, leucine, and arginine. Only two amino acids are specified by a single codon each. One of these is the amino-acid methionine, specified by the codon AUG, which also specifies the start of translation; the other is tryptophan, specified by the codon UGG.
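The following minimal sketch illustrates how the fold-degeneracy of a position can be computed from a codon table. The toy table contains only codons mentioned in this section; the full genetic code table would be needed for arbitrary codons.

```python
# A minimal sketch of n-fold degeneracy at a codon position, using a toy table.
CODON_TABLE = {
    "AUU": "Ile", "AUC": "Ile", "AUA": "Ile", "AUG": "Met",
    "GAA": "Glu", "GAG": "Glu",
}

def fold_degeneracy(codon, position, table=CODON_TABLE):
    """Count how many of the four bases at `position` (0-2) leave the amino acid unchanged."""
    target = table[codon]
    count = 0
    for base in "UCAG":
        variant = codon[:position] + base + codon[position + 1:]
        if table.get(variant) == target:
            count += 1
    return count

print(fold_degeneracy("AUU", 2))   # 3 -> the third position of an isoleucine codon is 3-fold degenerate
print(fold_degeneracy("AUG", 2))   # 1 -> methionine's AUG has a non-degenerate third position
print(fold_degeneracy("GAA", 2))   # 2 -> only GAA and GAG specify glutamic acid
```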
Implications
These properties of the genetic code make it more fault-tolerant for point mutations. For example, in theory, fourfold degenerate codons can tolerate any point mutation at the third position, although codon usage bias restricts this in practice in many organisms; at twofold degenerate sites, some point mutations at the third position are silent mutations rather than missense or nonsense mutations. Since transition mutations (purine to purine or pyrimidine to pyrimidine mutations) are more likely than transversion (purine to pyrimidine or vice versa) mutations, the equivalence of purines or that of pyrimidines at twofold degenerate sites adds a further fault-tolerance.
A practical consequence of redundancy is that some errors in the genetic code cause only a synonymous mutation, or an error that would not affect the protein because the hydrophilicity or hydrophobicity is maintained by equivalent substitution of amino acids (conservative mutation). For example, a codon of NUN (where N = any nucleotide) tends to code for hydrophobic amino acids, NCN yields amino acid residues that are small in size and moderate in hydropathy, and NAN encodes average size hydrophilic residues. These tendencies may result from the shared ancestry of the aminoacyl tRNA synthetases related to these codons.
These variable codes for amino acids are allowed because of modified bases in the first base of the anticodon of the tRNA, and the base pair formed is called a wobble base pair. The modified bases include inosine and the non-Watson-Crick U-G base pair.
See also
Neutral theory of molecular evolution
References
Molecular genetics
Gene expression
Protein biosynthesis | Codon degeneracy | [
"Chemistry",
"Biology"
] | 1,018 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
40,150,331 | https://en.wikipedia.org/wiki/Cytokine%20redundancy | Cytokine redundancy is a term in immunology referring to the phenomenon in which multiple cytokines exert similar actions, and to their ability to do so. This phenomenon is largely due to multiple cytokines utilizing common receptor subunits and common intracellular cell signalling molecules/pathways. For instance, a pair of redundant cytokines are interleukin 4 and interleukin 13.
Cytokine redundancy is associated with the term cytokine pleiotropy, which refers to the ability of cytokines to exert multiple actions.
References
Immunology | Cytokine redundancy | [
"Biology"
] | 124 | [
"Immunology"
] |
40,152,905 | https://en.wikipedia.org/wiki/Max%20Bernhard%20Weinstein | Max Bernhard Weinstein (1 September 1852 in Kaunas, Vilna Governorate – 25 March 1918) was a German physicist and philosopher. He is best known as an opponent of Albert Einstein's Theory of Relativity, and for having written a broad examination of various theological theories, including extensive discussion of pandeism.
Born into a Jewish family in Kovno (then Imperial Russia), Weinstein translated James Clerk Maxwell's Treatise on Electricity and Magnetism into German in 1883, and taught courses on electrodynamics at the University of Berlin.
While teaching at the Institute of Physics in the University of Berlin, Weinstein associated with Max Planck, Emil du Bois-Reymond, Hermann von Helmholtz, Ernst Pringsheim Sr., Wilhelm Wien, Carl A. Paalzow of the Technische Hochschule in Berlin Charlottenburg, August Kundt, Werner von Siemens, theologian Adolph von Siemens, historian Theodor Mommsen, and Germanic philologist Wilhelm Scherer.
Criticism of Einstein's theory of relativity
Weinstein was among the first physicists to reject and criticize Albert Einstein's theory of relativity, contending that "general relativity had removed gravity from its earlier isolated position and made it into a "world power" controlling all laws of nature," and warning that "physics and mathematics would have to be revised." It was Weinstein's writings, and their impact driving public sentiment against Einstein's theories, which led astronomer Wilhelm Foerster to convince Einstein to write a more accessible explanation of those ideas. But, one commentator contends that Weinstein's summaries of relativistic physics were "tedious exercises in algebra."
Weinstein argued against relativity in his book Die Physik der bewegten Materie und die Relativitätstheorie, published in 1913.
Philosophical writings
In addition to his work in physics, Weinstein wrote several philosophical works. Welt- und Lebensanschauungen, Hervorgegangen aus Religion, Philosophie und Naturerkenntnis ("World and Life Views, Emerging From Religion, Philosophy and Perception of Nature") (1910) examined the origins and development of a great many philosophical areas, including the broadest and most far-reaching examination of the theological theory of pandeism written up to that point. A critique reviewing Weinstein's work in this field deemed the term pandeism to be an 'unsightly' combination of Greek and Latin, though Weinstein did not coin the term, nor did he claim to have. The reviewer further criticises Weinstein's broad assertions that such historical philosophers as Scotus Erigena, Anselm of Canterbury, Nicholas of Cusa, Giordano Bruno, Mendelssohn, and Lessing all were pandeists or leaned towards pandeism.
Philosophically, Weinstein was attracted to what he called a psychical or spiritual monism, which he believed to be comparable to the pantheism of Spinoza, and wherein the essence of all phenomena could be found entirely in the mind. Though he could see no way around the eventual heat death of the Universe, Weinstein suggested that there existed a fundamental 'psychical energy,' of which a maximum-entropy world would ultimately consist. Weinstein wrote:
From this premise Weinstein reasoned that the world must have both a beginning and an end, and that a supernatural force must have initiated it, and so could bring about its end as well:
Though he rejected theistic formulations regarding such things, Weinstein found the origin of the Universe to be so problematic that he wrote: "As far as I can see, only Spinozist pantheism, among all philosophies, can lead to a satisfactory solution."
Works
Handbuch der physikalischen Maassbestimmungen. Zweiter Band. Einheiten und Dimensionen, Messungen für Längen, Massen, Volumina und Dichtigkeiten, Julius Springer, Berlin 1888
Die philosophischen Grundlagen der Wissenschaften. Vorlesungen gehalten an der Universität Berlin …, B. G. Teubner, Leipzig und Berlin 1906
Welt- und Lebensanschauungen hervorgegangen aus Religion, Philosophie und Naturerkenntnis, Johann Ambrosius Barth, Leipzig 1910
Die Physik der bewegten Materie und die Relativitätstheorie, Barth, Leipzig 1913
Kräfte und Spannungen. Das Gravitations- und Strahlenfeld, Friedr. Vieweg & Sohn, Braunschweig 1914
References
External links
1852 births
1918 deaths
19th-century German non-fiction writers
19th-century German philosophers
19th-century German physicists
20th-century German non-fiction writers
20th-century German philosophers
20th-century German physicists
German Jews
German male non-fiction writers
German male writers
German physicists
Academic staff of the Humboldt University of Berlin
Jewish philosophers
Jewish German physicists
Lithuanian Jews
Pantheists
German philosophers of religion
German philosophers of science
Philosophy writers
Relativity critics | Max Bernhard Weinstein | [
"Physics"
] | 1,062 | [
"Relativity critics",
"Theory of relativity"
] |
21,723,680 | https://en.wikipedia.org/wiki/Input%20offset%20voltage | The input offset voltage () is a parameter defining the differential DC voltage required between the inputs of an amplifier, especially an operational amplifier (op-amp), to make the output zero (for voltage amplifiers, 0 volts with respect to ground or between differential outputs, depending on the output type).
Details
An ideal op-amp amplifies the differential input; if this input difference is 0 volts (i.e. both inputs are at the same voltage), the output should be zero. However, due to the manufacturing process, the differential input transistors of real op-amps may not be exactly matched. This causes the output to be zero at a non-zero value of differential input, called the input offset voltage.
Typical values for are around 1 to 10 mV for cheap commercial-grade op-amp integrated circuits (IC). This can be reduced to several microvolts if nulled using the IC's offset null pins or using higher-quality or laser-trimmed devices. However, the input offset voltage value may drift with temperature or age. Chopper amplifiers actively measure and compensate for the input offset voltage, and may be used when very low offset voltages are required.
Input bias current and input offset current also affect the net offset voltage seen for a given amplifier. The voltage offset due to these currents is separate from the input offset voltage parameter and is related to the impedance of the signal source and of the feedback and input impedance networks, such as the two resistors used in the basic inverting and non-inverting amplifier configurations. FET-input op-amps tend to have lower input bias currents than bipolar-input op-amps, and hence incur less offset of this type.
Input offset voltage is symbolically represented by a voltage source that is in series with either the positive or negative input terminal (it is mathematically equivalent either way). Normally the input offset voltage is measured in terms of the input voltage that must be applied at the non-inverting terminal to make the output zero.
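As a rough numerical illustration, the sketch below shows how an input offset voltage appears at the output of a simple non-inverting amplifier, where it is multiplied by the noise gain 1 + Rf/Rg. The component values and the 2 mV offset are hypothetical, and offsets caused by input bias currents (discussed above) are ignored.

```python
# A rough numerical sketch: output error due to input offset voltage in a
# non-inverting amplifier configuration. Values are hypothetical.
Vos = 2e-3     # 2 mV input offset voltage
Rf = 100e3     # feedback resistor, 100 kOhm
Rg = 1e3       # resistor from the inverting input to ground, 1 kOhm

noise_gain = 1 + Rf / Rg
output_offset = Vos * noise_gain
print(f"output offset ~= {output_offset * 1e3:.0f} mV")   # about 202 mV
```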
References
External links
Analog Devices tutorial on op-amp input offset voltage and mitigation techniques
Electrical parameters | Input offset voltage | [
"Engineering"
] | 436 | [
"Electrical engineering",
"Electrical parameters"
] |