id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
50,564,979 | https://en.wikipedia.org/wiki/Samson%20Shatashvili | Samson Lulievich Shatashvili ( Russian: Самсон Лулиевич Шаташвили, born February 1960) is a theoretical and mathematical physicist who has been working at Trinity College Dublin, Ireland, since 2002. He holds the Trinity College Dublin Chair of Natural Philosophy and is the director of the Hamilton Mathematics Institute. He is also affiliated with the Institut des Hautes Études Scientifiques (IHÉS), where he held the Louis Michel Chair from 2003 to 2013 and the Israel Gelfand Chair from 2014 to 2019. Prior to moving to Trinity College, he was a professor of physics at Yale University from 1994.
Background
Shatashvili received his PhD in 1984 at the Steklov Institute of Mathematics in Saint Petersburg under the supervision of Ludwig Faddeev (and Vladimir Korepin). His thesis, on gauge theories, was titled "Modern Problems in Gauge Theories". In 1989 he received his D.Sc. degree (Doctor of Science, the second doctoral degree in Russia), also at the Steklov Institute of Mathematics in Saint Petersburg.
Contributions and awards
Shatashvili has made several discoveries in the fields of theoretical and mathematical physics. He is mostly known for his work with Ludwig Faddeev on quantum anomalies, with Anton Alekseev on geometric methods in two-dimensional conformal field theories, for his work on background independent open string field theory, with Cumrun Vafa on superstrings and manifolds of exceptional holonomy, with Anton Gerasimov on tachyon condensation, with Andrei Losev, Nikita Nekrasov and Greg Moore on instantons and supersymmetric gauge theories, as well as for his work with Nikita Nekrasov on quantum integrable systems. In particular, Shatashvili and Nikita Nekrasov discovered the gauge/Bethe correspondence. In 1995 he received an Outstanding Junior Investigator Award of the Department of Energy (DOE) and an NSF CAREER Award, and from 1996 to 2000 he was a Sloan Fellow. Shatashvili is a member of the Royal Irish Academy and the recipient of the 2010 Royal Irish Academy Gold Medal as well as the Ivane Javakhishvili State Medal, Georgia. In 2009 he was a plenary speaker at the International Congress on Mathematical Physics in Prague and in 2014 was an invited speaker at the International Congress of Mathematicians in Seoul (speaking on "Gauge theory angle at quantum integrability").
References
External links
Videos of Samson Shatashvili in the AV-Portal of the German National Library of Science and Technology
American mathematicians
Russian mathematicians
Soviet mathematicians
21st-century Irish mathematicians
21st-century mathematicians from Georgia (country)
Mathematical physicists
Theoretical physicists
String theorists
Academics of Trinity College Dublin
Year of birth missing (living people)
Living people
Members of the Royal Irish Academy | Samson Shatashvili | [
"Physics"
] | 585 | [
"Theoretical physics",
"Theoretical physicists"
] |
50,567,542 | https://en.wikipedia.org/wiki/NGC%205238 | NGC 5238 is an irregular galaxy in the constellation Canes Venatici. Located at a comoving distance of 4.51 Mpc, it is 64.4 arcseconds in diameter. It has sometimes been classified as a blue compact dwarf galaxy. Although some authors have hypothesized it to be a member of the M101 Group of galaxies, it is currently believed to be an isolated galaxy.
At an inclination of 39° with respect to Earth, NGC 5238 has a total mass of 117 million solar masses, with a star formation rate of 0.01 solar masses per year. Of the total mass, HI gas appears to account for 26 million solar masses.
Classification
In 1977, NGC 5238 was hypothesized to not be a single galaxy, but rather a pair of interacting galaxies. It was not until ten years later that a dedicated study of the galaxy's rotation curve was undertaken, showing that the galaxy is indeed a single galaxy. One of the two regions that was thought to be the nucleus of a galaxy was instead shown to be simply a large HII region around 100 pc in diameter.
The morphological type of NGC 5238 has been the subject of some controversy. In 1979, the galaxy was classified as a barred spiral galaxy. Soon after, in 1984, the galaxy was included in a study of blue compact dwarf galaxies, which is incompatible with the classification as a barred spiral. However, the barred spiral classification was considered correct for years. It was not until the mid 1990s that the galaxy was first recognized as a dwarf irregular galaxy. Even after this, the majority of studies treated the galaxy as a spiral galaxy until 2015, when the classification as irregular finally became widely accepted.
Appearance
As it appears to us, NGC 5238 is tilted at an inclination of 39°. This 2013 estimate follows previous estimates of 30° in 1987, 37 ± 5° in 1992, and 47° in 1999. In the Spitzer 3.6 μm band, the semimajor axis of its angular size is 64.4", with an ellipticity of 0.201.
Distance
The distance estimate to NGC 5238 has been brought down considerably since first calculated. The first published distance estimate was 7 Mpc, derived using redshift. This remained the predominant estimate until 1996, when the distance was found to be much less, estimated at 5.18 Mpc. Subsequently, using spectral data from the HI 21cm line, the distance was calculated to be 4.7 Mpc in 1999, although an updated HI study found a slightly higher value at 5.20 Mpc in 2002. Five years later, in 2007, the distance estimate was lowered even further to 4.50 Mpc, extremely close to today's accepted value.
One way to determine distance unambiguously is by standard candles. The tip of the red giant branch is such a method; the brightest red giant stars in every galaxy have nearly the same, well-calibrated luminosity. When combined with corrections for interstellar reddening, this allows for accurate determination of a galaxy's distance. By 2009, a Hubble Space Telescope image of NGC 5238 had become available, resolving the individual stars within the galaxy. Using this method, the distance modulus was calculated at 28.27 magnitudes, corresponding to a distance of 4.51 Mpc, today's accepted value.
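As a consistency check (not part of the source article), the quoted distance modulus converts to the quoted distance via the standard relation between distance modulus μ and distance d:

```latex
\mu = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\quad\Longrightarrow\quad
d = 10^{\mu/5 + 1}\ \mathrm{pc}
  = 10^{28.27/5 + 1}\ \mathrm{pc}
  \approx 4.5\times10^{6}\ \mathrm{pc}
  \approx 4.5\ \mathrm{Mpc}.
```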
Radio emission
Since a first study was published in 1986, the neutral hydrogen gas of NGC 5238 and its associated 21 cm line have been the subject of many studies. The first study calculated the total HI 21 cm flux from the galaxy to be 4.5 ± 1.0 Jy·km/s, with a full width at half maximum of 28 km/s and a maximum flux density of 0.25 ± 0.011 Jy. Two years later, the 20% line width was calculated at two conflicting values from two studies: 47 km/s and 65 km/s. From the HI line data, the total mass to HI mass ratio was calculated to be 0.384 and the pseudo HI surface density was estimated to be 9.7 solar masses per square parsec. Another two years later, another estimate for the 20% and 50% line widths was published, calculating 36 ± 4 km/s at 50% and 49 ± 4 km/s at 20%.
In 1999, the 50% line width was further refined to 32 ± 4 km/s, then 36 km/s. The second study, in addition to deriving the 5.20 Mpc distance quoted above, found a total HI mass of 4.2 × 10⁷ solar masses. Finally, in 2013, the 50% line width estimate was further increased to 40 km/s, and the HI mass was refined to 2.6 × 10⁷ solar masses, implying a total-to-HI mass ratio of 7.3.
In addition to HI gas, it is thought that radio continuum emission should be present from NGC 5238 as well. The galaxy is a strong ultraviolet emitter, indicating that the galaxy is undergoing rapid star formation. Based on this, it is to be expected that there should be radio continuum emission from the galaxy, due to the acceleration of electrons in HII regions, known as bremsstrahlung. However, such emission has not been found in NGC 5238, contradicting models. To resolve this mystery, it has been hypothesized that the star formation has subsided recently enough that the UV excess from massive stars is still present, but the hydrogen has already recombined.
Image gallery
References
External links
Irregular galaxies
Canes Venatici
5238
08565
M101 Group
Markarian galaxies | NGC 5238 | [
"Astronomy"
] | 1,155 | [
"Canes Venatici",
"Constellations"
] |
50,568,976 | https://en.wikipedia.org/wiki/Spaun%20%28Semantic%20Pointer%20Architecture%20Unified%20Network%29 | Spaun ("Semantic Pointer Architecture Unified Network") is a cognitive architecture pioneered by Chris Eliasmith of the University of Waterloo Centre for Theoretical Neuroscience. It consists of 2.5 million simulated neurons organized into subsystems that resemble specific brain regions, such as the prefrontal cortex, basal ganglia, and thalamus. It can recognize numbers, remember them, figure out numeric sequences, and even write them down with a robotic arm. It is implemented using Nengo.
References
External links
Spaun version 2.0 source code
Cognitive architecture | Spaun (Semantic Pointer Architecture Unified Network) | [
"Engineering"
] | 112 | [
"Artificial intelligence engineering",
"Cognitive architecture"
] |
50,572,027 | https://en.wikipedia.org/wiki/Nanocem | Nanocem is a consortium of academic and private industry groups, founded in 2004 and headquartered in Lausanne, Switzerland. The consortium researches the properties of cement and concrete on the nano- and micro-scales, with a particular focus on reducing carbon dioxide emissions at all stages of production. , Nanocem includes 34 organizations and supports more than 120 researchers.
Description
Nanocem is a consortium of academic and private industry groups that researches the properties of cement and concrete on the nano- and micro-scales. The research has a particular focus on reducing carbon dioxide emissions at all stages of production. The consortium is headquartered in Lausanne, Switzerland. It includes 34 organizations and supports more than 120 researchers. Some 60 doctoral and postdoctoral research projects in the area of fundamental research have been supported by Nanocem.
The research is conducted at a fundamental level, though the high level of industry involvement allows for a focus on solutions that can work in practice and not just in theory. This model of cooperation between industry and the academic community has led to the identification of common issues, shared knowledge, and clear benefits for all those involved. For instance, Nanocem has been able to help map the research needs for lower-carbon concrete. This guidance helped focus research by companies and third parties.
History
Nanocem was founded as an independent consortium in 2004 after a rejection of a 2002 bid to the Network of Excellence (European Framework Programme).
Nanocem's eleven completed core projects have included studies of interactions between admixtures and cement, concrete durability, the kinetics of cement hydration, and the use of magnetic resonance imaging techniques in concrete analysis. Recent Nanocem-sponsored projects have included the use of nanotechnology in cementitious materials, the effects of sulfate on concrete, the development of a bipolar mineral organic composite that can bond with Portland cement on one pole and polymerize with the other, and studies of cement hydration at the molecular level. Its research has led to more than one hundred published papers and conference papers. Some 120 academic researchers in the team are between them managing some 60 PhD and postdoctoral research projects in the area of fundamental research.
Participating organizations
Nanocem consists of 34 academic and private industry partners. The members of Nanocem collectively have access to a large range of state of the art equipment for the study of cementitious materials.
Academic
Aarhus University
Bauhaus-Universität Weimar
Czech Technical University in Prague
Danish Technological Institute
École polytechnique fédérale de Lausanne
ETH Zurich
Eduardo Torroja Institute for Construction Science
French Alternative Energies and Atomic Energy Commission
French institute of science and technology for transport, spatial planning, development and networks
Imperial College London
Lund University
Norwegian University of Science and Technology
Polytechnic University of Catalonia
Slovenian National Building and Civil Engineering Institute
Swiss Federal Laboratories for Materials Science and Technology
Technical University of Denmark
Technical University of Munich
University of Aberdeen
University of Burgundy
University of Leeds
University of Sheffield
University of Surrey
Vienna University of Technology
Industrial
Aalborg Portland
BASF
CHRYSO
CRH
HeidelbergCement
GCP Applied Technologies
LafargeHolcim
Siam Cement
Sika AG
Titan Cement
References
External links
List of Nanocem publications and conference papers
2004 establishments in Switzerland
Concrete | Nanocem | [
"Engineering"
] | 653 | [
"Structural engineering",
"Concrete"
] |
32,059,473 | https://en.wikipedia.org/wiki/DKH | Degrees of german carbonate hardness (°dKH or ; the dKH is from the German deutsche Karbonathärte) is a unit of water hardness, specifically for temporary or carbonate hardness. Carbonate hardness is a measure of the concentration of carbonates such as calcium carbonate (CaCO3) and magnesium carbonate (MgCO3) per volume of water. As a unit 1 dKH is the same as 1 °dH which is equal to approximately 0.1786 mmol/L or 17.86 milligrams (mg) of calcium carbonate per litre of water, i.e. 17.86 ppm.
The measurements of total hardness (German Gesamthärte (GH)) and carbonate hardness (German Karbonathärte (KH)) are sometimes stated with units dKH and dGH to differentiate them from one another, although in both cases the unit they are measured in is German degrees (°dH).
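The conversions above can be summarized in a short script; this is a sketch using the approximate constants quoted in this article, and the function names are purely illustrative.

```python
# Carbonate hardness conversions, using the approximate equivalences above:
# 1 dKH = 1 °dH ≈ 17.86 mg CaCO3 per litre (≈ 17.86 ppm) ≈ 0.1786 mmol/L.
MG_CACO3_PER_DKH = 17.86     # mg/L of CaCO3 per degree of carbonate hardness
MOLAR_MASS_CACO3 = 100.09    # g/mol (mg/L divided by this gives mmol/L)

def dkh_to_ppm(dkh: float) -> float:
    """Degrees of carbonate hardness -> CaCO3 concentration in ppm (mg/L)."""
    return dkh * MG_CACO3_PER_DKH

def dkh_to_mmol_per_litre(dkh: float) -> float:
    """Degrees of carbonate hardness -> CaCO3 concentration in mmol/L."""
    return dkh_to_ppm(dkh) / MOLAR_MASS_CACO3

print(dkh_to_ppm(4.0))                       # 71.44 ppm for 4 dKH
print(round(dkh_to_mmol_per_litre(1.0), 4))  # ≈ 0.1784 mmol/L for 1 dKH
```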
See also
Carbonate hardness
Hard water
dGH
External links
Water Hardness definitions
Convertor for Hardness of water
What is Temporary Hardness
Water chemistry
Units of chemical measurement
Water quality indicators | DKH | [
"Chemistry",
"Mathematics",
"Environmental_science"
] | 226 | [
"Quantity",
"Chemical quantities",
"Water pollution",
"Water quality indicators",
"nan",
"Units of chemical measurement",
"Units of measurement"
] |
32,065,903 | https://en.wikipedia.org/wiki/Bacterial%20genetics | Bacterial genetics is the subfield of genetics devoted to the study of bacterial genes. Bacterial genetics are subtly different from eukaryotic genetics, however bacteria still serve as a good model for animal genetic studies. One of the major distinctions between bacterial and eukaryotic genetics stems from the bacteria's lack of membrane-bound organelles (this is true of all prokaryotes. While it is a fact that there are prokaryotic organelles, they are never bound by a lipid membrane, but by a shell of proteins), necessitating protein synthesis occur in the cytoplasm.
Like other organisms, bacteria breed true and maintain their characteristics from generation to generation, yet at the same time exhibit variations in particular properties in a small proportion of their progeny. Though heritability and variation in bacteria had been noticed from the early days of bacteriology, it was not realised then that bacteria too obey the laws of genetics. Even the existence of a bacterial nucleus was a subject of controversy. The differences in morphology and other properties were attributed by Nägeli in 1877 to bacterial pleomorphism, which postulated the existence of a single species, or a few species, of bacteria possessing a protean capacity for variation. With the development and application of precise methods of pure culture, it became apparent that different types of bacteria retained constant form and function through successive generations. This led to the concept of monomorphism.
Transformation
Transformation in bacteria was first observed in 1928 by Frederick Griffith and later (in 1944) examined at the molecular level by Oswald Avery and his colleagues, who used the process to demonstrate that DNA was the genetic material of bacteria. In transformation, a cell takes up extraneous DNA found in the environment and incorporates it into its genome (genetic material) through recombination. Not all bacteria are competent to be transformed, and not all extracellular DNA is competent to transform. To be competent to transform, the extracellular DNA must be double-stranded and relatively large. To be competent to be transformed, a cell must have the surface protein 'competence factor', which binds to the extracellular DNA in an energy-requiring reaction. However, bacteria that are not naturally competent can be treated in such a way as to make them competent, usually by treatment with calcium chloride, which makes them more permeable.
Bacterial conjugation
Bacterial conjugation is the transfer of genetic material (plasmid) between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. Discovered in 1946 by Joshua Lederberg and Edward Tatum, conjugation is a mechanism of horizontal gene transfer as are transformation and transduction although these two other mechanisms do not involve cell-to-cell contact.
Bacterial conjugation is often regarded as the bacterial equivalent of sexual reproduction or mating since it involves the exchange of genetic material. During conjugation the donor cell provides a conjugative or mobilizable genetic element that is most often a plasmid or transposon.[4][5] Most conjugative plasmids have systems ensuring that the recipient cell does not already contain a similar element.
The genetic information transferred is often beneficial to the recipient. Benefits may include antibiotic resistance, xenobiotic tolerance or the ability to use new metabolites.[6] Such beneficial plasmids may be considered bacterial endosymbionts. Other elements, however, may be viewed as bacterial parasites and conjugation as a mechanism evolved by them to allow for their spread.
See also
Microbial genetics
Ebola virus genetics
References | Bacterial genetics | [
"Biology"
] | 733 | [
"Bacterial genetics",
"Genetics by type of organism",
"Bacteria"
] |
32,067,452 | https://en.wikipedia.org/wiki/Pickands%E2%80%93Balkema%E2%80%93De%20Haan%20theorem | The Pickands–Balkema–De Haan theorem gives the asymptotic tail distribution of a random variable, when its true distribution is unknown. It is often called the second theorem in extreme value theory. Unlike the first theorem (the Fisher–Tippett–Gnedenko theorem), which concerns the maximum of a sample, the Pickands–Balkema–De Haan theorem describes the values above a threshold.
The theorem owes its name to mathematicians James Pickands, Guus Balkema, and Laurens de Haan.
Conditional excess distribution function
For an unknown distribution function $F$ of a random variable $X$, the Pickands–Balkema–De Haan theorem describes the conditional distribution function $F_u$ of the variable $X$ above a certain threshold $u$. This is the so-called conditional excess distribution function, defined as

$$F_u(y) = P(X - u \le y \mid X > u) = \frac{F(u + y) - F(u)}{1 - F(u)}$$

for $0 \le y \le x_F - u$, where $x_F$ is either the finite or infinite right endpoint of the underlying distribution $F$. The function $F_u$ describes the distribution of the excess value over a threshold $u$, given that the threshold is exceeded.
Statement
Let $F_u$ be the conditional excess distribution function. Pickands, Balkema and De Haan posed that for a large class of underlying distribution functions $F$, and large $u$, $F_u$ is well approximated by the generalized Pareto distribution, in the following sense. Suppose that there exist functions $a(u) > 0$ and $b(u)$ such that, as $u \to x_F$, the rescaled distributions $F_u(a(u)\,y + b(u))$ converge to a non-degenerate distribution; then that limit is equal to the generalized Pareto distribution:

$$F_u(y) \to G_{k,\sigma}(y), \qquad u \to x_F,$$

where

$$G_{k,\sigma}(y) = 1 - \left(1 + \frac{k y}{\sigma}\right)^{-1/k}, \quad \text{if } k \neq 0,$$

$$G_{k,\sigma}(y) = 1 - e^{-y/\sigma}, \quad \text{if } k = 0.$$
Here σ > 0, and y ≥ 0 when k ≥ 0 and 0 ≤ y ≤ −σ/k when k < 0. These special cases are also known as
Exponential distribution with mean σ, if k = 0,
Uniform distribution on [0, σ], if k = −1,
Pareto distribution, if k > 0.
The class of underlying distribution functions are related to the class of the distribution functions satisfying the Fisher–Tippett–Gnedenko theorem.
Since a special case of the generalized Pareto distribution is a power-law, the Pickands–Balkema–De Haan theorem is sometimes used to justify the use of a power-law for modeling extreme events.
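The peaks-over-threshold approach that this theorem justifies can be sketched numerically. The sketch below fits a generalized Pareto distribution to excesses over a high threshold using SciPy; the sample data, the 95th-percentile threshold, and the probed level are illustrative assumptions, not part of the theorem.

```python
# Peaks-over-threshold sketch: approximate the tail of an unknown distribution
# by fitting a generalized Pareto distribution (GPD) to excesses over u.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=100_000)   # heavy-tailed sample, "unknown" tail

u = np.quantile(x, 0.95)                 # threshold (illustrative choice)
excesses = x[x > u] - u                  # y = X - u, given X > u

# scipy's genpareto shape parameter c plays the role of k above; sigma is the scale
c, loc, sigma = stats.genpareto.fit(excesses, floc=0.0)
print(f"fitted shape k ~ {c:.3f}, scale sigma ~ {sigma:.3f}")

# tail-probability estimate for a level z above the threshold
z = u + 3.0
p_tail = (excesses.size / x.size) * stats.genpareto.sf(z - u, c, loc=0.0, scale=sigma)
print(f"estimated P(X > {z:.2f}) ~ {p_tail:.2e}")
```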
The theorem has been extended to include a wider range of distributions. While the extended versions cover, for example, the normal and log-normal distributions, there still exist continuous distributions that are not covered.
See also
Stable distribution
References
Probability theorems
Extreme value data
Tails of probability distributions | Pickands–Balkema–De Haan theorem | [
"Mathematics"
] | 480 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
32,068,219 | https://en.wikipedia.org/wiki/Nottingham%20Asphalt%20Tester | The Nottingham Asphalt Tester (NAT) is equipment used for rapid determination of modulus, permanent deformation and fatigue of bituminous mixtures. It uses cylindrical specimens that are cored from the highway or prepared in laboratory.
These mechanical properties are essential to people involved in the production of roads and the development of materials used in road construction. NATs are used across the world by materials testing laboratories, universities, oil companies, regional laboratories, contractors and consulting engineers.
The NAT was invented in the 1980s at the University of Nottingham by Keith Cooper, who later founded Cooper Research Technology Ltd.
References
Asphalt
Construction equipment
Materials testing
Pavement engineering
Pavements
University of Nottingham | Nottingham Asphalt Tester | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 132 | [
"Construction equipment",
"Unsolved problems in physics",
"Materials science",
"Construction",
"Materials testing",
"Civil engineering",
"Chemical mixtures",
"Civil engineering stubs",
"Asphalt",
"Amorphous solids",
"Industrial machinery"
] |
32,070,490 | https://en.wikipedia.org/wiki/AEC%20Daily | AEC Daily is an e-learning platform for architects, engineers, and other construction professionals. AEC Daily works with building product manufacturers to offer continuing education courses in a variety of subjects. AEC Daily Inc. is based in Newmarket, Ontario, Canada and AEC Daily Corporation is based in Buffalo, New York United States.
History
AEC Daily was founded by Jeff Rice and Stéphane Deschênes in 2001. They are still the owners of the company.
Continuing Education
Online education courses are the main field of AEC Daily. They provide over 1000 courses for construction professionals, architects, engineers, designers and landscapers. Most courses are free because they are sponsored by building product manufacturers such as Kohler Kitchen and Bath, Makita, and Wine Cellar Innovations.
Building Products/Services
The Building Products/Services directory is administered by AEC Daily and comprises several thousand companies. It is subdivided into the main categories Software, Hardware, Support Services, Firms, and Associations.
Associations
AEC Daily works closely with many different associations to offer only courses that meet the requirements or have been reviewed and approved by them. Some of these associations include:
In the United States: American Institute of Architects, U.S. Green Building Council, Construction Specifications Institute, Society of American Registered Architects.
In Australia: Australian Institute of Landscape Architects, Building Designers Association of Australia.
In Canada: Royal Architectural Institute of Canada.
In Europe: Association of Architects of Milan, Royal Institute of British Architects, Netherlands Architecture Institute.
References
Architectural education
Education companies established in 2001
Companies based in Newmarket, Ontario | AEC Daily | [
"Engineering"
] | 316 | [
"Architectural education",
"Architecture"
] |
26,425,029 | https://en.wikipedia.org/wiki/DOCK | The program UCSF DOCK was created in the 1980s by Irwin "Tack" Kuntz's Group, and was the first docking program. DOCK uses geometric algorithms to predict the binding modes of small molecules. Brian K. Shoichet, David A. Case, and Robert C.Rizzo are codevelopers of DOCK.
Two versions of the docking program are actively developed: DOCK 6 and DOCK 3.
Ligand sampling methods used by the program DOCK include:
Rigid docking: shape matching, which places spheres in the binding pocket and performs bipartite matching between those spheres and the atoms of the molecule (all versions); a toy sketch of this idea is shown below.
Ligand flexibility is accounted for using the following methods: an algorithm called anchor-and-grow (v4–v6), and hierarchical docking of databases (v3.5–3.7).
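The following toy sketch conveys the flavor of the sphere-matching idea referenced above; it is not DOCK's actual implementation, and the coordinates and tolerance are invented for illustration. A pairing of pocket spheres to ligand atoms is accepted when the internal sphere–sphere distances agree with the corresponding atom–atom distances.

```python
# Toy illustration of geometric shape matching: find a subset of ligand atoms
# whose internal distances match the internal distances of the pocket spheres.
import itertools
import numpy as np

spheres = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
atoms = np.array([[10.0, 1.0, 1.0], [13.1, 1.0, 1.0],
                  [10.0, 4.9, 1.0], [20.0, 20.0, 20.0]])
TOL = 0.5  # maximum allowed distance mismatch, in angstroms (illustrative)

def internal_distances(points: np.ndarray) -> np.ndarray:
    """Pairwise distance matrix of a set of 3D points."""
    return np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

d_sph = internal_distances(spheres)
best = None
for subset in itertools.permutations(range(len(atoms)), len(spheres)):
    d_atm = internal_distances(atoms[list(subset)])
    err = np.abs(d_sph - d_atm).max()
    if err < TOL and (best is None or err < best[0]):
        best = (err, subset)

print(best)   # (max distance mismatch, indices of matched atoms), or None
```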
A molecular dynamics engine was implemented into DOCK v6 by David A. Case's Group in the scoring function AMBER score. This ability accounts for receptor flexibility and allows for rank ordering by energetic ensembles in the docking calculations.
See also
AutoDock
Molecular modelling
Comparison of software for molecular mechanics modeling
References
External links
Molecular modelling software
Molecular modelling | DOCK | [
"Chemistry"
] | 226 | [
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Theoretical chemistry",
"Molecular modelling"
] |
26,427,866 | https://en.wikipedia.org/wiki/Process%20variation%20%28semiconductor%29 | Process variation is the naturally occurring variation in the attributes of transistors (length, widths, oxide thickness) when integrated circuits are fabricated. The amount of process variation becomes particularly pronounced at smaller process nodes (<65 nm) as the variation becomes a larger percentage of the full length or width of the device and as feature sizes approach the fundamental dimensions such as the size of atoms and the wavelength of usable light for patterning lithography masks.
Process variation causes measurable and predictable variance in the output performance of all circuits but particularly analog circuits due to mismatch. If the variance causes the measured or simulated performance of a particular output metric (bandwidth, gain, rise time, etc.) to fall below or rise above the specification for the particular circuit or device, it reduces the overall yield for that set of devices.
History
The first mention of variation in semiconductors was by William Shockley, the co-inventor of the transistor, in his 1961 analysis of junction breakdown.
An analysis of systematic variation was performed by Schemmert and Zimmer in 1974 with their paper on threshold-voltage sensitivity. This research looked into the effect that the oxide thickness and implantation energy had on the threshold voltage of MOS devices.
Sources of variations include:
gate oxide thickness,
random dopant fluctuations, and
device geometry and lithography in nanometer region.
Characterization
Semiconductor foundries run analyses on the variability of attributes of transistors (length, width, oxide thickness, etc.) for each new process node. These measurements are recorded and provided to customers such as fabless semiconductor companies. This set of files are generally referred to as "model files" in the industry and are used by EDA tools for simulation of designs.
FEOL
Typically, process models (for example, HSPICE models) include process corners based on Front End Of Line (FEOL) conditions. These are often centered at a typical or nominal point and also contain Fast and Slow corners, often separated into N-type and P-type corners that affect the non-linear active N+ / P+ devices in different ways. Examples are TT for nominal N+ and P+ transistors, FF for fast N+ and P+ transistors, FS for fast N+ and slow P+ transistors, etc.
BEOL
When modeling the parasitic wiring, an orthogonal set of process corners is often supplied with the parasitic extraction deck (for example, a STAR-RC extraction deck). These corners are usually listed as Typical/Nominal for the target value, plus Cbest/Cworst corners for the variations in conductor thickness, conductor width, and conductor oxide thickness that result in the least/most capacitance on the wiring. Often additional corners called RCbest and RCworst are supplied, which pick the conductor thickness and width that result in the best (lowest) and worst (highest) wiring resistance, and then add the oxide thickness giving the best (lowest) and worst (highest) capacitance, as this value is not directly correlated with wiring resistance.
Workarounds & Solutions
Statistical Analysis
Designers using this approach run from tens to thousands of simulations to analyze how the outputs of the circuit will behave given the measured variability of the transistors for that particular process. The measured characteristics of the transistors are recorded in the model files given to designers, who use them to simulate their circuits before fabrication.
The most basic approach used by designers is increasing the size of devices which are sensitive to mismatch.
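A minimal Monte Carlo sketch of the statistical approach is shown below; the "circuit" is a toy RC low-pass stage, and the sigma values and bandwidth specification are illustrative assumptions rather than real foundry model-file data.

```python
# Monte Carlo sketch: sample device parameters from assumed process spreads,
# evaluate an output metric (bandwidth) per sample, and estimate yield.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

R_nom, C_nom = 1e3, 1e-12                      # 1 kOhm, 1 pF nominal values
R = rng.normal(R_nom, 0.05 * R_nom, N)         # assumed 5 % sigma on resistance
C = rng.normal(C_nom, 0.08 * C_nom, N)         # assumed 8 % sigma on capacitance

bandwidth = 1.0 / (2 * np.pi * R * C)          # -3 dB bandwidth of each sample

spec = 120e6                                   # assumed minimum bandwidth spec (Hz)
yield_fraction = np.mean(bandwidth >= spec)
print(f"mean BW = {bandwidth.mean() / 1e6:.1f} MHz, yield = {yield_fraction:.1%}")
```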
Topology Optimization
This is used to reduce variation due to polishing, etc.
Patterning Techniques
To reduce roughness of line edges, advanced lithography techniques are used.
See also
Semiconductor fabrication
Transistor models
References
External links
CMOS process variations: are they inevitable, or a symptom or immaturity?
Process Variations: A Critical Operation Point Hypothesis
Semiconductor device fabrication | Process variation (semiconductor) | [
"Materials_science"
] | 783 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
26,434,013 | https://en.wikipedia.org/wiki/Superstripes | Superstripes is a generic name for a phase with spatial broken symmetry that favors the onset of superconducting or superfluid
quantum order. This scenario emerged in the 1990s when non-homogeneous metallic heterostructures at the atomic limit with a broken spatial symmetry have been found to favor superconductivity. Before a broken spatial symmetry was expected to compete and suppress the superconducting order. The driving mechanism for the amplification of the superconductivity critical temperature in superstripes matter has been proposed to be the shape resonance in the energy gap parameters ∆n that is a type of Fano resonance for coexisting condensates.
The superstripes show multigap superconductivity near a 2.5 Lifshitz transition, where the renormalization of the chemical potential at the metal-to-superconductor transition is not negligible and a self-consistent solution of the gap equations is required. The superstripes lattice scenario is made of puddles of multigap superstripes matter forming a superconducting network where the gaps differ not only in different portions of k-space but also in different portions of real space, with a complex scale-free distribution of Josephson junctions.
History
The term superstripes was introduced in 2000 at the international conference on "Stripes and High Tc Superconductivity" held in Rome to describe the particular phase of matter where a broken symmetry appearing at a transition from a phase with higher dimensionality N (3D or 2D) to a phase with lower dimensionality N-1 (2D or 1D) favors the superconducting or superfluid phase and it could increase the normal to superconducting transition temperature with the possible emergence of high-temperature superconductivity. The term superstripes scenario was introduced to make the key difference with the stripes scenario where the phase transition from a phase with higher dimensionality N (like a 2D electron gas) to the phase with broken symmetry and lower dimensionality (like a quasi 1D striped fluid) competes and suppresses the transition temperature to the superfluid phase and favors modulated striped magnetic ordering. In the broken symmetry of superstripes phase the structural modulation coexists and favors high-temperature superconductivity.
Heterostructures at atomic limit
The prediction of high-temperature superconductivity transition temperatures is rightly considered to be one of the most difficult problems in theoretical physics. The problem remained elusive for many years since these materials generally have a very complex structure, making theoretical modelling for a homogeneous system of little use. The advances in experimental investigation of local lattice fluctuations have driven the community to the conclusion that it is a problem of quantum physics in complex matter. A growing paradigm for high-temperature superconductivity in superstripes is that a key term is the quantum interference effect between pairing channels, i.e., a resonance in the exchange-like, Josephson-like pair transfer term between different condensates. The quantum configuration interaction between different pairing channels is a particular case of shape resonance belonging to the group of Fano–Feshbach resonances in atomic and nuclear physics. The critical temperature shows a suppression, due to a Fano antiresonance, when the chemical potential is tuned at a band edge where a new Fermi surface spot appears, i.e., an "electronic topological transition" (ETT), or 2.5 Lifshitz transition, or a metal-to-metal topological transition. The Tc amplification is switched on when the chemical potential is tuned above the band edge in an energy region away from the band edge of the order of 1 or 2 times the energy cutoff of the pairing interaction. The Tc is further amplified at the shape resonance if in this range the Fermi surface of the appearing Fermi surface spot changes its dimensionality (for example the Lifshitz transition for opening a neck in a tubular Fermi surface).
The tuning of the chemical potential at the shape resonance can be obtained by changing: the charge density and/or the superlattice structural parameters, and/or the superlattice misfit strain and/or the disorder. Direct evidence for shape resonances in superstripes matter is provided by the anomalous variation of the isotope effect on the critical temperature by tuning the chemical potential.
Materials
It was known that the high-temperature cuprate superconductors have a complex lattice structure. In 1993 it was proposed that these materials belong to a particular class of materials called heterostructures at atomic limit made of a superlattice of superconducting atomic layers intercalated by a different material with the role of spacer.
All new high-temperature superconducting materials discovered in the years 2001–2013 are heterostructures at the atomic limit made of active atomic layers: honeycomb boron layers in diborides, graphene in intercalated graphite, CoO2 atomic bcc monolayers in cobaltates, FeAs atomic fluorite monolayers in pnictides, and FeSe atomic fluorite monolayers in selenides.
In these materials the joint effect of (a) increasing the lattice misfit strain to a critical value, and (b) tuning the chemical potential near a Lifshitz transition in presence of electron-electron interactions induce a lattice instability with formation of the network of superconducting striped puddles in an insulating or metallic background.
This complex scenario has been called the "superstripes scenario", in which the 2D atomic layers show functional lattice inhomogeneities: "ripples puddles" of local lattice distortion have been observed in La2CuO4+y and in Bi2212; striped puddles of ordered dopants in the spacer layers have been seen in superoxygenated La2CuO4 and in YBaCuO. The network of superconducting striped puddles has also been found in MFeAs pnictides and recently in KFeSe selenides.
Self-organization of lattice defects can be controlled by strain engineering and photoinduced effects.
Bose–Einstein condensates
Superstripes (also called stripe phase) can also form in Bose–Einstein condensates (BEC) with spin–orbit coupling. The spin–orbit coupling is achieved by selecting 2 spin states from the manifold of hyperfine states to couple with a two photon process. For weak coupling, the resulting Hamiltonian has a spectrum with a double degenerate ground state in the first band. In this regime, the single particle dispersion relation can host a BEC in each minima. The result is that the BEC has 2 momentum components which can interfere in real space. The interference pattern will appear as fringes in the density of the BEC. The periodicity of the fringes is a result of the Raman coupling beam wavelength modified by the coupling strength and by interactions within the BEC. Spin orbit coupling breaks the gauge symmetry of the system and the time reversal symmetry. The formation of the stripes breaks a continuous translational symmetry.
Recent efforts have attempted to observe the stripe phase in a rubidium-87 BEC; however, the stripes were too small and of too low contrast to be detected.
In 2017, two research groups from ETH Zurich and from MIT reported on the first creation of a supersolid with ultracold quantum gases. The MIT group exposed a Bose-Einstein condensate in a double-well potential to light beams that created an effective spin-orbit coupling. The interference between the atoms on the two spin-orbit coupled lattice sites gave rise to a density modulation that establishes a stripe phase with supersolid properties.
References
External links
Superstripes 2008
Superstripes 2010
Superstripes web page
High-temperature superconductors
Quantum phases | Superstripes | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,627 | [
"Quantum phases",
"Phases of matter",
"Quantum mechanics",
"Condensed matter physics",
"Matter"
] |
26,434,974 | https://en.wikipedia.org/wiki/Interatomic%20Coulombic%20decay | Interatomic Coulombic decay (ICD) is a general, fundamental property of atoms and molecules that have neighbors. Interatomic (intermolecular) Coulombic decay is a very efficient interatomic (intermolecular) relaxation process of an electronically excited atom or molecule embedded in an environment. Without the environment the process cannot take place. Until now it has been mainly demonstrated for atomic and molecular clusters, independently of whether they are of van-der-Waals or hydrogen bonded type.
The nature of the process can be depicted as follows: Consider a cluster with two subunits, A and B. Suppose an inner-valence electron is removed from subunit A. If the resulting (ionized) state is higher in energy than the double ionization threshold of subunit A then an intraatomic (intramolecular) process (autoionization, in the case of core ionization Auger decay) sets in. Even though the excitation is energetically not higher than the double ionization threshold of subunit A itself, it may be higher than the double ionization threshold of the cluster which is lowered due to charge separation. If this is the case, an interatomic (intermolecular) process sets in which is called ICD. During the ICD the excess energy of subunit A is used to remove (due to electronic correlation) an outer-valence electron from subunit B. As a result, a doubly ionized cluster is formed with a single positive charge on A and B. Thus, charge separation in the final state is a fingerprint of ICD. As a consequence of the charge separation the cluster typically breaks apart via Coulomb explosion.
ICD is characterized by its decay rate or the lifetime of the excited state. The decay rate depends on the interatomic (intermolecular) distance of A and B and its dependence allows to draw conclusions on the mechanism of ICD. Particularly important is the determination of the kinetic energy spectrum of the electron emitted from subunit B which is denoted as ICD electron. ICD electrons are often measured in ICD experiments. Typically, ICD takes place on the femto second time scale, many orders of magnitude faster than those of the competing photon emission and other relaxation processes.
ICD in water
Very recently, ICD has been identified to be an additional source of low energy electrons in water and water clusters. There, ICD is faster than the competing proton transfer that is usually the prominent pathway in the case of electronic excitation of water clusters.
The response of condensed water to electronic excitations is of utmost importance for biological systems. For instance, it was shown in experiments that low energy electrons do affect constituents of DNA effectively. Furthermore, ICD was reported after core-electron excitations of hydroxide in dissolved water.
Related processes
Interatomic (Intermolecular) processes do not only occur after ionization as described above. Independent of what kind of electronic excitation is at hand, an interatomic (intermolecular) process can set in if an atom or molecule is in a state energetically higher than the ionization threshold of other atoms or molecules in the neighborhood. The following ICD related processes, which were for convenience considered below for clusters, are known:
Resonant Interatomic Coulombic Decay (RICD) was first validated experimentally. This process emanates from an inner-valence excitation in which an inner-valence electron is promoted to a virtual orbital. During the process the vacant inner-valence spot is filled by an outer-valence electron of the same subunit or by the electron in the virtual orbital. The subsequent step is referred to as RICD if the excess energy generated in the previous process removes an outer-valence electron from another cluster constituent. The excess energy can, on the other hand, also be used to remove an outer-valence electron from the same subunit (autoionization). Consequently, RICD competes not only with slow radiative decay, as ICD does, but also with the efficient autoionization. Both experimental and theoretical evidence show that this competition does not lead to a suppression of the RICD.
The Auger–ICD cascade was first predicted theoretically. States with a vacancy in a core shell usually undergo Auger decay. This decay often produces doubly ionized states, which can sometimes decay by another Auger decay, forming a so-called Auger cascade. However, the doubly ionized state is often not high enough in energy to decay intraatomically once more. Under such conditions, formation of a decay cascade is impossible in the isolated species, but it can occur in clusters with the next step being ICD. Meanwhile, the Auger–ICD cascade has been confirmed and studied experimentally.
Excitation–transfer–ionization (ETI) is a non-radiative decay pathway of outer-valence excitations in an environment. Assume that an outer-valence electron of a cluster subunit is promoted to a virtual orbital. On the isolated species this excitation can usually only decay slowly by photon emission. In the cluster there is an additional, much more efficient pathway if the ionization threshold of another cluster constituent is lower than the excitation energy. Then the excess energy of the excitation is transferred interatomically (intermolecularly) to remove an outer-valence electron from another cluster subunit with an ionization threshold lower than the excitation energy. Usually, this interatomic (intermolecular) process also takes place within a few femtoseconds.
Electron-transfer-mediated decay (ETMD) is a non-radiative decay pathway where a vacancy in an atom or molecule is filled by an electron from a neighboring species; a secondary electron is emitted either by the first atom/molecule or by the neighboring species. The existence of this decay mechanism has been proven experimentally in Argon dimers and in mixed Argon – Krypton clusters.
References
External links
Bibliography of ICD and related phenomena
Quantum mechanics
Atomic physics
Molecular physics
Quantum chemistry | Interatomic Coulombic decay | [
"Physics",
"Chemistry"
] | 1,261 | [
"Molecular physics",
"Quantum chemistry",
"Theoretical physics",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic physics",
"nan",
"Atomic",
" and optical physics"
] |
40,469,808 | https://en.wikipedia.org/wiki/The%20Physical%20Principles%20of%20the%20Quantum%20Theory | The Physical Principles of the Quantum Theory ( publisher: S. Hirzel Verlag, 1930) by Nobel laureate (1932) Werner Heisenberg and subsequently translated by Carl Eckart and Frank C. Hoyt. The book was first published in 1930 by University of Chicago Press. Then in 1949, according to its copyright page, Dover Publications reprinted the "unabridged and unaltered" 1930's version.
The book is a collection of Heisenberg's 1929 university lectures, but with more detailed mathematics. The book discusses quantum mechanics, and one 1931 review states that it is a "less technical and less involved account of the theor[y]". The book has been cited more than 2,000 times.
In the book, after briefly discussing various theories, including quantum theory, Heisenberg discusses the basis for the fundamental concepts of quantum theory. Also by this time Heisenberg has stated, "the interaction between observer and object causes uncontrollable and large changes in the [atomic] system being observed...". In this work Heisenberg also discusses his uncertainty principle or uncertainty relations.
About the author
Werner Heisenberg (b. 1901 - d. 1976) was a renowned German theoretical physicist whose work pioneered and advanced quantum mechanics. He received his PhD in 1923 from Ludwig Maximilian University of Munich under Arnold Sommerfeld. He was awarded the 1932 Nobel Prize in Physics "for the creation of quantum mechanics, the application of which has led to the discovery of the allotropic forms of hydrogen".
References
External links
The Physical Principles Of The Quantum Theory. by Werner Heisenberg. Archive.org
1930 non-fiction books
1930 in science
Historical physics publications
Quantum mechanics
University of Chicago Press books | The Physical Principles of the Quantum Theory | [
"Physics"
] | 352 | [
"Quantum mechanics",
"Works about quantum mechanics"
] |
40,470,121 | https://en.wikipedia.org/wiki/Turpinite | Turpinite, also called Turpenite, is a fictional war gas allegedly developed by the French chemist Eugène Turpin and deployed against the attacking German army during the first months of World War I.
According to contemporary accounts, Turpinite, delivered by artillery shells, silently and suddenly killed with its fumes any person within a certain distance of the point of impact. Survivors of Turpinite barrages reported a strong chemical smell after an attack. In reality, this smell was a side effect of the explosives used by the French and British militaries during the war. The widespread, sudden deaths caused by artillery were in many cases caused by concussion, which leaves no mark on the victim.
After the war, German scientist Fritz Haber, who pioneered German gas attacks at the Second Battle of Ypres, said German soldiers had reported a strong chemical smell attributed to turpenite. Haber and others investigated, finding the smell was due to incomplete combustion of the picric acid used in British artillery shells. The belief that the French used chemical weapons in 1914 may have contributed to later German use of such weapons.
Bibliography
Max Hastings, Catastrophe 1914: Europe Goes to War. London: Knopf, 24 September 2013, 640 pp.
References
World War I chemical weapons
Fictional weapons | Turpinite | [
"Chemistry"
] | 262 | [
"World War I chemical weapons",
"Chemical weapons"
] |
40,471,305 | https://en.wikipedia.org/wiki/MG-RAST | MG-RAST, an open-source web application server, facilitates automatic phylogenetic and functional analysis of metagenomes. It stands as one of the largest repositories for metagenomic data, employing the acronym for Metagenomic Rapid Annotations using Subsystems Technology (MG-RAST). This platform utilizes a pipeline that automatically assigns functions to metagenomic sequences, conducting sequence comparisons at both nucleotide and amino acid levels. Users benefit from phylogenetic and functional insights into the analyzed metagenomes, along with tools for comparing different datasets. MG-RAST also offers a RESTful API for programmatic access.
Argonne National Laboratory, operated by the University of Chicago, created and maintains the server. As of December 29, 2016, MG-RAST had analyzed 60 terabase-pairs of data from over 150,000 datasets, of which more than 23,000 are publicly available. Computational resources are currently sourced from the DOE Magellan cloud at Argonne National Laboratory, Amazon EC2 Web services, and various traditional clusters.
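As a rough sketch of the programmatic access mentioned above, the snippet below issues a plain HTTP GET with the requests library. The base URL, resource path, query parameter, metagenome identifier, and response fields are assumptions made for illustration; the MG-RAST API documentation should be consulted for the actual routes.

```python
# Hypothetical sketch of querying the MG-RAST RESTful API over HTTP.
import requests

BASE = "https://api.mg-rast.org"        # assumed API base URL
metagenome_id = "mgm4447943.3"          # hypothetical public metagenome ID

resp = requests.get(f"{BASE}/metagenome/{metagenome_id}",
                    params={"verbosity": "minimal"})     # assumed parameter
resp.raise_for_status()
record = resp.json()
print(record.get("name"), record.get("sequence_type"))   # assumed fields
```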
Background
MG-RAST was developed to serve as a free, public resource dedicated to the analysis and storage of metagenome sequence data. It addresses a key bottleneck in metagenome analysis by eliminating the dependence on high-performance computing for annotating data.
The significance of MG-RAST becomes evident in metagenomic and metatranscriptomic studies, where the processing of large datasets often requires computationally intensive analyses. With the substantial reduction in sequencing costs in recent years, scientists can generate vast amounts of data. However, the limiting factor has shifted to computing costs. For example, a recent University of Maryland study estimated a cost exceeding $5 million per terabase using their CLOVR metagenome analysis pipeline. As sequence datasets' size and number continue to grow, the associated analysis costs are expected to rise.
Beyond analysis, MG-RAST functions as a repository tool for metagenomic data. Metadata collection and interpretation are crucial for genomic and metagenomic studies. MG-RAST addresses challenges related to the exchange, curation, and distribution of this information. The system has embraced minimal checklist standards and biome-specific environmental packages established by the Genomics Standards Consortium. Furthermore, MG-RAST provides a user-friendly uploader for capturing metadata at the time of data submission.
Pipeline for metagenomic data analysis
The MG-RAST application provides a comprehensive suite of services, including automated quality control, annotation, comparative analysis, and archiving for metagenomic and amplicon sequences. It utilizes a combination of various bioinformatics tools to achieve these functionalities. Originally designed for metagenomic data analysis, MG-RAST also extends support to amplicon sequences (16S, 18S, and ITS) and metatranscriptome (RNA-seq) sequences processing. However, it's important to note that MG-RAST currently lacks the capability to predict coding regions from eukaryotes, limiting its utility for eukaryotic metagenome analysis.
The MG-RAST pipeline can be segmented into five distinct stages:
Data hygiene
The MG-RAST pipeline incorporates a series of steps for quality control and artifacts removal, ensuring robust processing of metagenomic and metatranscriptome datasets. The initial stage involves trimming low-quality regions using SolexaQA and eliminating reads with inappropriate lengths. In the case of metagenome and metatranscriptome datasets, a dereplication step is introduced to enhance data processing efficiency.
The subsequent step employs DRISEE (Duplicate Read Inferred Sequencing Error Estimation) to evaluate sample sequencing errors by measuring Artificial Duplicate Reads (ADRs). This assessment contributes to enhancing the accuracy of downstream analyses.
Finally, the pipeline offers the option to screen reads using the Bowtie aligner. It identifies and removes reads that exhibit matches close to the genomes of model organisms, including fly, mouse, cow, and human. This step aids in refining the dataset by filtering out reads associated with potential contaminants or unintended sequences.
Feature extraction
In the gene identification process, MG-RAST employs a machine learning approach known as FragGeneScan. This method is utilized to identify gene sequences within the metagenomic or metatranscriptomic data.
For the identification of ribosomal RNA sequences, MG-RAST initiates a BLAT search against a reduced version of the SILVA database. This step allows the system to pinpoint and categorize ribosomal RNA sequences within the dataset, contributing to a more detailed understanding of the biological composition of the analyzed metagenomes or metatranscriptomes.
Feature annotation
To identify the putative functions and annotations of the genes, MG-RAST follows a multi-step process. Initially, it builds clusters of proteins at a 90% identity level using the UCLUST implementation in QIIME. The longest sequence within each cluster is then selected for further analysis.
For the similarity analysis, MG-RAST employs sBLAT, a parallelized version of the BLAT algorithm using OpenMP. The search is conducted against a protein database derived from the M5nr, which integrates nonredundant sequences from various databases such as GenBank, SEED, IMG, UniProt, KEGG, and eggNOGs.
In the case of reads associated with rRNA sequences, a clustering step is performed at a 97% identity level. The longest sequence from each cluster is chosen as the representative and is used for a BLAT search against the M5rna database. This database integrates sequences from SILVA, Greengenes, and RDP, providing a comprehensive reference for the analysis of ribosomal RNA sequences.
Profile generation
The data feeds several key products, primarily abundance profiles. These profiles summarize and reorganize the information found in the similarity files in a more easily digestible format.
Data loading
Finally, the obtained abundance profiles are loaded into the respective databases.
Detailed steps of the MG-RAST pipeline
MG-RAST utilities
MG-RAST is not only an analysis platform but also a resource for data exploration. It offers tools for visualizing and comparing metagenome profiles across datasets, filtering by composition, quality, function, or sample type, and performing statistical and ecological analyses, all within the web interface.
See also
Metagenomics
References
External links
MG-RAST Web Server
API
MG-RAST manual
M5NR
Molecular biology
Molecular evolution
Metagenomics | MG-RAST | [
"Chemistry",
"Biology"
] | 1,387 | [
"Biochemistry",
"Evolutionary processes",
"Molecular evolution",
"Molecular biology"
] |
40,471,625 | https://en.wikipedia.org/wiki/Classical%20diffusion | Classical diffusion is a key concept in fusion power and other fields where a plasma is confined by a magnetic field within a vessel. It considers collisions between ions in the plasma that causes the particles to move to different paths and eventually leave the confinement volume and strike the sides of the vessel.
The rate of diffusion scales with 1/B², where B is the magnetic field strength, which implies that confinement times can be greatly improved with small increases in field strength. In practice, the rates suggested by classical diffusion have not been found in real-world machines, where a host of previously unknown plasma instabilities caused the particles to leave confinement at rates scaling closer to 1/B than 1/B², as described by Bohm diffusion.
The failure of classical diffusion to predict real-world plasma behavior led to a period in the 1960s known as "the doldrums" where it appeared a practical fusion reactor would be impossible. Over time, the instabilities were found and addressed, especially in the tokamak. This has led to a deeper understanding of the diffusion process, known as neoclassical transport.
Description
Diffusion is a random-walk process that can be quantified by two key parameters: Δx, the step size, and Δt, the time interval when the walker takes a step. Thus, the diffusion coefficient is defined as D ≡ (Δx)²/Δt. Plasma is a gas-like mixture of high-temperature particles, the electrons and ions that would normally be joined to form neutral atoms at lower temperatures. Temperature is a measure of the average velocity of particles, so high temperatures imply high speeds, and thus a plasma will quickly expand at rates that make it difficult to work with unless some form of "confinement" is applied.
At the temperatures involved in nuclear fusion, no material container can hold a plasma. The most common solution to this problem is to use a magnetic field to provide confinement, sometimes known as a "magnetic bottle". When a charged particle is placed in a magnetic field, it will orbit the field lines while continuing to move along that line with whatever initial velocity it had. This produces a helical path through space. The radius of the path is a function of the strength of the magnetic field. Since the axial velocities will have a range of values, often based on the Maxwell-Boltzmann statistics, this means the particles in the plasma will pass by others as they overtake them or are overtaken.
If one considers two such ions traveling along parallel axial paths, they can collide whenever their orbits intersect. In most geometries, this means there is a significant difference in the instantaneous velocities when they collide - one might be going "up" while the other would be going "down" in their helical paths. This causes the collisions to scatter the particles, making them random walks. Eventually, this process will cause any given ion to eventually leave the boundary of the field, and thereby escape "confinement".
In a uniform magnetic field, a particle undergoes random walk across the field lines by the step size of gyroradius ρ≡vth/Ω, where vth denotes the thermal velocity, and Ω≡qB/m, the gyrofrequency. The steps are randomized by the collisions to lose the coherence. Thus, the time step, or the decoherence time, is the inverse of the collisional frequency νc. The rate of diffusion is given by νcρ2, with the rather favorable B−2 scaling law.
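As a rough numerical illustration (a sketch only; the deuterium-like ion parameters and the collision frequency below are arbitrary assumptions, not values taken from the text), the classical coefficient D = νcρ2 can be evaluated for a few field strengths to exhibit the 1/B2 scaling:

```python
import math

# Illustrative (assumed) parameters for a deuterium-like ion
q = 1.602e-19      # ion charge (C)
m = 3.344e-27      # deuteron mass (kg)
k_B = 1.381e-23    # Boltzmann constant (J/K)
T = 1e8            # temperature (K), roughly 10 keV
nu_c = 1e4         # assumed collision frequency (1/s)

v_th = math.sqrt(k_B * T / m)    # thermal velocity

for B in (1.0, 2.0, 4.0):        # magnetic field strength (T)
    omega = q * B / m            # gyrofrequency, Omega = qB/m
    rho = v_th / omega           # gyroradius, rho = v_th / Omega
    D = nu_c * rho**2            # classical diffusion coefficient
    print(f"B = {B:3.1f} T  ->  rho = {rho:.3e} m,  D = {D:.3e} m^2/s")

# Doubling B quarters D, the 1/B**2 scaling described above.
```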
In practice
When the topic of controlled fusion was first being studied, it was believed that the plasmas would follow the classical diffusion rate, and this suggested that useful confinement times would be relatively easy to achieve. However, in 1949 a team studying plasma arcs as a method of isotope separation found that the plasma diffused across the field much faster than the classical method predicted. David Bohm suggested the diffusion scaled with 1/B rather than 1/B2. If this is true, Bohm diffusion would mean that useful confinement times would require impossibly large fields. Initially, Bohm diffusion was dismissed as a side-effect of the particular experimental apparatus being used and the heavy ions within it, which caused turbulence within the plasma that led to faster diffusion. It seemed the larger fusion machines using much lighter atoms would not be subject to this problem.
When the first small-scale fusion machines were being built in the mid-1950s, they appeared to follow the B−2 rule, so there was great confidence that simply scaling the machines to larger sizes with more powerful magnets would meet the requirements for practical fusion. In fact, when such machines, like the British ZETA and the U.S. Model-B stellarator, were built, they demonstrated confinement times much more in line with Bohm diffusion. To examine this, the Model-B2 stellarator was run at a wide variety of field strengths and the resulting diffusion times were measured. This demonstrated a linear relationship, as predicted by Bohm. As more machines were introduced, the problem persisted, and by the 1960s the entire field had fallen into "the doldrums".
Further experiments demonstrated that the problem was not diffusion per se, but a host of previously unknown plasma instabilities caused by the magnetic and electric fields and the motion of the particles. As critical operating conditions were passed, these processes would start and quickly drive the plasma out of confinement. Over time, a number of new designs attacked these instabilities, and by the late 1960s there were several machines that were clearly beating the Bohm rule. Among these was the Soviet tokamak, which quickly became the focus of most research to this day.
As tokamaks took over the research field, it became clear that the original estimates based on the classical formula still did not apply exactly. This was due to the toroidal arrangement of the device; particles on the inside of the ring-shaped reactor see higher magnetic fields than on the outside, simply due to geometry, and this introduced a number of new effects. Consideration of these effects led to the modern concept of neoclassical transport.
See also
Bohm diffusion
Plasma diffusion
References
Diffusion
Plasma phenomena | Classical diffusion | [
"Physics",
"Chemistry"
] | 1,256 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Plasma physics",
"Plasma phenomena"
] |
37,720,636 | https://en.wikipedia.org/wiki/Relativistic%20Lagrangian%20mechanics | In theoretical physics, relativistic Lagrangian mechanics is Lagrangian mechanics applied in the context of special relativity and general relativity.
Introduction
The relativistic Lagrangian can be derived in relativistic mechanics to be of the form:
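A common textbook form, for a single particle of rest mass m0 moving with velocity v in a potential V (the precise arguments of V shown here are an assumption), is

$$
L = -m_0 c^{2}\sqrt{1-\frac{v^{2}}{c^{2}}} \;-\; V(\mathbf{r}, \dot{\mathbf{r}}, t).
$$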
Although, unlike non-relativistic mechanics, the relativistic Lagrangian is not expressed as the difference of kinetic and potential energy, the relativistic Hamiltonian corresponds to the total energy in a similar manner, though without including the rest energy. The form of the Lagrangian also makes the relativistic action functional proportional to the proper time of the path in spacetime.
In covariant form, the Lagrangian is taken to be:
where σ is an affine parameter which parametrizes the spacetime curve.
Lagrangian formulation in special relativity
Lagrangian mechanics can be formulated in special relativity as follows. Consider one particle (N particles are considered later).
Coordinate formulation
If a system is described by a Lagrangian L, the Euler–Lagrange equations
retain their form in special relativity, provided the Lagrangian generates equations of motion consistent with special relativity. Here r = r(t) is the position vector of the particle as measured in some lab frame where Cartesian coordinates are used for simplicity, and
v = dr/dt is the coordinate velocity, the derivative of position r with respect to coordinate time t. (Throughout this article, overdots are with respect to coordinate time, not proper time). It is possible to transform the position coordinates to generalized coordinates exactly as in non-relativistic mechanics, r = r(q, t). Taking the total differential of r obtains the transformation of velocity v to the generalized coordinates, generalized velocities, and coordinate time
remains the same. However, the energy of a moving particle is different from non-relativistic mechanics. It is instructive to look at the total relativistic energy of a free test particle. An observer in the lab frame defines events by coordinates r and coordinate time t, and measures the particle to have coordinate velocity . By contrast, an observer moving with the particle will record a different time, this is the proper time, τ. Expanding in a power series, the first term is the particle's rest energy, plus its non-relativistic kinetic energy, followed by higher order relativistic corrections;
where c is the speed of light in vacuum. The differentials in t and τ are related by the Lorentz factor γ,
where · is the dot product. The relativistic kinetic energy for an uncharged particle of rest mass m0 is
and we may naïvely guess the relativistic Lagrangian for a particle to be this relativistic kinetic energy minus the potential energy. However, even for a free particle for which V = 0, this is wrong. Following the non-relativistic approach, we expect the derivative of this seemingly correct Lagrangian with respect to the velocity to be the relativistic momentum, which it is not.
The definition of a generalized momentum can be retained, and the advantageous connection between cyclic coordinates and conserved quantities will continue to apply. The momenta can be used to "reverse-engineer" the Lagrangian. For the case of the free massive particle, in Cartesian coordinates, the x component of relativistic momentum is
and similarly for the y and z components. Integrating this equation with respect to dx/dt gives
where X is an arbitrary function of dy/dt and dz/dt from the integration. Integrating py and pz obtains similarly
where Y and Z are arbitrary functions of their indicated variables. Since the functions X, Y, Z are arbitrary, without loss of generality we can conclude the common solution to these integrals, a possible Lagrangian that will correctly generate all the components of relativistic momentum, is
where .
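As a quick symbolic check (a sketch in one spatial dimension, assuming the standard form L = −m0c2√(1 − v2/c2) just obtained), differentiating with respect to the velocity does reproduce the relativistic momentum γm0v:

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)

L_free = -m0 * c**2 * sp.sqrt(1 - v**2 / c**2)   # free-particle Lagrangian
p = sp.diff(L_free, v)                            # generalized momentum dL/dv

gamma = 1 / sp.sqrt(1 - v**2 / c**2)
print(sp.simplify(p - gamma * m0 * v))            # prints 0: p is the relativistic momentum
```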
Alternatively, since we wish to build a Lagrangian out of relativistically invariant quantities, take the action as proportional to the integral of the Lorentz invariant line element in spacetime, the length of the particle's world line between proper times τ1 and τ2,
where ε is a constant to be found, and after converting the proper time of the particle to the coordinate time as measured in the lab frame, the integrand is the Lagrangian by definition. The momentum must be the relativistic momentum,
which requires ε = −m0c2, in agreement with the previously obtained Lagrangian.
Either way, the position vector r is absent from the Lagrangian and therefore cyclic, so the Euler–Lagrange equations are consistent with the constancy of relativistic momentum,
which must be the case for a free particle. Also, expanding the relativistic free particle Lagrangian in a power series to first order in (v/c)2,
in the non-relativistic limit when v is small, the higher order terms not shown are negligible, and the Lagrangian is the non-relativistic kinetic energy as it should be. The remaining term is the negative of the particle's rest energy, a constant term which can be ignored in the Lagrangian.
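The same expansion can be checked symbolically (a sketch using the one-dimensional free-particle form above):

```python
import sympy as sp

v, c, m0 = sp.symbols('v c m0', positive=True)

L_free = -m0 * c**2 * sp.sqrt(1 - v**2 / c**2)

# Expand in powers of v about v = 0, keeping terms up to v**2:
print(sp.series(L_free, v, 0, 3))   # -c**2*m0 + m0*v**2/2 + O(v**3)
```

The constant term is the negative of the rest energy, and the quadratic term is the familiar non-relativistic kinetic energy.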
For the case of an interacting particle subject to a potential V, which may be non-conservative, it is possible for a number of interesting cases to simply subtract this potential from the free particle Lagrangian,
and the Euler–Lagrange equations lead to the relativistic version of Newton's second law. The derivative of relativistic momentum with respect to the time coordinate is equal to the force acting on the particle:
assuming the potential V can generate the corresponding force F in this way. If the potential cannot obtain the force as shown, then the Lagrangian would need modification to obtain the correct equations of motion.
Although this has been shown using Cartesian coordinates, it follows from the invariance of the Euler–Lagrange equations that it is also satisfied in any arbitrary coordinate system, as it physically corresponds to the minimization of the action being independent of the coordinate system used to describe it. In a similar manner, several properties in Lagrangian mechanics are preserved whenever they are also independent of the specific form of the Lagrangian or the laws of motion governing the particles. For example, it is also true that if the Lagrangian is explicitly independent of time and the potential V(r) is independent of velocities, then the total relativistic energy
is conserved, although the identification is less obvious since the first term is the relativistic energy of the particle which includes the rest mass of the particle, not merely the relativistic kinetic energy. Also, the argument for homogeneous functions does not apply to relativistic Lagrangians.
The extension to N particles is straightforward, the relativistic Lagrangian is just a sum of the "free particle" terms, minus the potential energy of their interaction;
where all the positions and velocities are measured in the same lab frame, including the time.
The advantage of this coordinate formulation is that it can be applied to a variety of systems, including multiparticle systems. The disadvantage is that some lab frame has been singled out as a preferred frame, and none of the equations are manifestly covariant (in other words, they do not take the same form in all frames of reference). For an observer moving relative to the lab frame, everything must be recalculated; the position r, the momentum p, total energy E, potential energy, etc. In particular, if this other observer moves with constant relative velocity then Lorentz transformations must be used. However, the action will remain the same since it is Lorentz invariant by construction.
A seemingly different but completely equivalent form of the Lagrangian for a free massive particle, which will readily extend to general relativity as shown below, can be obtained by inserting
into the Lorentz invariant action so that
where is retained for simplicity. Although the line element and action are Lorentz invariant, the Lagrangian is not, because it has explicit dependence on the lab coordinate time. Still, the equations of motion follow from Hamilton's principle
Since the action is proportional to the length of the particle's worldline (in other words its trajectory in spacetime), this route illustrates that finding the stationary action is asking to find the trajectory of shortest or largest length in spacetime. Correspondingly, the equations of motion of the particle are akin to the equations describing the trajectories of shortest or largest length in spacetime, geodesics.
For the case of an interacting particle in a potential V, the Lagrangian is still
which can also extend to many particles as shown above, each particle has its own set of position coordinates to define its position.
Covariant formulation
In the covariant formulation, time is placed on equal footing with space, so the coordinate time as measured in some frame is part of the configuration space alongside the spatial coordinates (and other generalized coordinates). For a particle, either massless or massive, the Lorentz invariant action is (abusing notation)
where lower and upper indices are used according to covariance and contravariance of vectors, σ is an affine parameter, and is the four-velocity of the particle.
For massive particles, σ can be the arc length s, or proper time τ, along the particle's world line,
For massless particles, it cannot because the proper time of a massless particle is always zero;
For a free particle, the Lagrangian has the form L = (1/2)gμν(dxμ/dσ)(dxν/dσ),
where the irrelevant factor of 1/2 is allowed to be scaled away by the scaling property of Lagrangians. No inclusion of mass is necessary since this also applies to massless particles. The Euler–Lagrange equations in the spacetime coordinates are
which is the geodesic equation for affinely parameterized geodesics in spacetime. In other words, the free particle follows geodesics. Geodesics for massless particles are called "null geodesics", since they lie in a "light cone" or "null cone" of spacetime (the null comes about because their inner product via the metric is equal to 0), massive particles follow "timelike geodesics", and hypothetical particles that travel faster than light known as tachyons follow "spacelike geodesics".
This manifestly covariant formulation does not extend to an N-particle system, since then the affine parameter of any one particle cannot be defined as a common parameter for all the other particles.
Examples in special relativity
Special relativistic 1d free particle
For a 1d relativistic free particle, the Lagrangian is
This results in the following equation of motion:
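For reference, the standard one-dimensional free-particle expressions consistent with this description are

$$
L = -m_0 c^{2}\sqrt{1-\frac{\dot{x}^{2}}{c^{2}}},
\qquad
\frac{d}{dt}\!\left(\frac{m_0\,\dot{x}}{\sqrt{1-\dot{x}^{2}/c^{2}}}\right) = 0,
$$

so the relativistic momentum, and hence the velocity, is constant.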
Special relativistic 1d harmonic oscillator
For a 1d relativistic simple harmonic oscillator, the Lagrangian is
where k is the spring constant.
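A standard form consistent with this description (assuming the usual velocity-independent potential kx2/2) is

$$
L = -m_0 c^{2}\sqrt{1-\frac{\dot{x}^{2}}{c^{2}}} \;-\; \tfrac{1}{2}kx^{2}.
$$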
Special relativistic constant force
For a particle under a constant force, the Lagrangian is
where g is the force per unit mass.
This results in the following equation of motion:
Which, given initial conditions of
results in the position of the particle as a function of time being
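For reference, with the standard choice of Lagrangian and the assumption that the particle starts from rest at the origin (the initial conditions here are an assumption), the textbook hyperbolic-motion results are

$$
L = -m_0 c^{2}\sqrt{1-\frac{\dot{x}^{2}}{c^{2}}} + m_0\,g\,x,
\qquad
\frac{d}{dt}\!\left(\frac{\dot{x}}{\sqrt{1-\dot{x}^{2}/c^{2}}}\right) = g,
\qquad
x(t) = \frac{c^{2}}{g}\left(\sqrt{1+\frac{g^{2}t^{2}}{c^{2}}}-1\right),
$$

which reduces to the Newtonian result x(t) ≈ gt2/2 when gt ≪ c.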
Derivation of solution
From the Euler–Lagrange equation we have
Integrating with respect to time:
Where is an undetermined constant.
Solving this equation for :
Then, using ,
This implies that
Thus
Note that for a large value of , we have and see that .
Then, given that
we have
Picking , we have
Then note that for some undetermined constant so that
Using :
Recalling that :
Since , we have and come to
Therefore
Plugging in the definition of and using brings the solution to
The Newtonian limit of this solution can be obtained by making the following approximations, which are equivalent to stating that :
This simplifies the solution to
Then using the approximation that :
Which simplifies to
This is expected solution to the equation of motion to the Newtonian particle subject to a constant force:
Special relativistic test particle in an electromagnetic field
In special relativity, the Lagrangian of a massive charged test particle in an electromagnetic field modifies to
The Lagrangian equations in r lead to the Lorentz force law, in terms of the relativistic momentum
In the language of four-vectors and tensor index notation, the Lagrangian takes the form
where uμ = dxμ/dτ is the four-velocity of the test particle, and Aμ the electromagnetic four-potential.
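For reference, standard expressions consistent with this description are, in three-vector and covariant form respectively (the overall signs depend on the metric signature and coupling conventions, which are assumptions here),

$$
L = -m_0 c^{2}\sqrt{1-\frac{v^{2}}{c^{2}}} + q\,\mathbf{A}\cdot\mathbf{v} - q\,\phi,
\qquad
L = \tfrac{1}{2}\,m_0\,u^{\mu}u_{\mu} + q\,u^{\mu}A_{\mu}(x),
$$

where φ and A are the scalar and vector potentials and q is the particle's charge.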
The Euler–Lagrange equations are (notice the total derivative with respect to proper time instead of coordinate time)
obtains
Under the total derivative with respect to proper time, the first term is the relativistic momentum, the second term is
then rearranging, and using the definition of the antisymmetric electromagnetic tensor, gives the covariant form of the Lorentz force law in the more familiar form,
Lagrangian formulation in general relativity
The Lagrangian is that of a single particle plus an interaction term LI
Varying this with respect to the position of the particle xα as a function of time t gives
This gives the equation of motion
where
is the non-gravitational force on the particle. (For m to be independent of time, we must have .)
Rearranging gets the force equation
where Γ is the Christoffel symbol, which describes the gravitational field.
If we let
be the (kinetic) linear momentum for a particle with mass, then
and
hold even for a massless particle.
Examples in general relativity
General relativistic test particle in an electromagnetic field
In general relativity, the first term generalizes (includes) both the classical kinetic energy and the interaction with the gravitational field. For a charged particle in an electromagnetic field, the Lagrangian is given by
If the four spacetime coordinates xμ are given in arbitrary units (i.e. unitless), then gμν is the rank 2 symmetric metric tensor, which is also the gravitational potential. Also, Aμ is the electromagnetic 4-vector potential.
There exists an equivalent formulation of the relativistic Lagrangian, which has two advantages:
it allows for a generalization to massless particles and tachyons;
it is based on an energy functional instead of a length functional, such that it does not contain a square root.
In this alternative formulation, the Lagrangian is given by
,
where is an arbitrary affine parameter and is an auxiliary parameter that can be viewed as an einbein field along the worldline. In the original Lagrangian with the square root the energy-momentum relation appears as a primary constraint that is also a first class constraint. In this reformulation this is no longer the case. Instead, the energy-momentum relation appears as the equation of motion for the auxiliary field . Therefore, the constraint is now a secondary constraint that is still a first class constraint, reflecting the invariance of the action under reparameterization of the affine parameter . After the equation of motion has been derived, one must gauge fix the auxiliary field . The standard gauge choice is as follows:
If , one fixes . This choice automatically fixes , i.e. the affine parameter is fixed to be the proper time.
If , one fixes . This choice automatically fixes , i.e. the affine parameter is fixed to be the proper length.
If , there is no choice that fixes the affine parameter to a physical parameter. Consequently, there is some freedom in fixing the auxiliary field. The two common choices are:
Fix . In this case, does not carry a dependence on the affine parameter , but the affine parameter is measured in units of time per unit of mass, i.e. .
Fix , where is the energy of the particle. In this case, the affine parameter is measured in units of time, i.e. , but retains a dependence on the affine parameter .
See also
Relativistic mechanics
Fundamental lemma of the calculus of variations
Canonical coordinates
Functional derivative
Generalized coordinates
Hamiltonian mechanics
Hamiltonian optics
Lagrangian analysis (applications of Lagrangian mechanics)
Lagrangian point
Lagrangian system
Non-autonomous mechanics
Restricted three-body problem
Plateau's problem
Footnotes
Citations
References
Dynamical systems
Lagrangian mechanics
General relativity | Relativistic Lagrangian mechanics | [
"Physics",
"Mathematics"
] | 3,534 | [
"Lagrangian mechanics",
"Classical mechanics",
"General relativity",
"Special relativity",
"Mechanics",
"Theory of relativity",
"Dynamical systems"
] |
29,696,524 | https://en.wikipedia.org/wiki/Propositional%20proof%20system | In propositional calculus and proof complexity a propositional proof system (pps), also called a Cook–Reckhow propositional proof system, is a system for proving classical propositional tautologies.
Mathematical definition
Formally a pps is a polynomial-time function P whose range is the set of all propositional tautologies (denoted TAUT). If A is a formula, then any x such that P(x) = A is called a P-proof of A. The condition defining pps can be broken up as follows:
Completeness: every propositional tautology has a P-proof,
Soundness: if a propositional formula has a P-proof then it is a tautology,
Efficiency: P runs in polynomial time.
In general, a proof system for a language L is a polynomial-time function whose range is L. Thus, a propositional proof system is a proof system for TAUT.
Sometimes the following alternative definition is considered: a pps is given as a proof-verification algorithm P(A,x) with two inputs. If P accepts the pair (A,x) we say that x is a P-proof of A. P is required to run in polynomial time, and moreover, it must hold that A has a P-proof if and only if it is a tautology.
If P1 is a pps according to the first definition, then P2 defined by P2(A,x) if and only if P1(x) = A is a pps according to the second definition. Conversely, if P2 is a pps according to the second definition, then P1 defined by
(P1 takes pairs as input) is a pps according to the first definition, where is a fixed tautology.
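A minimal sketch of this correspondence (formulas are represented as plain strings and the fixed tautology is chosen arbitrarily; both are illustration-only assumptions):

```python
from typing import Callable, Tuple

Formula = str  # formulas as strings, purely for illustration

# A fixed tautology, used as the fallback output in the conversion below.
FIXED_TAUTOLOGY: Formula = "p or not p"

def pps1_from_pps2(P2: Callable[[Formula, str], bool]) -> Callable[[Tuple[Formula, str]], Formula]:
    """Given a proof verifier P2(A, x), build a proof system in the
    'polynomial-time function onto TAUT' sense: it maps a pair (A, x)
    to A when x is a P2-proof of A, and to a fixed tautology otherwise."""
    def P1(pair: Tuple[Formula, str]) -> Formula:
        A, x = pair
        return A if P2(A, x) else FIXED_TAUTOLOGY
    return P1

def pps2_from_pps1(P1: Callable[[str], Formula]) -> Callable[[Formula, str], bool]:
    """Given a function P1 whose range is TAUT, build the corresponding
    verifier: x is a P2-proof of A exactly when P1(x) = A."""
    def P2(A: Formula, x: str) -> bool:
        return P1(x) == A
    return P2
```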
Algorithmic interpretation
One can view the second definition as a non-deterministic algorithm for solving membership in TAUT. This means that proving a superpolynomial proof size lower-bound for pps would rule out existence of a certain class of polynomial-time algorithms based on that pps.
As an example, exponential proof size lower-bounds in resolution for the pigeon hole principle imply that any algorithm based on resolution cannot decide TAUT or SAT efficiently and will fail on pigeon hole principle tautologies. This is significant because the class of algorithms based on resolution includes most of current propositional proof search algorithms and modern industrial SAT solvers.
History
Historically, Frege's propositional calculus was the first propositional proof system. The general definition of a propositional proof system is due to Stephen Cook and Robert A. Reckhow (1979).
Relation with computational complexity theory
Propositional proof system can be compared using the notion of p-simulation. A propositional proof system P p-simulates Q (written as P ≤pQ) when there is a polynomial-time function F such that P(F(x)) = Q(x) for every x. That is, given a Q-proof x, we can find in polynomial time a P-proof of the same tautology. If P ≤pQ and Q ≤pP, the proof systems P and Q are p-equivalent. There is also a weaker notion of simulation: a pps P simulates or weakly p-simulates a pps Q if there is a polynomial p such that for every Q-proof x of a tautology A, there is a P-proof y of A such that the length of y, |y| is at most p(|x|). (Some authors use the words p-simulation and simulation interchangeably for either of these two concepts, usually the latter.)
A propositional proof system is called p-optimal if it p-simulates all other propositional proof systems, and it is optimal if it simulates all other pps. A propositional proof system P is polynomially bounded (also called super) if every tautology has a short (i.e., polynomial-size) P-proof.
If P is polynomially bounded and Q simulates P, then Q is also polynomially bounded.
The set of propositional tautologies, TAUT, is a coNP-complete set. A propositional proof system is a certificate-verifier for membership in TAUT. Existence of a polynomially bounded propositional proof system means that there is a verifier with polynomial-size certificates, i.e., TAUT is in NP. In fact these two statements are equivalent, i.e., there is a polynomially bounded propositional proof system if and only if the complexity classes NP and coNP are equal.
Some equivalence classes of proof systems under simulation or p-simulation are closely related to theories of bounded arithmetic; they are essentially "non-uniform" versions of the bounded arithmetic, in the same way that circuit classes are non-uniform versions of resource-based complexity classes. "Extended Frege" systems (allowing the introduction of new variables by definition) correspond in this way to polynomially-bounded systems, for example. Where the bounded arithmetic in turn corresponds to a circuit-based complexity class, there are often similarities between the theory of proof systems and the theory of the circuit families, such as matching lower bound results and separations. For example, just as counting cannot be done by an AC0 circuit family of subexponential size, many tautologies relating to the pigeonhole principle cannot have subexponential proofs in a proof system based on bounded-depth formulas (and in particular, not by resolution-based systems, since they rely solely on depth 1 formulas).
Examples of propositional proof systems
Some examples of propositional proof systems studied are:
Propositional Resolution and various restrictions and extensions of it like DPLL algorithm
Natural deduction
Sequent calculus
Frege system
Extended Frege system
Polynomial calculus
Nullstellensatz system
Cutting-plane method
Semantic tableau
References
Further reading
Samuel Buss (1998), "An introduction to proof theory", in: Handbook of Proof Theory (ed. S.R.Buss), Elsevier (1998).
P. Pudlák (1998), "The lengths of proofs", in: Handbook of Proof Theory (ed. S.R.Buss), Elsevier, (1998).
P. Beame and T. Pitassi (1998). Propositional proof complexity: past, present and future. Technical Report TR98-067, Electronic Colloquium on Computational Complexity.
Nathan Segerlind (2007) "The Complexity of Propositional Proofs", Bulletin of Symbolic Logic 13(4): 417–481
J. Krajíček (1995), Bounded Arithmetic, Propositional Logic, and Complexity Theory, Cambridge University Press.
J. Krajíček, Proof complexity, in: Proc. 4th European Congress of Mathematics (ed. A. Laptev), EMS, Zurich, pp. 221–231, (2005).
Alexander A. Razborov, Propositional proof complexity, in: Proc. 8th European Congress of Mathematics, EMS, Portorož, pp. 439–464, (2023).
J. Krajíček, Propositional proof complexity I. and Proof complexity and arithmetic.
Stephen Cook and Phuong Nguyen, Logical Foundations of Proof Complexity, Cambridge University Press, 2010 (draft from 2008)
Robert Reckhow, On the Lengths of Proofs in the Propositional Calculus, PhD Thesis, 1975.
External links
Proof Complexity
Computational complexity theory
Logic in computer science
Automated theorem proving
Propositional calculus
Systems of formal logic | Propositional proof system | [
"Mathematics"
] | 1,551 | [
"Logic in computer science",
"Mathematical logic",
"Computational mathematics",
"Automated theorem proving"
] |
29,708,565 | https://en.wikipedia.org/wiki/Atomicity%20%28chemistry%29 | Atomicity is the total number of atoms present in a molecule of an element. For example, each molecule of oxygen (O2) is composed of two oxygen atoms. Therefore, the atomicity of oxygen is 2.
In older contexts, atomicity is sometimes equivalent to valency. Some authors also use the term to refer to the maximum number of valencies observed for an element.
Classifications
Based on atomicity, molecules can be classified as:
Monoatomic (composed of one atom). Examples include He (helium), Ne (neon), Ar (argon), and Kr (krypton). All noble gases are monoatomic.
Diatomic (composed of two atoms). Examples include H2 (hydrogen), N2 (nitrogen), O2 (oxygen), F2 (fluorine), and Cl2 (chlorine). Halogens are usually diatomic.
Triatomic (composed of three atoms). Examples include O3 (ozone).
Tetratomic (composed of four atoms), such as P4 (white phosphorus). Molecules of higher atomicity include pentatomic (five atoms), hexatomic (six atoms), heptatomic (seven atoms), and octatomic (eight atoms, such as S8) species.
Atomicity may vary in different allotropes of the same element.
The exact atomicity of metals, as well as some other elements such as carbon, cannot be determined because they consist of a large and indefinite number of atoms bonded together. They are typically designated as having an atomicity of 1.
The atomicity of a homonuclear molecule can be derived by dividing the molecular weight by the atomic weight. For example, the molecular weight of oxygen (O2) is 31.999, while its atomic weight is 15.999; therefore, its atomicity is 2 (31.999/15.999 ≈ 2).
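A minimal sketch of this calculation:

```python
def atomicity(molecular_weight: float, atomic_weight: float) -> int:
    """Atomicity of a homonuclear molecule: molecular weight divided by
    atomic weight, rounded to the nearest whole number of atoms."""
    return round(molecular_weight / atomic_weight)

print(atomicity(31.999, 15.999))  # oxygen, O2 -> 2
print(atomicity(47.997, 15.999))  # ozone, O3 -> 3
```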
Examples
Atomicity does not follow any general trend across the elements; it depends on the type of bonding an element's atoms make with one another to form the molecule of that particular element. The most common values of atomicity for the first 30 elements in the periodic table are as follows:
References
Molecules
Stoichiometry
Physical chemistry
Inorganic chemistry | Atomicity (chemistry) | [
"Physics",
"Chemistry"
] | 448 | [
"Chemical reaction engineering",
"Stoichiometry",
"Applied and interdisciplinary physics",
"Molecular physics",
"Molecules",
"Physical objects",
"nan",
"Physical chemistry",
"Atoms",
"Matter"
] |
23,534,147 | https://en.wikipedia.org/wiki/PSI%20Protein%20Classifier | PSI Protein Classifier is a program generalizing the results of both successive and independent iterations of the PSI-BLAST program. PSI Protein Classifier determines belonging of the found by PSI-BLAST proteins to the known families. The unclassified proteins are grouped according to similarity. PSI Protein Classifier allows to measure evolutionary distances between families of homologous proteins by the number of PSI-BLAST iterations.
Sources
D.G. Naumoff and M. Carreras. PSI Protein Classifier: a new program automating PSI-BLAST search results. Molecular Biology (Engl Transl), 2009, 43(4):652-664. PDF
External links
PSI Protein Classifier
Bioinformatics algorithms
Phylogenetics software
Laboratory software | PSI Protein Classifier | [
"Chemistry",
"Biology"
] | 153 | [
"Bioinformatics stubs",
"Bioinformatics algorithms",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics"
] |
23,534,720 | https://en.wikipedia.org/wiki/International%20Academy%20of%20Mathematical%20Chemistry | The International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik, Croatia, in 2005 by Milan Randić. It is an organization for chemistry and mathematics avocation; its predecessors have been around since the 1930s. There are 88 Academy members () from around the world (27 countries), comprising six scientists awarded the Nobel Prize.
Governing body of the IAMC
2005–2007:
President: Alexandru Balaban
Vice-president: Milan Randić
Secretary: Ante Graovac
Treasurer: Dejan Plavšić
2008–2011:
President: Roberto Todeschini
Vice-president: Tomaž Pisanski
Secretary: Ante Graovac
Treasurer: Dražen Vikić-Topić
Member: Ivan Gutman
Member: Nikolai Zefirov
since 2011:
President: Roberto Todeschini
Vice-president: Edward C. Kirby
Vice-president: Sandi Klavžar
Secretary: Ante Graovac
Treasurer: Dražen Vikić-Topić
Member: Ivan Gutman
Member: Nikolai Zefirov
since 2019:
President:
Vice-president: Douglas J. Klein
Vice-president: Xueliang Li
Vice-president: Sandi Klavžar
Vice-president: Tomaž Pisanski
Secretary: Boris Furtula
Treasurer:
Member: Ivan Gutman
IAMC yearly meetings
2005 – Dubrovnik, Croatia
2006 – Dubrovnik, Croatia
2007 – Dubrovnik, Croatia
2008 – Verbania, Italy
2009 – Dubrovnik, Croatia
2010 – Dubrovnik, Croatia
2011 – Bled, Slovenia
2012 – Verona, Italy
2014 – Split, Croatia
2015 – Kranjska Gora, Slovenia
2016 – Tianjin, China
2017 – Cluj, Romania
2019 – Bled, Slovenia
2023 – Kranjska Gora, Slovenia
See also
Mathematical chemistry
References
Mathematical chemistry
International academies
Scientific organizations established in 2005
2005 establishments in Croatia | International Academy of Mathematical Chemistry | [
"Chemistry",
"Mathematics"
] | 375 | [
"Drug discovery",
"Applied mathematics",
"Molecular modelling",
"Mathematical chemistry",
"Theoretical chemistry",
"Chemistry organization stubs"
] |
23,535,218 | https://en.wikipedia.org/wiki/Industrial%20engineering | Industrial engineering is an engineering profession that is concerned with the optimization of complex processes, systems, or organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineering is central to manufacturing operations.
Industrial engineers use specialized knowledge and skills in the mathematical, physical, and social sciences, together with engineering analysis and design principles and methods, to specify, predict, and evaluate the results obtained from systems and processes. Several industrial engineering principles are followed in the manufacturing industry to ensure the effective flow of systems, processes, and operations. These include:
Lean Manufacturing
Six Sigma
Information Systems
Process Capability
Define, Measure, Analyze, Improve and Control (DMAIC).
These principles allow the creation of new systems, processes or situations for the useful coordination of labor, materials and machines and also improve the quality and productivity of systems, physical or social. Depending on the subspecialties involved, industrial engineering may also overlap with, operations research, systems engineering, manufacturing engineering, production engineering, supply chain engineering, management science, engineering management, financial engineering, ergonomics or human factors engineering, safety engineering, logistics engineering, quality engineering or other related capabilities or fields.
History
Origins
Industrial engineering
There is a general consensus among historians that the roots of the industrial engineering profession date back to the Industrial Revolution. The technologies that helped mechanize traditional manual operations in the textile industry, including the flying shuttle, the spinning jenny, and, perhaps most importantly, the steam engine, generated economies of scale that made mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations. It has also been suggested that perhaps Leonardo da Vinci was the first industrial engineer, because there is evidence that he applied science to the analysis of human work by examining the rate at which a man could shovel dirt around the year 1500. Others also state that the industrial engineering profession grew from Charles Babbage’s study of factory operations and specifically his work on the manufacture of straight pins in 1832. However, it has been generally argued that these early efforts, while valuable, were merely observational and did not attempt to engineer the jobs studied or increase overall output.
Specialization of labour
Adam Smith's concepts of Division of Labour and the "Invisible Hand" of capitalism introduced in his treatise The Wealth of Nations motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the application of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen.
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book On the Economy of Machinery and Manufacturers which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book includes subjects such as the time required to perform a specific task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks.
Interchangeable parts
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances to enable their use in any finished product. The result was a significant reduction in the need for skill from specialized workers, which eventually led to the industrial environment to be studied later.
Pioneers
Frederick Taylor (1856–1915) is generally credited as being the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology and earned several patents from his inventions. His books, Shop Management and The Principles of Scientific Management, which were published in the early 1900s, were the beginning of industrial engineering. Improvements in work efficiency under his methods was based on improving work methods, developing of work standards, and reduction in time required to carry out the work. With an abiding faith in the scientific method, Taylor did many experiments in machine shop work on machines as well as men. Taylor developed "time study" to measure time taken for various elements of a task and then used the study observations to reduce the time further. Time study was done for the improved method once again to provide time standards which are accurate for planning manual tasks and also for providing incentives.
The husband-and-wife team of Frank Gilbreth (1868–1924) and Lillian Gilbreth (1878–1972) was the other cornerstone of the industrial engineering movement whose work is housed at Purdue University School of Industrial Engineering. They categorized the elements of human motion into 18 basic elements called therbligs. This development permitted analysts to design jobs without knowledge of the time required to do a job. These developments were the beginning of a much broader field known as human factors or ergonomics.
In 1908, the first course on industrial engineering was offered as an elective at Pennsylvania State University, which became a separate program in 1909 through the efforts of Hugo Diemer. The first doctoral degree in industrial engineering was awarded in 1933 by Cornell University.
In 1912, Henry Laurence Gantt developed the Gantt chart, which outlines the actions of an organization along with their relationships. The chart was later developed into the form familiar to us today by Wallace Clark.
With the development of assembly lines, the factory of Henry Ford (1913) represented a significant leap forward in the field. Ford reduced the assembly time of a car from more than 700 hours to 1.5 hours. In addition, he was a pioneer of welfare capitalism and a champion of providing financial incentives for employees to increase productivity.
In 1927, the then Technische Hochschule Berlin was the first German university to introduce the degree. The course of studies developed by Willi Prion was then still called Business and Technology and was intended to provide descendants of industrialists with an adequate education.
Total quality management (TQM), a comprehensive quality management system developed in the 1940s, gained momentum after World War II and was part of the recovery of Japan after the war.
The American Institute of Industrial Engineering was formed in 1948. The early work by F. W. Taylor and the Gilbreths was documented in papers presented to the American Society of Mechanical Engineers as interest grew from merely improving machine performance to the performance of the overall manufacturing process, most notably starting with the presentation by Henry R. Towne (1844–1924) of his paper The Engineer as An Economist (1886).
Modern practice
From 1960 to 1975, with the development of decision-support systems for supply, such as material requirements planning (MRP), emphasis shifted to the timing issues (inventory, production, compounding, transportation, etc.) of industrial organization. The Israeli scientist Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in South Africa and worldwide in 1976.
In the 1970s, with the penetration of Japanese management theories such as Kaizen and Kanban, Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the west realized the great impact of Kaizen and started implementing their own continuous improvement programs. W. Edwards Deming made significant contributions in the minimization of variance starting in the 1950s and continuing to the end of his life.
In the 1990s, following the global industry globalization process, the emphasis was on supply chain management and customer-oriented business process design. The theory of constraints, developed by Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field.
Comparison to other engineering disciplines
Engineering is traditionally decompositional. To understand the whole of something, it is first broken down into its parts. One masters the parts, then puts them back together to create a better understanding of how to master the whole. The approach of industrial and systems engineering (ISE) is opposite; any one part cannot be understood without the context of the whole system. Changes in one part of the system affect the entire system, and the role of a single part is to better serve the whole system.
Also, industrial engineering considers the human factor and its relation to the technical aspect of the situation and all of the other factors that influence the entire situation, while other engineering disciplines focus on the design of inanimate objects.
"Industrial Engineers integrate combinations of people, information, materials, and equipment that produce innovative and efficient organizations. In addition to manufacturing, Industrial Engineers work and consult in every industry, including hospitals, communications, e-commerce, entertainment, government, finance, food, pharmaceuticals, semiconductors, sports, insurance, sales, accounting, banking, travel, and transportation."
"Industrial Engineering is the branch of Engineering most closely related to human resources in that we apply social skills to work with all types of employees, from engineers to salespeople to top management. One of the main focuses of an Industrial Engineer is to improve the working environments of people – not to change the worker, but to change the workplace."
"All engineers, including Industrial Engineers, take mathematics through calculus and differential equations. Industrial Engineering is different in that it is based on discrete variable math, whereas all other engineering is based on continuous variable math. We emphasize the use of linear algebra and difference equations, as opposed to the use of differential equations which are so prevalent in other engineering disciplines. This emphasis becomes evident in optimization of production systems in which we are sequencing orders, scheduling batches, determining the number of materials handling units, arranging factory layouts, finding sequences of motions, etc. As, Industrial Engineers, we deal almost exclusively with systems of discrete components."
Etymology
While originally applied to manufacturing, the use of industrial in industrial engineering can be somewhat misleading, since it has grown to encompass any methodical or quantitative approach to optimizing how a process, system, or organization operates. In fact, the industrial in industrial engineering means the industry in its broadest sense. People have changed the term industrial to broader terms such as industrial and manufacturing engineering, industrial and systems engineering, industrial engineering and operations research, industrial engineering and management.
Sub-disciplines
Industrial engineering has many sub-disciplines, the most common of which are listed below. Although there are industrial engineers who focus exclusively on one of these sub-disciplines, many deal with a combination of them, such as supply chain and logistics, or facilities and energy management.
Methods engineering
Facilities engineering & energy management
Financial engineering
Energy engineering
Human factors & safety engineering
Information systems engineering & management
Manufacturing engineering
Operations engineering & management
Operations research & optimization
Policy planning
Production engineering
Quality & reliability engineering
Supply chain management & logistics
Systems engineering & analysis
Systems simulation
Related disciplines
Organization development & change management
Behavioral economics
Education
Industrial engineers study the interaction of human beings with machines, materials, information, procedures and environments in such developments and in designing a technological system.
Industrial engineering degrees accredited within any member country of the Washington Accord enjoy equal accreditation within all other signatory countries, thus allowing engineers from one country to practice engineering professionally in any other.
Universities offer degrees at the bachelor, masters, and doctoral level.
Undergraduate curriculum
In the United States, the undergraduate degree earned is either a bachelor of science (B.S.) or a bachelor of science and engineering (B.S.E.) in industrial engineering (IE). In South Africa, the undergraduate degree is a bachelor of engineering (BEng). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE or ISyE).
The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e., calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems.
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
Graduate curriculum
The usual graduate degree earned is the master of science (MS), master of science and engineering (MSE) or master of engineering (MEng) in industrial engineering or various alternative related concentration titles.
Typical MS curricula may cover:
Manufacturing Engineering
Analytics and machine learning
Computer-aided manufacturing
Engineering economics
Financial engineering
Human factors engineering and ergonomics (safety engineering)
Lean Six Sigma
Management sciences
Materials management
Operations management
Operations research and optimization techniques
Predetermined motion time system and computer use for IE
Product development
Production planning and control
Productivity improvement
Project management
Reliability engineering and life testing
Robotics
Statistical process control or quality control
Supply chain management and logistics
System dynamics and policy planning
Systems simulation and stochastic processes
Time and motion study
Facilities design and work-space design
Quality engineering
System analysis and techniques
Differences in teaching
While industrial engineering as a formal degree has been around for years, consensus on what topics should be taught and studied differs across countries. For example, Turkey focuses on a very technical degree while Denmark, Finland and the United Kingdom have a management focus degree, thus making it less technical. The United States, meanwhile, focuses on case studies, group problem solving and maintains a balance between the technical and non-technical side.
Practicing engineers
Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines and other manufacturing paradigms. And now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management), and shortening lines (or queues) at a bank, hospital, or a theme park.
Modern industrial engineers typically use predetermined motion time systems, computer simulation (especially discrete event simulation), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory, and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work owing to the strong relatedness of these disciplines with the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory, linear algebra, and statistics, as well as having coding skills).
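As a small illustration of the queueing-theory side of this toolkit (a sketch of the textbook M/M/1 formulas; the arrival and service rates below are arbitrary assumptions):

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state measures for an M/M/1 queue (single server, Poisson
    arrivals, exponential service). Requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate                    # server utilization
    avg_in_system = rho / (1 - rho)                      # mean number of customers in system
    avg_time = 1 / (service_rate - arrival_rate)         # mean time in system
    avg_wait = rho / (service_rate - arrival_rate)       # mean wait in queue
    return {"utilization": rho, "avg_in_system": avg_in_system,
            "avg_time_in_system": avg_time, "avg_wait_in_queue": avg_wait}

# Example: 10 customers per hour arrive at a teller who can serve 12 per hour.
print(mm1_metrics(arrival_rate=10, service_rate=12))
```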
See also
International Conference on Mechanical Industrial & Energy Engineering
Related topics
Associations
Washington Accord
Notes
Further reading
Badiru, A. (Ed.) (2005). Handbook of industrial and systems engineering. CRC Press. .
B. S. Blanchard and Fabrycky, W. (2005). Systems Engineering and Analysis (4th Edition). Prentice-Hall. .
Salvendy, G. (Ed.) (2001). Handbook of industrial engineering: Technology and operations management. Wiley-Interscience. .
Turner, W. et al. (1992). Introduction to industrial and systems engineering (Third edition). Prentice Hall. .
Eliyahu M. Goldratt, Jeff Cox (1984). The Goal North River Press; 2nd Rev edition (1992). ; 20th Anniversary edition (2004)
Miller, Doug, Towards Sustainable Labour Costing in UK Fashion Retail (February 5, 2013).
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
Systems Engineering Body of Knowledge (SEBoK)
Traditional Engineering
Master of Engineering Administration (MEA)
Kambhampati, Venkata Satya Surya Narayana Rao (2017). "Principles of Industrial Engineering" IIE Annual Conference. Proceedings; Norcross (2017): 890-895.
External links | Industrial engineering | [
"Engineering"
] | 3,370 | [
"Industrial engineering"
] |
23,535,337 | https://en.wikipedia.org/wiki/Coherent%20sheaf%20cohomology | In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaf cohomology is a technique for producing functions with specified properties. Many geometric questions can be formulated as questions about the existence of sections of line bundles or of more general coherent sheaves; such sections can be viewed as generalized functions. Cohomology provides computable tools for producing sections, or explaining why they do not exist. It also provides invariants to distinguish one algebraic variety from another.
Much of algebraic geometry and complex analytic geometry is formulated in terms of coherent sheaves and their cohomology.
Coherent sheaves
Coherent sheaves can be seen as a generalization of vector bundles. There is a notion of a coherent analytic sheaf on a complex analytic space, and an analogous notion of a coherent algebraic sheaf on a scheme. In both cases, the given space comes with a sheaf of rings , the sheaf of holomorphic functions or regular functions, and coherent sheaves are defined as a full subcategory of the category of -modules (that is, sheaves of -modules).
Vector bundles such as the tangent bundle play a fundamental role in geometry. More generally, for a closed subvariety of with inclusion , a vector bundle on determines a coherent sheaf on , the direct image sheaf , which is zero outside . In this way, many questions about subvarieties of can be expressed in terms of coherent sheaves on .
Unlike vector bundles, coherent sheaves (in the analytic or algebraic case) form an abelian category, and so they are closed under operations such as taking kernels, images, and cokernels. On a scheme, the quasi-coherent sheaves are a generalization of coherent sheaves, including the locally free sheaves of infinite rank.
Sheaf cohomology
For a sheaf of abelian groups on a topological space , the sheaf cohomology groups for integers are defined as the right derived functors of the functor of global sections, . As a result, is zero for , and can be identified with . For any short exact sequence of sheaves , there is a long exact sequence of cohomology groups:
If is a sheaf of -modules on a scheme , then the cohomology groups (defined using the underlying topological space of ) are modules over the ring of regular functions. For example, if is a scheme over a field , then the cohomology groups are -vector spaces. The theory becomes powerful when is a coherent or quasi-coherent sheaf, because of the following sequence of results.
Vanishing theorems in the affine case
Complex analysis was revolutionized by Cartan's theorems A and B in 1953. These results say that if is a coherent analytic sheaf on a Stein space , then is spanned by its global sections, and for all . (A complex space is Stein if and only if it is isomorphic to a closed analytic subspace of for some .) These results generalize a large body of older work about the construction of complex analytic functions with given singularities or other properties.
In 1955, Serre introduced coherent sheaves into algebraic geometry (at first over an algebraically closed field, but that restriction was removed by Grothendieck). The analogs of Cartan's theorems hold in great generality: if is a quasi-coherent sheaf on an affine scheme , then is spanned by its global sections, and for . This is related to the fact that the category of quasi-coherent sheaves on an affine scheme is equivalent to the category of -modules, with the equivalence taking a sheaf to the -module . In fact, affine schemes are characterized among all quasi-compact schemes by the vanishing of higher cohomology for quasi-coherent sheaves.
Čech cohomology and the cohomology of projective space
As a consequence of the vanishing of cohomology for affine schemes: for a separated scheme , an affine open covering of , and a quasi-coherent sheaf on , the cohomology groups are isomorphic to the Čech cohomology groups with respect to the open covering . In other words, knowing the sections of on all finite intersections of the affine open subschemes determines the cohomology of with coefficients in .
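For reference, a sketch of the Čech complex in question, for an open cover U = {Ui}i∈I of X indexed by a totally ordered set I and a sheaf F (the notation is supplied here):

$$
\check{C}^{p}(\mathfrak{U},\mathcal{F})=\prod_{i_{0}<\cdots<i_{p}}\mathcal{F}\!\left(U_{i_{0}}\cap\cdots\cap U_{i_{p}}\right),
\qquad
(\delta s)_{i_{0}\cdots i_{p+1}}=\sum_{k=0}^{p+1}(-1)^{k}\,
s_{i_{0}\cdots\widehat{i_{k}}\cdots i_{p+1}}\big|_{U_{i_{0}}\cap\cdots\cap U_{i_{p+1}}}.
$$

The Čech cohomology groups are the cohomology groups of this complex.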
Using Čech cohomology, one can compute the cohomology of projective space with coefficients in any line bundle. Namely, for a field , a positive integer , and any integer , the cohomology of projective space over with coefficients in the line bundle is given by:
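In standard notation, writing O(j) for the j-th twist of the structure sheaf (the notation here is supplied for reference), the result reads

$$
\dim_k H^{i}\!\left(\mathbf{P}^{n}_{k},\mathcal{O}(j)\right)=
\begin{cases}
\binom{n+j}{n} & \text{if } i = 0 \text{ and } j \ge 0,\\
\binom{-j-1}{n} & \text{if } i = n \text{ and } j \le -n-1,\\
0 & \text{otherwise.}
\end{cases}
$$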
In particular, this calculation shows that the cohomology of projective space over with coefficients in any line bundle has finite dimension as a -vector space.
The vanishing of these cohomology groups above dimension is a very special case of Grothendieck's vanishing theorem: for any sheaf of abelian groups on a Noetherian topological space of dimension , for all . This is especially useful for a Noetherian scheme (for example, a variety over a field) and a quasi-coherent sheaf.
Sheaf cohomology of plane curves
Given a smooth projective plane curve C of degree d, the sheaf cohomology H^*(C, O_C) can be readily computed using a long exact sequence in cohomology. First note that for the embedding i : C → P^2 there is the isomorphism of cohomology groups
H^j(P^2, i_*O_C) ≅ H^j(C, O_C)
since i_* is exact. This means that the short exact sequence of coherent sheaves
0 → O(−d) → O → i_*O_C → 0
on P^2, called the ideal sequence, can be used to compute cohomology via the long exact sequence in cohomology. The sequence reads as
0 → H^0(P^2, O(−d)) → H^0(P^2, O) → H^0(C, O_C) → H^1(P^2, O(−d)) → H^1(P^2, O) → H^1(C, O_C) → H^2(P^2, O(−d)) → H^2(P^2, O) → 0,
which can be simplified using the previous computations on projective space. For simplicity, assume the base ring is C (or any algebraically closed field). Then there are the isomorphisms
H^0(C, O_C) ≅ H^0(P^2, O) and H^1(C, O_C) ≅ H^2(P^2, O(−d)),
which shows that H^1(C, O_C) of the curve is a finite-dimensional vector space of rank
(d−1)(d−2)/2.
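For instance (an illustrative special case of the rank formula above): a smooth plane quartic, d = 4, has h^1(C, O_C) = (4−1)(4−2)/2 = 3, so it is a curve of genus 3.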
Künneth Theorem
There is an analogue of the Künneth formula in coherent sheaf cohomology for products of varieties. Given quasi-compact schemes X and Y with affine diagonals over a field k (e.g. separated schemes), and quasi-coherent sheaves F on X and G on Y, there is an isomorphism
H^k(X × Y, p_1^*F ⊗ p_2^*G) ≅ ⊕_{i+j=k} H^i(X, F) ⊗_k H^j(Y, G),
where p_1, p_2 are the canonical projections of X × Y to X and Y.
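As a small illustrative check (using standard facts about elliptic curves, not stated in the original): if X = Y = E is an elliptic curve over k, so that h^0(E, O_E) = h^1(E, O_E) = 1, the formula gives
h^1(E × E, O) = h^1(E, O)·h^0(E, O) + h^0(E, O)·h^1(E, O) = 2.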
Computing sheaf cohomology of curves
In P^1 × P^1, a generic section of the line bundle O(a, b) defines a curve C, giving the ideal sequence
0 → O(−a, −b) → O → O_C → 0.
Then, the long exact sequence reads as
0 → H^0(P^1×P^1, O(−a,−b)) → H^0(P^1×P^1, O) → H^0(C, O_C) → H^1(P^1×P^1, O(−a,−b)) → ⋯,
giving H^1(C, O_C) ≅ H^2(P^1×P^1, O(−a,−b)). Since h^1(C, O_C) is the genus of the curve, we can use the Künneth formula to compute its Betti numbers. This is
H^2(P^1×P^1, O(−a,−b)) ≅ H^1(P^1, O(−a)) ⊗ H^1(P^1, O(−b)),
which is of rank
(a−1)(b−1)
for a, b ≥ 1. In particular, if C is defined by the vanishing locus of a generic section of O(2, k), it is of genus
k − 1,
hence a curve of any genus can be found inside of P^1 × P^1.
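Continuing the example (an illustrative consequence of the bidegree genus formula above): a generic section of O(2, 4) on P^1 × P^1 cuts out a curve of genus (2−1)(4−1) = 3, the same genus as the plane quartic computed earlier.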
Finite-dimensionality
For a proper scheme X over a field k and any coherent sheaf E on X, the cohomology groups H^i(X, E) have finite dimension as k-vector spaces. In the special case where X is projective over k, this is proved by reducing to the case of line bundles on projective space, discussed above. In the general case of a proper scheme over a field, Grothendieck proved the finiteness of cohomology by reducing to the projective case, using Chow's lemma.
The finite-dimensionality of cohomology also holds in the analogous situation of coherent analytic sheaves on any compact complex space, by a very different argument. Cartan and Serre proved finite-dimensionality in this analytic situation using a theorem of Schwartz on compact operators in Fréchet spaces. Relative versions of this result for a proper morphism were proved by Grothendieck (for locally Noetherian schemes) and by Grauert (for complex analytic spaces). Namely, for a proper morphism f: X → Y (in the algebraic or analytic setting) and a coherent sheaf E on X, the higher direct image sheaves R^i f_* E are coherent. When Y is a point, this theorem gives the finite-dimensionality of cohomology.
The finite-dimensionality of cohomology leads to many numerical invariants for projective varieties. For example, if X is a smooth projective curve over an algebraically closed field k, the genus of X is defined to be the dimension of the k-vector space H^1(X, O_X). When k is the field of complex numbers, this agrees with the genus of the space X(C) of complex points in its classical (Euclidean) topology. (In that case, X(C) is a closed oriented surface.) Among many possible higher-dimensional generalizations, the geometric genus of a smooth projective variety X of dimension n is the dimension of H^n(X, O_X), and the arithmetic genus (according to one convention) is the alternating sum
h^n(X, O_X) − h^{n−1}(X, O_X) + ⋯ + (−1)^{n−1} h^1(X, O_X).
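As a quick consistency check with the conventions above: for a curve (n = 1), the alternating sum reduces to the single term h^1(X, O_X), so the geometric genus, the arithmetic genus, and the genus defined earlier all coincide with g = dim H^1(X, O_X).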
Serre duality
Serre duality is an analog of Poincaré duality for coherent sheaf cohomology. In this analogy, the canonical bundle K_X plays the role of the orientation sheaf. Namely, for a smooth proper scheme X of dimension n over a field k, there is a natural trace map H^n(X, K_X) → k, which is an isomorphism if X is geometrically connected, meaning that the base change of X to an algebraic closure of k is connected. Serre duality for a vector bundle E on X says that the product
H^i(X, E) × H^{n−i}(X, K_X ⊗ E^∨) → H^n(X, K_X) → k
is a perfect pairing for every integer i. In particular, the k-vector spaces H^i(X, E) and H^{n−i}(X, K_X ⊗ E^∨) have the same (finite) dimension. (Serre also proved Serre duality for holomorphic vector bundles on any compact complex manifold.) Grothendieck duality theory includes generalizations to any coherent sheaf and any proper morphism of schemes, although the statements become less elementary.
For example, for a smooth projective curve X over an algebraically closed field k, Serre duality implies that the dimension of the space H^0(X, Ω^1) of 1-forms on X is equal to the genus of X (the dimension of H^1(X, O_X)).
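For another illustration (standard facts, using the projective-space computation above): on P^n the canonical bundle is K = O(−n−1), so Serre duality identifies H^n(P^n, O(j)) with the dual of H^0(P^n, O(−n−1−j)); for example, h^2(P^2, O(−6)) = h^0(P^2, O(3)) = 10.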
GAGA theorems
GAGA theorems relate algebraic varieties over the complex numbers to the corresponding analytic spaces. For a scheme X of finite type over C, there is a functor from coherent algebraic sheaves on X to coherent analytic sheaves on the associated analytic space Xan. The key GAGA theorem (by Grothendieck, generalizing Serre's theorem on the projective case) is that if X is proper over C, then this functor is an equivalence of categories. Moreover, for every coherent algebraic sheaf E on a proper scheme X over C, the natural map
H^i(X, E) → H^i(Xan, Ean)
of (finite-dimensional) complex vector spaces is an isomorphism for all i. (The first group here is defined using the Zariski topology, and the second using the classical (Euclidean) topology.) For example, the equivalence between algebraic and analytic coherent sheaves on projective space implies Chow's theorem that every closed analytic subspace of CPn is algebraic.
Vanishing theorems
Serre's vanishing theorem says that for any ample line bundle L on a proper scheme X over a Noetherian ring, and any coherent sheaf E on X, there is an integer m_0 such that for all m ≥ m_0, the sheaf E ⊗ L^⊗m is spanned by its global sections and has no cohomology in positive degrees.
Although Serre's vanishing theorem is useful, the inexplicitness of the number m_0 can be a problem. The Kodaira vanishing theorem is an important explicit result. Namely, if X is a smooth projective variety over a field of characteristic zero, L is an ample line bundle on X, and K_X the canonical bundle, then
H^i(X, K_X ⊗ L) = 0
for all i > 0. Note that Serre's theorem guarantees the same vanishing for large powers of L. Kodaira vanishing and its generalizations are fundamental to the classification of algebraic varieties and the minimal model program. Kodaira vanishing fails over fields of positive characteristic.
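A minimal illustration in dimension one (a standard consequence, not stated in the original): for a smooth projective curve X and an ample line bundle L (that is, deg L > 0), Serre duality gives
H^1(X, K_X ⊗ L) ≅ H^0(X, L^∨)^*,
which vanishes because L^∨ has negative degree and hence no nonzero global sections.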
Hodge theory
The Hodge theorem relates coherent sheaf cohomology to singular cohomology (or de Rham cohomology). Namely, if X is a smooth complex projective variety, then there is a canonical direct-sum decomposition of complex vector spaces:
H^a(X, C) ≅ ⊕_{p+q=a} H^q(X, Ω^p)
for every a. The group on the left means the singular cohomology of X(C) in its classical (Euclidean) topology, whereas the groups on the right are cohomology groups of coherent sheaves, which (by GAGA) can be taken either in the Zariski or in the classical topology. The same conclusion holds for any smooth proper scheme X over C, or for any compact Kähler manifold.
For example, the Hodge theorem implies that the definition of the genus of a smooth projective curve X as the dimension of H^1(X, O_X), which makes sense over any field k, agrees with the topological definition (as half the first Betti number) when k is the complex numbers. Hodge theory has inspired a large body of work on the topological properties of complex algebraic varieties.
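In the curve case this reads (a standard instance of the decomposition above):
H^1(X, C) ≅ H^0(X, Ω^1) ⊕ H^1(X, O_X),
and both summands have dimension g, so the first Betti number is 2g.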
Riemann–Roch theorems
For a proper scheme X over a field k, the Euler characteristic of a coherent sheaf E on X is the integer
χ(X, E) = Σ_i (−1)^i dim_k H^i(X, E).
The Euler characteristic of a coherent sheaf E can be computed from the Chern classes of E, according to the Riemann–Roch theorem and its generalizations, the Hirzebruch–Riemann–Roch theorem and the Grothendieck–Riemann–Roch theorem. For example, if L is a line bundle on a smooth proper geometrically connected curve X of genus g over a field k, then
χ(X, L) = deg(L) + 1 − g,
where deg(L) denotes the degree of L.
When combined with a vanishing theorem, the Riemann–Roch theorem can often be used to determine the dimension of the vector space of sections of a line bundle. Knowing that a line bundle on X has enough sections, in turn, can be used to define a map from X to projective space, perhaps a closed immersion. This approach is essential for classifying algebraic varieties.
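A small worked instance of this strategy (illustrative, using standard facts about curves): if deg(L) > 2g − 2 = deg(K_X) on a smooth projective curve X, then H^1(X, L) ≅ H^0(X, K_X ⊗ L^∨)^* = 0 by Serre duality, so Riemann–Roch determines the space of sections exactly:
dim H^0(X, L) = deg(L) + 1 − g.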
The Riemann–Roch theorem also holds for holomorphic vector bundles on a compact complex manifold, by the Atiyah–Singer index theorem.
Growth
Dimensions of cohomology groups on a scheme of dimension n can grow at most like a polynomial of degree n.
Let X be a projective scheme of dimension n and D a divisor on X. If F is any coherent sheaf on X then
h^i(X, F ⊗ O_X(mD)) = O(m^n)
for every i.
For the higher cohomology of a nef divisor D on X,
h^i(X, F ⊗ O_X(mD)) = O(m^{n−1}) for every i > 0.
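For instance (an illustrative consequence on a surface, assuming D is ample rather than merely nef): for n = 2, asymptotic Riemann–Roch gives h^0(X, O_X(mD)) ~ (D²/2)·m², of maximal quadratic growth, while h^1 and h^2 of O_X(mD) vanish for large m by Serre's vanishing theorem, consistent with the bounds above.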
Applications
Given a scheme X over a field k, deformation theory studies the deformations of X to infinitesimal neighborhoods. The simplest case, concerning deformations over the ring of dual numbers, examines whether there is a scheme XR over Spec R such that the special fiber
X_R ×_{Spec R} Spec k
is isomorphic to the given X. Coherent sheaf cohomology with coefficients in the tangent sheaf T_X controls this class of deformations of X, provided X is smooth. Namely,
isomorphism classes of deformations of the above type are parametrized by the first coherent cohomology H^1(X, T_X),
there is an element (called the obstruction class) in H^2(X, T_X) which vanishes if and only if a deformation of X over Spec R as above exists.
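A classical example (standard, not from the original text): for X = P^1, the tangent sheaf is T = O(2), and H^1(P^1, O(2)) = 0, so P^1 has no non-trivial first-order deformations; likewise H^2 vanishes for dimension reasons, so there are no obstructions.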
Notes
References
External links
Algebraic geometry
Cohomology theories
Sheaf theory
Vector bundles
Topological methods of algebraic geometry
Complex manifolds | Coherent sheaf cohomology | [
"Mathematics"
] | 2,996 | [
"Mathematical structures",
"Fields of abstract algebra",
"Sheaf theory",
"Category theory",
"Topology",
"Algebraic geometry"
] |
41,909,048 | https://en.wikipedia.org/wiki/Toponome | The toponome is the spatial network code of proteins and other biomolecules in morphologically intact cells and tissues. It is mapped and decoded by imaging cycler microscopy (ICM) in situ able to co-map many thousand supermolecules in one sample (tissue section or cell sample at high subcellular resolution). The term "toponome" is derived from the ancient Greek nouns "topos" (τόπος: "place, position") and "nomos" (νόμος: "law"), and the term "toponomics" refers to the study of the toponome. It was introduced by Walter Schubert in 2003. It addresses the fact that the network of biomolecules in cells and tissues follows topological rules enabling coordinated actions. For example, the cell surface toponome provides the spatial protein interaction code for the execution of a cell movement, a "code of conduct". This is intrinsically dependent on the specific spatial arrangement of similar and dissimilar compositions of supermolecules (compositional periodicity) with a specific spatial order along a cell surface membrane. This spatial order is periodically repeated when the cell tries to enter the exploratory state from the spherical state (spatial periodicity). This spatial toponome code is hierarchically organized with lead biomolecule(s), anti-colocated (absent) biomolecule(s) and wildcard molecules which are variably associated with the lead biomolecule(s). It has been shown that inhibition of lead molecule(s) in a surface membrane leads to disassembly of the corresponding biomolecular network and loss of function.
Citations
Systems biology
Bioinformatics
Omics
Topology | Toponome | [
"Physics",
"Mathematics",
"Engineering",
"Biology"
] | 371 | [
"Biological engineering",
"Bioinformatics",
"Omics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Systems biology"
] |
41,909,058 | https://en.wikipedia.org/wiki/Toponomics | Toponomics is a discipline in systems biology, molecular cell biology, and histology concerning the study of the toponome of organisms. It is the field of study that purposes to decode the complete toponome in health and disease (the human toponome project)—which is the next big challenge in human biotechnology after having decoded the human genome.
A toponome is the spatial network code of proteins and other biomolecules in morphologically intact cells and tissues.
The spatial organization of biomolecules in cells is directly revealed by imaging cycler microscopy with parameter- and dimension-unlimited functional resolution. The resulting toponome structures are hierarchically organized and can be described by a three symbol code.
Etymology
The terms toponome and toponomics were introduced in 2003 by Walter Schubert based on observations with imaging cycler microscopes (ICM).
Toponome is derived from the ancient Greek nouns topos (τόπος, 'place, position') and nomos (νόμος, 'law'). Hence toponomics is a descriptive term addressing the fact that the spatial network of biomolecules in cells follows topological rules enabling coordinated actions.
References
Systems biology
Omics
Topology | Toponomics | [
"Physics",
"Mathematics",
"Biology"
] | 257 | [
"Bioinformatics",
"Omics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Systems biology"
] |
41,911,440 | https://en.wikipedia.org/wiki/Thermodynamic%20solar%20panel | A thermodynamic solar panel is a type of air source heat pump. Instead of a large fan to take energy from the air, it has a flat plate collector. This means the system gains energy from the sun as well as the ambient air. Thermodynamic water heaters use a compressor to transfer the collected heat from the panel to the hot water system using refrigerant fluid that circulates in a closed cycle.
Renewable Heat Incentive
In the UK, thermodynamic solar panels cannot be used to claim the Renewable Heat Incentive. This is due to the lack of technical standards for the testing and installation. The UK Microgeneration Certification Scheme is working to develop a testing standard, either based on MIS 3001 or MIS 3005 or a brand new scheme document if appropriate.
Performance
Lab testing has been carried out by Das Wärmepumpen-Testzentrum Buchs (WPZ) in Buchs, Switzerland, on an Energi Eco 200esm/i thermodynamic solar panel system. This showed a coefficient of performance (COP) of 2.8 or 2.9 (depending on tank volume).
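For context (a general definition, not specific to the WPZ test): the coefficient of performance relates useful heat output to electrical input,
COP = Q_heat / W_electric,
so a COP of 2.8 means that each kWh of electricity consumed delivers about 2.8 kWh of heat to the hot water system.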
In the UK, the first independent test is underway at Narec Distributed Energy. So far data is available for January to April 2014. As with the Carnot cycle, the achievable efficiency is strongly dependent on the temperatures on both sides of the system.
References
External links
Narec Distributed Energy thermodynamic solar panel test data
Sustainable technologies
Heat pumps
Heating
Energy conservation
Building engineering | Thermodynamic solar panel | [
"Engineering"
] | 317 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
41,912,043 | https://en.wikipedia.org/wiki/Manin%20conjecture | In mathematics, the Manin conjecture describes the conjectural distribution of rational points on an algebraic variety relative to a suitable height function. It was proposed by Yuri I. Manin and his collaborators in 1989 when they initiated a program with the aim of describing the distribution of rational points on suitable algebraic varieties.
Conjecture
Their main conjecture is as follows.
Let V be a Fano variety defined over a number field K, let H be a height function which is relative to the anticanonical divisor, and assume that the set V(K) of rational points is Zariski dense in V. Then there exists a non-empty Zariski open subset U ⊆ V such that the counting function of K-rational points of bounded height, defined by
N_{U,H}(B) = #{x ∈ U(K) : H(x) ≤ B}
for B > 0, satisfies
N_{U,H}(B) ~ c·B·(log B)^{ρ−1}
as B → ∞. Here ρ is the rank of the Picard group of V and c is a positive constant which later received a conjectural interpretation by Peyre.
Manin's conjecture has been decided for special families of varieties, but is still open in general.
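A standard illustrative case (not discussed explicitly above): for V = P^n over Q with the height relative to the anticanonical bundle O(n+1), the Picard group has rank ρ = 1, so the predicted asymptotic is N(B) ~ c·B with no logarithmic factor; this is consistent with Schanuel's classical count of rational points of bounded height on projective space.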
References
Conjectures
Diophantine geometry
Unsolved problems in number theory | Manin conjecture | [
"Mathematics"
] | 205 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
46,530,282 | https://en.wikipedia.org/wiki/Robotaxi | A robotaxi, also known as robot taxi, robo-taxi, self-driving taxi or driverless taxi, is an autonomous car (SAE automation level 4 or 5) operated for a ridesharing company.
Some studies have hypothesized that robotaxis operated in an autonomous mobility on demand (AMoD) service could be one of the most rapidly adopted applications of autonomous cars at scale and a major mobility solution, especially in urban areas. Moreover, they could have a very positive impact on road safety, traffic congestion and parking. Robotaxis could also reduce urban pollution and energy consumption, since these services will most probably use electric cars and, for most rides, less vehicle size and range is necessary compared to individually owned vehicles. The expected reduction in the number of vehicles means less embodied energy; however, energy consumption for redistribution of empty vehicles must be taken into account. Robotaxis would reduce operating costs by eliminating the need for a human driver, which might make them an affordable form of transportation and increase the popularity of transportation-as-a-service (TaaS) as opposed to individual car ownership. Such developments could lead to job destruction and new challenges concerning operator liabilities. In 2023, some robotaxis caused congestion when they blocked roads due to lost cellular connectivity, and others failed to properly yield to emergency vehicles. There has been only one fatality associated with a robotaxi: a pedestrian who was hit by an Uber test vehicle in 2018.
Predictions of the widespread and rapid introduction of robotaxis – by as early as 2018 – have not been realized. There are a number of trials underway in cities around the world, some of which are open to the public and generate revenue. However, as of 2021, questions have been raised as to whether the progress of self-driving technology has stalled and whether issues of social acceptance, cybersecurity and cost have been addressed.
Status
Vehicle costs
So far all the trials have involved specially modified passenger cars with space for two or four passengers sitting in the back seats behind a partition. LIDAR, cameras and other sensors have been used on all vehicles. The cost of early vehicles was estimated in 2020 at up to US$400,000 due to custom manufacture and specialized sensors. However, the prices of some components such as LIDAR have fallen significantly. In January 2021, Waymo stated its costs were approximately $180,000 per vehicle, and its operating cost at $0.30 per mile (~$0.19 per km), well below Uber and Lyft, but this excludes the cost of fleet technicians and customer support. Baidu announced in June 2021 it would start producing robotaxis for 500,000 yuan ($77,665) each. Tesla has discussed a sub-$25,000 Tesla Robotaxi, and as of 2023 is designing an assembly line that will accommodate the vehicle.
Passenger tests
Several companies are testing robotaxi services, especially in the United States and in China. All operate only in a geo-fenced area. Service areas for robotaxis, also known as the Objective Design Domain (ODD), are specially designated zones where robotaxis can safely provide service. As of 2024, Baidu's Apollo Go had carried the most passengers, over 6 million by April 2024. Other providers in China include AutoX, DiDi, Pony.ai, WeRide, all operating in 10 or more cities. In the US, Waymo is the most prominent provider, operating in San Francisco, Phoenix, and Los Angeles. A 2024 study of Waymo indicated an 85% reduction in injury crashes per mile driven.
Separate to these efforts have been trials of larger shared autonomous vehicles on fixed routes with designated stops, able to carry between 6 and 10 passengers. These shuttle buses operate at low speeds.
Current obstacles to robotaxis
At present, it is not only technical issues that hinder the widespread use of robotaxis, but also social ones. Consumers' concerns about the reliability and safety of self-driving taxis are a major obstacle: system failures during service and the perceived risk of accidents can deter potential users. In addition, consumers still have doubts about whether robotaxis can cope with complex urban environments or severe weather conditions.
Licenses
In February 2018 Arizona granted Waymo a Transportation Network Company permit.
In February 2022 the California Public Utilities Commission (CPUC) issued Drivered Deployment permits to Cruise and Waymo to allow passenger service in autonomous vehicles with a safety driver present in the vehicle. These carriers must hold a valid California Department of Motor Vehicles (DMV) Deployment permit and meet the requirements of the CPUC Drivered Deployment program. In June 2022, Cruise received approval to operate a commercial robotaxi service in San Francisco.
In April 2022, China gave Baidu and Pony.ai its first permits to deploy robotaxis without safety drivers on open roads within a 23 square mile area in the Beijing Economic-Technological Development Area.
In August 2023, the CPUC approved granting additional operating authority for Cruise LLC and Waymo LLC to conduct commercial passenger service using vehicles without safety drivers in San Francisco. The approval includes the ability for both companies to charge fares for rides at any time of day.
History
First trials
In August 2016, MIT spinoff NuTonomy was the first company to make robotaxis available to the public, starting to offer rides with a fleet of 6 modified Renault Zoes and Mitsubishi i-MiEVs in a limited area in Singapore. NuTonomy later signed three significant partnerships to develop its robotaxi service: with Grab, Uber’s rival in Southeast Asia, with Groupe PSA, which is supposed to provide the company with Peugeot 3008 SUVs and the last one with Lyft to launch a robotaxi service in Boston.
In August 2017, Cruise Automation, a self-driving startup acquired by General Motors in 2016, launched the beta version of a robotaxi service for its employees in San Francisco using a fleet of 46 Chevrolet Bolt EVs.
Testing and revenue service timeline
Trials listed have a safety driver unless otherwise indicated. The commencement of a trial does not mean it is still active.
August 2016 - NuTonomy launched its autonomous taxi service using a fleet of 6 modified Renault Zoes and Mitsubishi i-MiEVs in Singapore
September 2016 - Uber started allowing a select group of users in Pittsburgh, Pennsylvania to order robotaxis from a fleet of 14 vehicles. Two Uber engineers were always in the front seats of each vehicle.
March 2017 - An Uber self-driving car was hit and flipped on its side by another vehicle that failed to yield. In October 2017, Uber started using only one test driver.
April 2017 - Waymo started a large scale robotaxi tests in a geo-fenced suburb of Phoenix, Arizona with a driver monitoring each vehicle. The service area was about . In November 2017 some testing without drivers began. Commercial operations began in November 2019.
August 2017 - Cruise Automation launched the beta version robotaxi service for 250 employees (10% of its staff) in San Francisco using a fleet of 46 vehicles.
March 2018 - A woman attempting to cross a street in Tempe, Arizona at night was struck and killed by an Uber vehicle while the onboard safety driver was watching videos. Uber later restarted testing, but only during daylight hours and at slower speeds.
August 2018 - Yandex began a trial with two vehicles in Innopolis, Russia
December 2018 - Waymo started self-driving taxi service, dubbed Waymo One, in Arizona for paying customers.
April 2019 - Pony.ai launched a pilot system covering in Guangzhou, China for employees and invited affiliated, serving pre-defined pickup points.
November 2019 - WeRide RoboTaxi began a pilot service with 20 vehicles in Guangzhou and Huangpu over an area of
November 2019 - Pony.ai started a three-month trial in Irvine, California with 10 cars and stops for pickup and drop off.
April 2020 - Baidu opened its trial of 45 vehicles in Changsha, China to public users for free trips, serving 100 designated spots on a set network. Services operation from 9:20am to 4:40pm with a safety-driver and a "navigator", allowing space for two passengers in the back.
June 2020 - DiDi robotaxi service begins operation in Shanghai in an area that covers Shanghai's Automobile Exhibition Center, the local business districts, subway stations and hotels in the downtown area.
August 2020. Baidu began offering free trips, with app bookings, on its trial in Cangzhou, China which serves 55 designated spots over pre-defined routes.
December 2020. AutoX (which is backed by Alibaba Group) launched a non-public trial of driverless robotaxis in Shenzhen with 25 vehicles. The service was then opened to the public in January 2021.
February 2021 - Waymo One began limited robotaxi service in a number of suburbs of San Francisco for a selection of its own employees. In August 2021 the public was invited to apply to use service, with places limited. A safety driver is present in each vehicles. The number of vehicles involved has not been disclosed.
May 2021 - Baidu commences a commercial robotaxi service with ten Apollo Go vehicles in a area with eight pickup and drop-off stops, in Shougang Park in western Beijing
July 2021 - Baidu opened a pilot program to the public in Guangzhou with a fleet of 30 sedans serving in the Huangpu district. 200 designated spots are served between 9:30am and 11pm every day.
July 2021 - DeepRoute.ai began a free-of-charge trial with 20 vehicles in downtown Shenzhen serving 100 pickup and dropoff locations.
February 2022 - Cruise opened up its driverless cars in San Francisco to the public.
February 2023 - Zoox, the self-driving startup owned by Amazon, carried passengers in its robotaxi for the first time in Foster City, California.
August 2023 - Waymo and Cruise were authorized by the CPUC to collect fares for driverless rides in San Francisco.
December 2023, China finalized regulations on commercial robotaxi operation. Roboshuttles or robotrucks are required to maintain in-car drivers. Robotaxis can use remote operators. The robotaxi:remote operator ratio cannot exceed 3:1. Operators must be certified. Accident reporting rules specify required data.
April 2024, Baidu Apollo, AutoX, Pony.ai, Didi and WeRide each operated in 10 to 25 cities, with fleets hundreds of robotaxis. Baidu Apollo had traveled over without a major accident.
July 2024 - In Wuhan, Baidu's Apollo Go robotaxis' push toward commercialisation received massive attention on social media. Their low prices (base fares start as low as 4 yuan/55 cents, compared with 18 yuan/$2.48 for a taxi driven by a human) were welcomed by some, while the rapid adoption of driverless taxis rattled China's gig-economy workforce. Their popularity boosted Baidu's shares.
August 2024 - In most areas of Wuhan, Baidu’s Apollo Go robotaxis now operate fully autonomously without any safety personnel on board. The company recorded 899,000 rides in the second quarter of 2024, bringing the total number of rides to 7 million as of July 28, 2024.
Notable commercial ventures
Uber Advanced Technology Group
Uber began development of self-driving vehicles in early 2015. In September 2016, the company started a trial allowing a select group of users of its ride-hailing service in Pittsburgh to order robotaxis from a fleet of 14 modified Ford Fusions. The test extended to San Francisco with modified Volvo XC90s before being relocated to Tempe, Arizona in February 2017.
In March 2017, one of Uber's robotaxis crashed in self-driving mode in Arizona, which led the company to suspend its tests before resuming them a few days later. In March 2018, Uber paused self-driving vehicle testing after the death of Elaine Herzberg in Tempe, Arizona, a pedestrian struck by an Uber vehicle while attempting to cross the street, while the onboard engineer was watching videos. Uber settled with the victim's family.
In January 2021, Uber sold its self driving division, Advanced Technologies Group (ATG), to Aurora Innovation for $4 billion while also investing $400 million into Aurora for a 26% ownership stake.
Waymo
In early 2017, Waymo, the Google self-driving car project which became an independent company in 2016, started a large public robotaxi test in Phoenix using 100 and then 500 more Chrysler Pacifica Hybrid minivans provided by Fiat Chrysler Automobiles as part of a partnership between the two companies. Waymo also signed a deal with Lyft to collaborate on self-driving cars in May 2017. In November 2017, Waymo revealed it had begun to operate some of its automated vehicles in Arizona without a safety driver behind the wheel.
And in December 2018, Waymo started self-driving taxi service, dubbed Waymo One, in Arizona for paying customers.
By November 2019, the service was operating autonomous vehicles without a safety backup driver. The autonomous taxi service was operating in San Francisco as of 2021. In December 2022, the company applied for a permit to operating self-driving taxi rides in California without a human operator present as backup.
Baidu Apollo
In September 2019, Baidu's autonomous driving unit Apollo launched Apollo Go robotaxi service, with an initial fleet of 45 autonomous vehicles. Apollo Go has since expanded to more than 10 Chinese cities.
In August 2022, Baidu achieved a landmark victory in the race for autonomous vehicles by securing the first permits in China to deploy fully driverless taxis in the cities of Wuhan and Chongqing.
In May 2024, Baidu unveiled the Apollo ADFM, claimed to be the world's first Level 4 autonomous driving foundation model, along with the sixth-generation Apollo Go robotaxi, which can be produced for under $30,000. The company also said by April 2024, Apollo had accumulated over 100 million kilometers of autonomous driving without major accidents.
In August 2024, Apollo Go has deployed 400 robotaxis operating fully autonomously without any safety personnel on board in Wuhan, offering 24/7 service to 9 million residents. Baidu aims for Apollo Go to achieve operational unit breakeven in Wuhan by the end of 2024.
GM Cruise
In January 2020, GM subsidiary Cruise exhibited the Cruise Origin, a Level 4–5 driverless vehicle, intended to be used for a ride hailing service.
In February 2022, Cruise started driverless taxi service in San Francisco.
Also in February 2022, Cruise petitioned U.S. regulators (NHTSA) for permission to build and deploy a self-driving vehicle without human controls.
The petition remains pending.
In April 2022, their partner Honda unveiled its Level 4 mobility service partners to roll out in central Tokyo in the mid-2020s using the Cruise Origin.
There are signs that autonomously operated Cruise vehicles may interfere with emergency vehicles, and at least one such vehicle has collided with a fire truck.
On 2 October 2023, a Cruise vehicle operating autonomously (without driver supervision) collided with a pedestrian. Instead of stopping immediately, the vehicle misidentified the collision mechanics and presumed it had been struck from the side; it consequently dragged the pedestrian under the car until it came to a stop at the side of the road. Because the vehicle's response was deemed unacceptable and the company appeared to have withheld details of the crash from regulators, California regulators revoked the company's license to operate these cars. Cruise recalled all of its 950 vehicles in November 2023.
These decisions were enacted in parallel with the exposure of safety risks, identified earlier within the Cruise company, regarding proper vehicle behavior around children and around construction sites.
Tesla
Since 2019, Tesla's CEO Elon Musk has incorrectly predicted each year that Tesla would have robotaxis on the road within 1 to 2 years. He was expected to announce the plans for Tesla's robotaxi on 8 August 2024, but the event was moved to 10 October 2024. During that event Tesla demonstrated two new vehicles, the two-seater Tesla Cybercab and the 14-seater (plus standing room) Tesla Robovan, which can carry up to 20 passengers. The company also reiterated that all of their other models of cars and pickup trucks would be usable as robotaxis after a software update and regulatory approval, which they expected at the earliest in California and Texas in 2025.
Other developments
Many automakers announced their plans in 2015–2018 to develop robotaxis before 2025 and specific partnerships have been signed between automakers, technology providers and service operators, including:
The startup Zoox announcing in 2015 its ambition to build a robotaxi from scratch.
BMW and Fiat Chrysler Automobiles partnering in 2016 with Intel and Mobileye to develop robotaxis by 2021.
Baidu partnering in 2016 with Nvidia to develop autonomous cars and robotaxis.
Daimler AG teaming up with Bosch in 2017 to develop the software for a robotaxi service by 2025.
The Renault-Nissan-Mitsubishi Alliance partnering in 2017 with Transdev and DeNA to develop robotaxi services within 10 years.
Honda releasing in 2017 an autonomous concept car, NeuV, that aims at being a personal robotaxi.
Ford Motor's plan in 2017 to develop a robotaxi by 2021 through partnerships with several startups.
Ford Motor investing $1 billion in the startup Argo AI in 2017 to develop autonomous cars and robotaxis; the startup was disbanded in 2022 by Ford.
Lyft and Ford partnering in 2017 to add Ford's self-driving cars to Lyft's ride-hailing network; Google leading a $1 billion investment in 2017 in Lyft which could support Waymo's robotaxi strategy; in 2021, Lyft's self-driving division was sold to Toyota.
Delphi buying the startup NuTonomy for $400 million in 2017.
Parsons Corporation announcing in 2017 a partnership with automated mobility operating system company Renovo.auto to deploy and scale AMoD services.
Didi Chuxing partnering in 2018 with the Renault-Nissan-Mitsubishi Alliance and other automakers to explore the future launch of robotaxi services in China.
See also
Self-driving car
References
Automotive technologies
Robotics
Transport culture
Taxis | Robotaxi | [
"Physics",
"Engineering"
] | 3,798 | [
"Transport culture",
"Self-driving cars",
"Automation",
"Physical systems",
"Transport",
"Automotive engineering",
"Robotics"
] |
46,530,680 | https://en.wikipedia.org/wiki/Carleman%27s%20equation | In mathematics, Carleman's equation is a Fredholm integral equation of the first kind with a logarithmic kernel. Its solution was first given by Torsten Carleman in 1922.
The equation is
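A commonly cited form of the equation (reconstructed from standard references such as Polyanin and Manzhirov; the notation here, with φ for the unknown function and f for the given right-hand side, matching the later reference to the special case f(t) = 1, is an assumption rather than a verbatim quotation):
∫_a^b ln|x − t| φ(t) dt = f(x),  a ≤ x ≤ b.
The condition b − a ≠ 4 appears to enter through a factor ln((b − a)/4) in the closed-form solution, which vanishes when b − a = 4.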
The solution for b − a ≠ 4 is
If b − a = 4 then the equation is solvable only if the following condition is satisfied
In this case the solution has the form
where C is an arbitrary constant.
For the special case f(t) = 1 (in which case it is necessary to have b − a ≠ 4), useful in some applications, we get
References
Carleman, T. (1922). Über die Abelsche Integralgleichung mit konstanten Integrationsgrenzen. Math. Z., 15, 111–120.
Gakhov, F. D., Boundary Value Problems [in Russian], Nauka, Moscow, 1977
A.D. Polyanin and A.V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998.
Fredholm theory
Integral equations | Carleman's equation | [
"Mathematics"
] | 215 | [
"Mathematical objects",
"Integral equations",
"Equations"
] |
46,532,824 | https://en.wikipedia.org/wiki/Intermetallic%20particle | Intermetallic particles form during solidification of metallic alloys.
Aluminium alloys
Al-Si-Cu-Mg alloys
Al-Si-Cu-Mg alloys form plate-like Al5FeSi intermetallic phases, as well as phases such as Al8Fe2Si and Al2Cu. The size and morphology of these intermetallic phases control the mechanical properties of these alloys, especially strength and ductility. The size of these phases depends on the secondary dendrite arm spacing of the primary phase in the microstructure, as well as on the Si content of the alloy.
Phases and crystal structures
Magnesium alloys
WE 43
An in-situ synchrotron diffraction experiment on the Electron alloy WE43 (Mg4Y3Nd) shows that this alloy forms the following intermetallic phases: Mg12Nd, Mg14Y4Nd, and Mg24Y5.
Phases and crystal structures
AZ 91
References | Intermetallic particle | [
"Physics",
"Chemistry",
"Materials_science"
] | 191 | [
"Inorganic compounds",
"Metallurgy",
"Inorganic compound stubs",
"Alloys",
"Intermetallics",
"Condensed matter physics"
] |
46,535,730 | https://en.wikipedia.org/wiki/FMN-binding%20fluorescent%20protein | A FMN-binding fluorescent protein (FbFP), also known as a LOV-based fluorescent protein, is a small, oxygen-independent fluorescent protein that binds flavin mononucleotide (FMN) as a chromophore.
They were developed from blue-light receptors (so-called LOV domains) found in plants and various bacteria. They complement the GFP derivatives and homologues and are particularly characterized by their independence from molecular oxygen and their small size. FbFPs absorb blue light and emit light in the cyan-green spectral range.
Development
LOV-domains are a sub-class of PAS domains and were first identified in plants as part of Phototropin, which plays an essential role in the plant's growth towards light. They noncovalently bind Flavin mononucleotide (FMN) as cofactor. Due to the bound FMN LOV-domains exhibit an intrinsic fluorescence, which is however very weak. Upon illumination with blue light, LOV-domains undergo a photocyle, during which a covalent bond is formed between a conserved cysteine-residue and the FMN. This causes a conformational change in the protein that is necessary for signal propagation and also leads to the loss of fluorescence. The covalent bond is energetically unfavorable and is cleaved with a protein specific time constant ranging from seconds to hours.
In order to make better use of the fluorescence properties of these proteins, the natural photocycle of these LOV-domains was abolished by exchanging the conserved cysteine against an alanine on a genetic level.
Thus, upon blue light irradiation, the protein remains in the fluorescent state and also exhibits a brighter fluorescence.
The first FbFPs that were generated in this fashion were subsequently further optimized using different methods of mutagenesis. Especially the brightness but also the photostability of the proteins were enhanced and their spectral characteristics altered.
Spectral characteristics
Typically FbFPs have an excitation maximum at a wavelength of approximately 450 nm (blue light) and a second distinct excitation peak around 370 nm (UV-A light). The main emission peak is at approx. 495 nm, with a shoulder around 520 nm. One variant of Pp2FbFP (Q116V) exhibits a 10 nm blue shift in both the excitation and emission spectra. Rationally designed variants of iLOV and CagFbFP exhibit 6 and 7 nm red shifts, respectively.
Photophysical properties
The photophysical properties of the FbFPs are determined by the chromophore itself and its chemical surrounding in the protein. The extinction coefficient (ε) is around 14,200 M−1cm−1 at 450 nm for all described FbFPs, which is slightly higher than that of free FMN (ε = 12,200 M−1cm−1). The fluorescence quantum yield (Φ) varies significantly between different FbFPs and ranges from 0.2 (phiLOV2.1) to 0.44 (EcFbFP and iLOV). This represents an almost twofold increase compared to free FMN (Φ = 0.25).
The difference from free FMN is even more significant in the case of photostability, the protein's resistance to bleaching during prolonged and intense irradiation with blue light. Based on the bleaching half-time (the time it takes for the initial fluorescence intensity to drop to 50% upon illumination), the genetically engineered variant phiLOV2.1 is approximately 40x as stable as free FMN. This stabilizing effect can be observed for almost all FbFPs, although it is usually in the range of 5x–10x.
The average fluorescence lifetime of FbFPs is in the range of 3.17 ns (Pp2FbFP) to 5.7 ns (e.g. EcFbFP). These lifetimes are thereby much longer than those of GFP derivatives, which are usually between 1.5 and 3 ns.
FbFPs are therefore well suited as donor domains in Förster resonance energy transfer (FRET) systems in conjunction with GFP derivatives like YFP as acceptor domains.
Advantages and disadvantages
The main advantage of FbFPs over GFP is their independence of molecular oxygen. Since all GFP derivatives and homologues require molecular oxygen for the maturation of their chromophore, these fluorescent proteins are of limited use under anaerobic or hypoxic conditions.
Since FbFPs bind FMN as chromophore, which is synthesized independently of molecular oxygen, their fluorescence signal does not differ between aerobic and anaerobic conditions.
Another advantage is the small size of FbFPs, which is typically between 100 and 150 amino acids. This is about half the size of GFP (238 amino acids). It could for example be shown that this renders them superior tags for monitoring tobacco mosaic virus infections in tobacco leaves.
Due to their extraordinary long average fluorescence lifetime of up to 5.7 ns they are also very well suited for the use as donor domains in FRET systems in conjunction with e.g. YFP (see photophysical properties). A fusion of EcFbFP and YFP was e.g. used to develop the first genetically encoded fluorescence biosensor for oxygen (FluBO)
The main disadvantage compared to GFP variants is their lower brightness (the product of ε and Φ). The commonly used EGFP (ε = 55,000 M−1cm−1; Φ = 0.60 ) for example is approximately five times as bright as EcFbFP.
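A quick check of this comparison using the figures quoted above: brightness is the product ε·Φ, so
EGFP: 55,000 M−1cm−1 × 0.60 = 33,000
EcFbFP: ≈14,200 M−1cm−1 × 0.44 ≈ 6,250,
a ratio of roughly 5.3, in line with the statement that EGFP is approximately five times as bright.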
Another disadvantage of the FbFPs is the lack of color variants to tag and distinguish multiple proteins in a single cell or tissue. The largest spectral shift reported for FbFPs so far is 10 nm. Although this variant (Pp2FbFP Q116V) can be visually distinguished from the others with the human eye, the spectral differences are too small for fluorescence microscopy filters.
References
Recombinant proteins
Protein imaging
Protein methods
Cell imaging
Fluorescent proteins
Bioluminescence | FMN-binding fluorescent protein | [
"Chemistry",
"Biology"
] | 1,261 | [
"Biochemistry methods",
"Luminescence",
"Biotechnology products",
"Protein methods",
"Protein biochemistry",
"Fluorescent proteins",
"Recombinant proteins",
"Microscopy",
"Biochemistry",
"Bioluminescence",
"Cell imaging",
"Protein imaging"
] |
48,108,635 | https://en.wikipedia.org/wiki/Refuge%20Water%20Supply%20Program | The Refuge Water Supply Program (RWSP) is administered by the United States Department of the Interior jointly by the Bureau of Reclamation and Fish and Wildlife Service and tasked with acquiring a portion and delivering a total of 555,515 acre feet (AF) of water annually to 19 specific protected wetland areas in the Central Valley of California as mandated with the passing of the Central Valley Project Improvement Act signed on October 30, 1992, by President George H. W. Bush.
Background
The Central Valley Project (CVP)
The Central Valley (California) once contained over 4 million acres of naturally occurring wetlands that provided habitat: land, food, and shelter for resident and migratory birds and wildlife. The Central Valley, historically and today, constitutes a significant portion of the Pacific Flyway used by millions of migrating birds each year.
The Central Valley's winter flood-prone geography and summer dry climate were natural constraints to permanent human settlement. The Central Valley Project (CVP); an interconnected engineered system of reservoirs, aqueducts, and flood control measures, constructed by the US Bureau of Reclamation; managed flooding and provided reliable water supplies year-round with highly managed and calculated water storage, release and conveyance infrastructure. Along with the construction of similar facilities by others, the CVP's flood control and water delivery systems created a stable environment suitable for permanent human development in the Central Valley.
Controlling and manipulating the water supply for human benefit dramatically and quickly transformed the landscape. All but 400,000 acres of natural wetlands were transformed for development, a reduction in wetland area of 90%. The loss of wetlands concentrated the migrating and resident wildlife on less land and required their sharing of and dependence on less water. This unhealthy crowding caused bird populations to decline as they suffered from disease and the lack of necessary food, shelter, and water. Compounding the problem, human activity, in some cases, polluted the waters that flowed into the remaining wetlands. The Kesterson Reservoir disaster provided a clear indication that wildlife was suffering in the modified Central Valley and helped inspire actions to mitigate the CVP's effects on bird and fish populations.
Central Valley Project Improvement Act
The Central Valley Project Improvement Act (CVPIA) was signed into law on October 30, 1992, as mitigation and remedy for some of the CVP's adverse environmental effects, specifically, to increase the population and improve the health of the Central Valley's anadromous fish and increase the acreage and health of wetlands used by migratory birds and other resident wildlife. The CVPIA is managed by the United States Department of Interior through collaboration between the Bureau of Reclamation and the Fish and Wildlife Service.
Refuge Water Supply Program
CVPIA Section 3406(d) mandates that 555,515 AF of water of suitable quality be delivered to maintain and improve wetland habitat areas in 19 wetland areas specifically identified in the Report on Refuge Water Supply Investigations (March 1989) and the San Joaquin Basin Action Plan/Kesterson Mitigation Action Plan (December 1989), collectively referred to as 'the Refuges'. These Refuges comprise nearly 200,000 acres of wetlands and as such represent almost 50% of the wetlands remaining in California's Central Valley. Reclamation created the Refuge Water Supply Program (RWSP) to manage and administer the activities necessary to ensure the acquisition and delivery of this water as required under this section. Like the CVPIA, the RWSP is administered jointly by the Bureau of Reclamation (from the Mid-Pacific Regional Office in Sacramento, CA) and the Fish and Wildlife Service (from the Pacific Southwest Regional Office in Sacramento, CA)
The Refuges
National Wildlife Refuges
The following Refuges, benefiting from CVPIA legislation, are administered by the Department of the Interior, Fish and Wildlife Service as National Wildlife Refuges. In some instances, the specific Refuge named in the CVPIA is currently a constituent part of an FWS administrative complex of refuges that includes several such refuges and/or other non-benefiting lands.
The following CVPIA benefiting refuges are components of the Sacramento National Wildlife Refuge Complex: Sacramento National Wildlife Refuge, Delevan National Wildlife Refuge, Colusa National Wildlife Refuge, Sutter National Wildlife Refuge.
The following CVPIA benefiting Refuges are components of the San Luis National Wildlife Refuge Complex: San Luis Unit, West Bear Creek Unit, East Bear Creek Unit, Kesterson Unit, Freitas Unit and Merced National Wildlife Refuge. The refuges currently identified as 'Units' were separate refuges at the time the legislation was written and passed.
The following CVPIA benefiting Refuges are components of the Kern National Wildlife Refuge Complex: Kern National Wildlife Refuge and Pixley National Wildlife Refuge
California State Wildlife Areas
The following Refuges, benefiting from CVPIA legislation, are administered by the State of California, Department of Fish and Wildlife as Wildlife Areas. In some instances, the specific Refuge named in the CVPIA is currently a part of a DFW administrative unit that includes several such refuges and/or other non-benefiting lands.
Gray Lodge Wildlife Area; Los Banos Wildlife Area (portion); the following currently designated 'units' of the North Grasslands Wildlife Area, China Island Unit and Salt Slough Unit; and the Volta Wildlife Area (portion). Note: 'portion', is used to indicate that the current existing wildlife area boundary is larger than it was in the defining report. CVP water obligated for the RWSP is only permitted to be used on that portion of the wildlife area specifically described in the defining report and legislation.
The Grasslands Resource Conservation District
The Grassland Resource Conservation District (GRCD) comprises 75,000 acres of land including: the Grassland Water District (GWD) which provides water to 165 hunting clubs; the Kesterson and Freitas Units of the San Luis National Wildlife Refuge (NWR); Volta Wildlife Management Area (WMA); Los Banos WMA; and privately owned wetlands. As such, the GRCD includes 60,000 acres of privately owned hunting clubs, 12,000 acres of land owned by the Federal and state governments, and 3,000 acres of cropland. The federal and state Refuges identified in the CVPIA legislation that are within the GRCD do not share GRCD's water allocation.
The Water
The water associated with the program is either categorized as Level 2 or Incremental Level 4 and there are different supply quantities and characteristics of each. The goal of the program is to provide the 'Full Level 4' water quantity which is the cumulative sum of the full quantity of each category for each Refuge. Program-wide, typically between 75% and 85% of Full Level 4 is delivered annually.
The water the RWSP provides accounts for varying portions of an individual Refuge's total water supply. Because some Refuges do not have adequate conveyance capacity to them (Pixley NWR, Merced NWR, Sutter NWR, East Bear Creek Unit and Gray Lodge WA) delivered water supplies vary annually with hydrological and climatic conditions. Construction projects enabling these Refuges to receive water supplies have been identified and in some cases are progressing but funding limitations will likely cause this condition to persist.
Full Level 4
The amount of water identified as being required for the optimal management of a designated wetland is defined as that refuge's 'Full Level 4' quantity. The 555,515AF of water the RWSP is tasked with providing is the sum of all of the specified refuges' Full Level 4 quantities. Full Level 4 is a contractually obligated amount of water that consists of 2 blocks, Level 2 and Incremental Level 4. Each refuge has a 'Full Level 4' quantity which is the sum of its total Level 2, and total Incremental Level 4 quantity of water. These amounts are provided in the table "RWSP Contract Water Quantities".
Level 2 Water
Each of the 19 benefiting Refuges has its own Level 2 water quantity which is based on the average water supplies necessary to maintain the wetland areas in existence prior to the passing of the CVPIA or equate to its prior dependably delivered quantity (regardless of water quality) and collectively totals 422,251 AF. For this reason, the delivery of a Refuge's Level 2 allocation is considered to be essential for a Refuge's successful operation.
For those refuges that have the infrastructure to receive it, Level 2 water comes from the CVP, meaning a fixed portion of the federal water supply stored and delivered by the CVP Project is automatically dedicated annually for Refuge use and thus provides a perennially reliable water source. The RWSP manages and funds several long-term contracts (5 – 40 years) with a variety of water agencies to convey this water from its CVP source to a Refuges' boundaries.
It is important to note that the individual Refuges determine the amount of this water to be delivered, per month, at their discretion. This is a unique condition because most CVP water contracts impose limitations on both the total monthly delivery amount and the months in which deliveries may occur.
Incremental Level 4 Water
The incremental difference between the Refuges' Full Level 4 allocation and its Level 2 allocation defines Incremental Level 4 (IL4) and represents the quantity of water necessary for Refuges to ideally manage all lands identified in the refuge reports for the benefit of waterfowl. In most cases, IL4 water is needed to fully support an expanded wetland footprint. Like Level 2 water, each refuge has its own Incremental Level 4 quantity but unlike Level 2 supplies, this water is not dedicated from CVP supply and must be acquired from other sources, such as willing sellers or from those relinquishing their federal or state supplies. The RWSP manages and funds contracts of varied duration to acquire and convey this water from its source to the refuges' boundaries. The suppliers, availability, and cost of water available as Incremental Level 4 are less predictable than Level 2 supplies because of unpredictable region-wide water needs and usage; the potential lack of sufficient conveyance infrastructure; inconsistent annual natural conditions, specifically rainfall; and occasional water quality concerns.
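Using the program-wide totals given in this article, the aggregate Incremental Level 4 quantity is the difference
555,515 AF − 422,251 AF = 133,264 AF,
so that Full Level 4 = Level 2 + Incremental Level 4 at both the individual refuge and the program level.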
Additionally, Individual refuges retain the right to refuse to accept water that the RWSP has the ability to acquire if it is not of suitable quality or does not benefit the refuge at the time it is available. Thus, water supplied delivered in a year may be less than those that were potentially available.
RWSP Program Components
The RWSP's efforts are concentrated into 3 components, Water Acquisition, Facility Construction and Water Conveyance.
Water Acquisitions
CVPIA Section 3406(d)(2) requires the acquisition of IL4 Water for critical wetland habitat supporting resident and migratory waterfowl, threatened and endangered species, and wetland-dependent aquatic biota on the Refuges. These supplies are ideally used to allow refuges to optimally manage the preserved land for the improvement of waterfowl populations.
IL4 water consists of long-term and annual purchases from willing sellers of both surface and groundwater supplies; supplies at no cost, e.g., water exchanges; water delivered under a mitigation agreement with the Federal Energy Regulatory Commission and 'permanent water', water that the program has purchased a permanent right to take under specific conditions.
North Valley Regional Recycled Water Program
California, and the Central Valley, experienced persistent drought conditions for much of the early part of the twenty-first century. With global warming expected to alter historic conditions, the long-term availability of IL4 supplies is questionable. To meet its acquisition and delivery mandate the RWSP was challenged to find reliable and affordable sources for Incremental 4 water to deliver to the refuges well into this uncertain future. The North Valley Regional Recycled Water Program will make treated, recycled water from the Cities of Turlock and Modesto available for re-use at the Refuges. The RWSP took an active role in this pioneering program's development and in return, in 2016, signed a 40-year contract for water deliveries from it.
Facility Construction
CVPIA Section 3406(d)(5) provides for facilities construction to benefit the mandate of supplying refuge water. This component funds projects that identify, construct and/or maintain infrastructure projects supporting the long-term delivery of firm, reliable water supplies to the boundary of the Refuges. The RWSP's goal is to have the necessary facilities in place allowing for the delivery of Full L4 water supplies to the Refuges that meet their timing and scheduling requirements. A total of 46 new or modifications to major structures and/or actions were identified to provide needed capacity for the delivery of Full Level 4 surface supplies to these refuges.
Water Conveyance (Wheeling)
CVPIA Sections 3406(d)(1),(2) and (5) of the CVPIA describe the functions and responsibilities of the Refuge Water Conveyance (Wheeling) Component. The use of a water conveyance facility by someone other than the owner or operator to transport water is referred to as "wheeling." The conveyance component is responsible for ensuring the delivery of a refuge's level 2 and acquired water supplies through contracts and agreements that allow for these water supplies to move from source to refuge destination.
The reservoirs that hold water destined for a refuge are connected to the refuges by a network of channels, owned and operated by multiple entities. Similar to a network of roadways there are conveyance channels of many kinds (with names like aqueduct, canal, slough, and ditch) and sizes. Some channels are free to use by the RWSP, like rivers, and some require payment, such as those built and maintained by a water district. The RWSP negotiated contracts coordinating the delivery of water. Reclamation currently has nine long-term (15–50 years) conveyance agreements that are administered by the RWSP, one Service 40-year conveyance agreement, cooperative agreements to reimburse delivering entities for costs of conveying L2 and IL4 water supplies through federal, state, and private water distribution systems to the refuges and agreements to reimburse costs for groundwater pumping in instances where groundwater is pumped at the refuge itself. Deliveries are monitored throughout the system as water enters and exits metered channels.
Water that is transferred any distance suffers what is termed 'conveyance losses' which means that the amount of water released at the start is not the same amount that ultimately arrives. The difference between what is sent versus what is received is conveyance loss and can be the result of evaporation or seepage (soaking into the land). In some cases, water travels over 300 miles to reach its final refuge destination.
Accomplishments and Benefits to Nature
The CVPIA was enacted to increase the population and improve the health of the Central Valley's anadromous fish and increase the acreage and health of wetlands used by migratory birds and other resident wildlife both of which suffered as a result of the construction of the CVP. The RWSP focuses on the health of wetlands by acquiring and conveying necessary water supplies. Since CVPIA was enacted numerous biological benefits have resulted from supplying the Refuges with a reliable year-round water supply that adequately meets the refuge-specific water delivery schedules developed for individual wetland management requirements. Prior to CVPIA, refuge managers had to concentrate the majority of their water use in the fall and early winter months, when Central Valley waterfowl numbers peaked. With the passage of CVPIA, the habitat calendar was expanded to the full year. These increased and reliable supplies of water enable managers to enhance existing habitats, expand their wetland base, and provide increased benefits to a greater number of wetland-dependent species.
Habitat and Biodiversity
Refuge activities are water dependent. Having a firm and adequate supply of water available when most beneficial throughout the year enables managers to implement improved management techniques and allows them to better manage lands and activities. This increased efficiency and reliability have both increased wetland acreage and improved wetland health by affording refuge managers with the ability to manage a diverse mix of habitat types that more fully satisfied the year-round environmental requirements of many wildlife species.
Benefits of Water Reliability
Before the CVPIA, refuge managers concentrated the majority of water use in the fall and early winter months (October - December), when Central Valley waterfowl numbers peaked. With the passage of CVPIA, the habitat calendar was expanded to the full year, allowing refuge managers to provide habitat to an extended group of migratory birds as well as other wildlife and to grow plant materials that provide a food supply or habitat for food sources. Under CVPIA programs, moist soil food plant irrigations are carried out since water is reliably available during August and September to satisfy the needs of the early arriving migrant waterfowl and shorebirds, maintenance flows are applied throughout the winter months to improve water quality and decrease avian disease outbreaks, and during spring and summer, when wetland habitat can be particularly limited by hydrology, water provides critical nesting habitat for waterfowl and colonial birds as well as habitat for resident wildlife and their young. Wintering wildlife also benefits from this habitat diversity, as seasonal wetlands are now managed to coincide with peak migration times of shorebirds and waterfowl.
With the increased frequency and acreage of irrigated moist soil food plants, there has been a doubling in desirable plant biomass, which equates to more high-quality, high-energy food available to waterfowl. In some refuges, waterfowl food production has increased tenfold. Timely de-watering and irrigation promote the germination and irrigation of important moist-soil food plants, such as swamp timothy and watergrass. These plants provide a high-energy food source through both their seeds and associated invertebrate communities. The increase in supply reliability allows wetland managers to lower water depths to make seeds and invertebrates available without the fear of having wetlands completely evaporate.
Benefits of Increased Wetland Acreage
Waterfowl, shorebirds, and other wetland-dependent wildlife have benefited as their habitats have been expanded and enhanced. Central Valley wetlands receiving CVP water supplies have increased by more than 20,000 acres since the passage of the CVPIA while tens of thousands of acres of habitat have been enhanced. This wetland acreage helps explain the 75% decrease in waterfowl disease-related mortality in some wetland areas as the birds spread out over a greater area.
Benefits of Improved Water Quality
At least as important as the increase in acreage is the improvement in the quality of previously existing wetlands that has resulted from the delivery of suitable water. Increasing water supplies to wetlands also has the effect of improving water quality, both on and off refuges. For example, providing firm, quality water supplies has reduced the exposure of waterfowl and shorebirds spending the winter in the Grasslands area of the valley to contaminants. In a report on selenium in aquatic birds from the Central Valley, 1986-1994, FWS scientists noted that application of freshwater resulted in the decline of selenium contamination. Increased maintenance flows through refuge ponds further improve water quality and reduce avian disease. The ability to battle avian disease outbreaks, such as botulism and cholera, is greatly assisted by applying additional water and creating a "flow-through" system of water delivery and drainage. This "flow-through" also helps deal with wetland areas high in salinity, which are often lower in productivity and diversity. CVPIA water allows wetland managers to "flush" salts from wetland basins and improve soil quality.
Migratory Waterfowl
Since the passage of the CVPIA, Sacramento Valley areas receiving CVP water have seen a 20% increase in waterfowl use and a significant decline in water-borne wildlife diseases. Waterfowl use in the early fall has recorded increases of 800 percent, from 2 million to over 18 million waterfowl use days per year.
White-faced Ibis and Sandhill Cranes are excellent examples of how the availability of adequate water supplies enabled refuge managers to provide habitat for endemic species that had been in severe decline for decades. Improved water supplies first led to an increase in the numbers of frogs, snails, aquatic insects, and small fish. This, in turn, provided the ibis and cranes with habitat for late-spring and summer nesting, essential components for these species. The increased and improved breeding habitat resulted in a steady upswing in bird numbers.
Other Wildlife
Wetlands are diverse ecosystems. The increased wetland acreage and improved water quality have benefited more than bird populations. Improved water supplies led to an increase in the numbers of frogs, snails, aquatic insects, and small fish. Habitat that is now available during the months of August and early September is utilized by resident wildlife and their young during a critical time of the year when wetland habitat is otherwise reduced. Introducing water for semi-permanent and permanent wetland habitats in the spring and summer directly benefits the recovery of special status species such as the giant garter snake and tri-colored blackbirds.
Budget
The CVPIA is a federal program and receives funding every year through Congressional appropriation. It would require between $50 and $60 million annually to achieve the goals of the RWSP. The 2017 budget provided approximately $22 million.
References
1992 establishments in California
Joint ventures
United States Bureau of Reclamation
United States Fish and Wildlife Service
Central Valley Project | Refuge Water Supply Program | [
"Engineering"
] | 4,373 | [
"Irrigation projects",
"Central Valley Project"
] |
33,658,868 | https://en.wikipedia.org/wiki/Point%20of%20care | Clinical point of care (POC) is the point in time when clinicians deliver healthcare products and services to patients at the time of care.
Clinical documentation
Clinical documentation is a record of the critical thinking and judgment of a health care professional, facilitating consistency and effective communication among clinicians.
Documentation performed at the time of clinical point of care can be conducted using paper or electronic formats. This process aims to capture medical information pertaining to patient's healthcare needs. The patient's health record is a legal document that contains details regarding patient's care and progress. The types of information captured during the clinical point of care documentation include the actions taken by clinical staff including physicians and nurses, and the patient's healthcare needs, goals, diagnosis and the type of care they have received from the healthcare providers.
Such documentation provides evidence regarding safe, effective and ethical care and establishes accountability for healthcare institutions and professionals. Furthermore, accurate documents provide a rigorous foundation for conducting appropriate quality of care analysis that can facilitate better health outcomes for patients. Thus, regardless of the format used to capture the clinical point of care information, these documents are imperative in providing safe healthcare. Also, it is important to note that electronic formats of clinical point of care documentation are not intended to replace existing clinical processes but to enhance the current clinical point of care documentation process.
Traditional approach
One of the major responsibilities for nurses in healthcare settings is to forward information about the patient's needs and treatment to other healthcare professionals. Traditionally, this has been done verbally. However, today information technology has made its entrance into the healthcare system, and verbal transfer of information is becoming obsolete. In the past few decades, nurses have witnessed a change toward a more independent practice with explicit knowledge of nursing care. The obligation to document at the point of care applies not only to the interventions performed, medical and nursing, but also to the decision-making process, explaining why a specific action was prompted by the nurse. The main benefit of point of care documentation is advancing structured communication between healthcare professionals to ensure the continuity of patient care. Without a structured care plan that is closely followed, care tends to become fragmented.
Electronic documentation
Point of care (POC) documentation is the ability for clinicians to document clinical information while interacting with and delivering care to patients. The increased adoption of electronic health records (EHR) in healthcare institutions and practices creates the need for electronic POC documentation through the use of various medical devices. POC documentation is meant to assist clinicians by minimizing time spent on documentation and maximizing time for patient care. The type of medical devices used is important in ensuring that documentation can be effectively integrated into the clinical workflow of a particular clinical environment.
Devices
Mobile technologies such as personal digital assistants (PDAs), laptop computers and tablets enable documentation at the point of care. The selection of a mobile computing platform is contingent upon the amount and complexity of data. To ensure successful implementation, it is important to examine the strengths and limitations of each device. Tablets are more functional for high-volume and complex data entry, and are favoured for their screen size and capacity to run more complex functions. PDAs are more functional for low-volume and simple data entry and are preferred for their light weight, portability and long battery life.
Electronic medical record
An electronic medical record (EMR) contains patient's current and past medical history. The types of information captured within this document include patient's medical history, medication allergies, immunization statuses, laboratory and diagnostic test images, vital signs and patient demographics. This type of electronic documentation enables healthcare providers to use evidence-based decision support tools and share the document via the Internet. Moreover, there are two types of software included within EMR: practice management and EMR clinical software. Consequently, the EMR is able to capture both the administrative and clinical data.
Computer physician order entries
A computerized physician order entry allows medical practitioners to input medical instructions and treatment plans for the patients at the point of care. CPOE also enable healthcare practitioners to use decision support tools to detect medication prescription errors and override non-standard medication regimes that may cause fatalities. Furthermore, embedded algorithms may be chosen for people of certain age and weight to further support the clinical point of care interaction. Overall, such systems reduce errors due to illegible writing on paper and transcribing errors.
Mobile EMRs mHealth
Mobile devices and tablets provide accessibility to the Electronic Medical Record during the clinical point of care documentation process. Mobile technologies such as Android phones, iPhones, BlackBerrys, and tablets feature touchscreens to further support the ease of use for the physicians. Furthermore, mobile EMR applications support workflow portability needs due to which clinicians can document patient information at the patient's bedside.
Advantages
Workflow
The use of POC documentation devices changes clinical practice by affecting workflow processes and communication. With the availability of POC documentation devices, for example, nurses can avoid having to go to their deskspace and wait for a desktop computer to become available. They are able to move from patient to patient, eliminating steps in work process altogether. Furthermore, redundant tasks are avoided as data is captured directly from the particular encounter without the need for transcription.
Safety
A delay between face-to-face patient care and clinical documentation can cause corruption of data, leading to errors in treatment. Giving clinicians the ability to document clinical information when and where care is being delivered allows for accuracy and timeliness, contributing to increased patient safety in a dynamic and highly interruptive environment. Point of care documentation can reduce errors in a variety of clinical tasks including diagnostics, medication prescribing and medication administration.
Collaboration and communication
Ineffective communication among patient care team members is a root cause of medical errors and other adverse events. Point of care documentation facilitates the continuity of high quality care and improves communication between nurses and other healthcare providers. Proper documentation at the point of care can optimize flow of information among various clinicians and enhances communication. Clinicians can avoid going to a workstation and can access patient information at the bedside. It will also enable the timeliness of documentation, which is important to prevent adverse events.
Nurse-patient time
Literature from various studies shows that approximately 25-50% of a nurse's shift is spent on documentation. As most documentation is done in the traditional manner, that is using paper and pen, enabling a POC documentation device could potentially allow 25-50% more time at the bedside. The use of speech recognition and information extraction to support nurses in POC documentation has also been studied, with encouraging results: 5,276 of 7,277 test words were recognised correctly, and information extraction achieved an F1 score of 0.86 for the category of irrelevant text and a macro-averaged F1 of 0.70 over the remaining 35 non-empty categories of the nursing handover form on 101 test documents.
Disadvantages
Complexities
Numerous point of care documentation systems produce data redundancies, inconsistencies and irregularities of charting. Some electronic formats are repetitious and time-consuming. Moreover, point of care documentation varies from one setting to another without a standardized pattern, and there are no guidelines or standards for documenting. Inaccessibility also causes time to be lost in searching for charts. These issues all lead to wasted time, increased costs and uncomfortable charting. A study that adopted both qualitative and quantitative methods confirmed these complexities in point of care documentation. The study also categorized these complexities into three themes: disruption of documentation; incompleteness in charting; and inappropriate charting. As a result, these barriers limit nurses' competence, motivation and confidence; lead to ineffective nursing procedures; and result in inadequate nursing auditing, supervision and staff development.
Privacy and security
When examining the use of any type of technology in healthcare, it is important to remember that the technology holds private personal health information. As such, security measures need to be in place to minimize the risk of breaches of privacy and patient confidentiality. Depending on the jurisdiction, it is important to ensure that legislative standards are met. According to Collier (2012), privacy and confidentiality breaches are rising, largely owing to the lack of appropriate encryption technology. For the successful implementation of any health technology, it is vital to ensure that adequate security measures, such as strong encryption, are used.
Countries
Canada
Ontario
The adoption of electronic formats of clinical point of care documentation is particularly low in Ontario. Consequently, provincial leaders such as eHealth Ontario and Ontario MD provide financial and technical assistance in supporting electronic documentation of clinical point of care through EMR. Furthermore, more than six million Ontarians currently have an EMR; this number was expected to increase to 10 million by 2012. Continued efforts are being made to adopt charting of patient information in electronic format to improve the quality of clinical point of care services.
See also
Adoption of Electronic Medical Records in U.S. Hospitals
Personal health record
Point-of-care testing
References
Practice of medicine
Health care
Health informatics | Point of care | [
"Biology"
] | 1,812 | [
"Health informatics",
"Medical technology"
] |
25,029,425 | https://en.wikipedia.org/wiki/Concolic%20testing | Concolic testing (a portmanteau of concrete and symbolic, also known as dynamic symbolic execution) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution (testing on particular inputs) path. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs (test cases) with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness.
A description and discussion of the concept was introduced in "DART: Directed Automated Random Testing" by Patrice Godefroid, Nils Klarlund, and Koushik Sen. The paper "CUTE: A concolic unit testing engine for C", by Koushik Sen, Darko Marinov, and Gul Agha, further extended the idea to data structures, and first coined the term concolic testing. Another tool, called EGT (renamed to EXE and later improved and renamed to KLEE), based on similar ideas was independently developed by Cristian Cadar and Dawson Engler in 2005, and published in 2005 and 2006. PathCrawler first proposed to perform symbolic execution along a concrete execution path, but unlike concolic testing PathCrawler does not simplify complex symbolic constraints using concrete values. These tools (DART and CUTE, EXE) applied concolic testing to unit testing of C programs and concolic testing was originally conceived as a white box improvement upon established random testing methodologies. The technique was later generalized to testing multithreaded Java programs with jCUTE, and unit testing programs from their executable codes (tool OSMOSE). It was also combined with fuzz testing and extended to detect exploitable security issues in large-scale x86 binaries by Microsoft Research's SAGE.
The concolic approach is also applicable to model checking. In a concolic model checker, the model checker traverses states of the model representing the software being checked, while storing both a concrete state and a symbolic state. The symbolic state is used for checking properties on the software, while the concrete state is used to avoid reaching unreachable states. One such tool is ExpliSAT by Sharon Barner, Cindy Eisner, Ziv Glazberg, Daniel Kroening and Ishai Rabinovitz
Birth of concolic testing
Implementation of traditional symbolic execution based testing requires the implementation of a full-fledged symbolic interpreter for a programming language. Concolic testing implementors noticed that implementation of full-fledged symbolic execution can be avoided if symbolic execution can be piggy-backed with the normal execution of a program through instrumentation. This idea of simplifying implementation of symbolic execution gave birth to concolic testing.
Development of SMT solvers
An important reason for the rise of concolic testing (and more generally, symbolic-execution based analysis of programs) in the decade since it was introduced in 2005 is the dramatic improvement in the efficiency and expressive power of SMT solvers. The key technical developments that led to the rapid development of SMT solvers include combination of theories, lazy solving, DPLL(T) and the huge improvements in the speed of SAT solvers. SMT solvers that are particularly tuned for concolic testing include Z3, STP, Z3str2, and Boolector.
Example
Consider the following simple example, written in C:
void f(int x, int y) {
int z = 2*y;
if (x == 100000) {
if (x < z) {
assert(0); /* error */
}
}
}
Simple random testing, trying random values of x and y, would require an impractically large number of tests to reproduce the failure.
We begin with an arbitrary choice for x and y, for example x = y = 1. In the concrete execution, line 2 sets z to 2, and the test in line 3 fails since 1 ≠ 100000. Concurrently, the symbolic execution follows the same path but treats x and y as symbolic variables. It sets z to the expression 2y and notes that, because the test in line 3 failed, x ≠ 100000. This inequality is called a path condition and must be true for all executions following the same execution path as the current one.
Since we'd like the program to follow a different execution path on the next run, we take the last path condition encountered, x ≠ 100000, and negate it, giving x = 100000. An automated theorem prover is then invoked to find values for the input variables x and y given the complete set of symbolic variable values and path conditions constructed during symbolic execution. In this case, a valid response from the theorem prover might be x = 100000, y = 0.
Running the program on this input allows it to reach the inner branch on line 4, which is not taken since 100000 (x) is not less than 0 (z = 2y). The path conditions are x = 100000 and x ≥ z. The latter is negated, giving x < z. The theorem prover then looks for x, y satisfying x = 100000, x < z, and z = 2y; for example, x = 100000, y = 50001. This input reaches the error.
Algorithm
Essentially, a concolic testing algorithm operates as follows:
Classify a particular set of variables as input variables. These variables will be treated as symbolic variables during symbolic execution. All other variables will be treated as concrete values.
Instrument the program so that each operation which may affect a symbolic variable value or a path condition is logged to a trace file, as well as any error that occurs.
Choose an arbitrary input to begin with.
Execute the program.
Symbolically re-execute the program on the trace, generating a set of symbolic constraints (including path conditions).
Negate the last path condition not already negated in order to visit a new execution path. If there is no such path condition, the algorithm terminates.
Invoke an automated satisfiability solver on the new set of path conditions to generate a new input. If there is no input satisfying the constraints, return to step 6 to try the next execution path.
Return to step 4.
There are a few complications to the above procedure:
The algorithm performs a depth-first search over an implicit tree of possible execution paths. In practice programs may have very large or infinite path trees – a common example is testing data structures that have an unbounded size or length. To prevent spending too much time on one small area of the program, the search may be depth-limited (bounded).
Symbolic execution and automated theorem provers have limitations on the classes of constraints they can represent and solve. For example, a theorem prover based on linear arithmetic will be unable to cope with the nonlinear path condition xy = 6. Any time that such constraints arise, the symbolic execution may substitute the current concrete value of one of the variables to simplify the problem. An important part of the design of a concolic testing system is selecting a symbolic representation precise enough to represent the constraints of interest.
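As a rough illustration of this loop (not how DART, CUTE, or any real tool is implemented), the C sketch below hand-instruments the example function f from above and replays the three runs described in the Example section. In a real tool the instrumentation is automatic and the new inputs come from an SMT solver such as Z3 or STP; here solve_negated is a hard-coded stand-in that simply returns the inputs a solver would produce for this particular example, and path conditions are recorded as plain strings.

/* Hand-instrumented concolic loop for the example function f(x, y) above.
   The "solver" is a hard-coded stand-in for a real SMT solver.            */
#include <stdio.h>

static int  n_conds;                         /* path conditions this run      */
static char conds[8][32];

static void record(const char *c) {
    snprintf(conds[n_conds++], sizeof conds[0], "%s", c);
}

/* instrumented copy of f: logs a path condition at every branch and
   returns 1 if the error location (assert(0) in the original) is reached */
static int f_instrumented(int x, int y) {
    int z = 2 * y;                           /* symbolically: z = 2*y         */
    if (x == 100000) {
        record("x == 100000");
        if (x < z) { record("x < z"); return 1; }
        record("x >= z");
    } else {
        record("x != 100000");
    }
    return 0;
}

/* Stand-in for the constraint solver: returns an input satisfying the path
   conditions with the last one negated.  Hard-coded for this example only. */
static void solve_negated(int iteration, int *x, int *y) {
    if (iteration == 0) { *x = 100000; *y = 0;     }   /* negate x != 100000 */
    else                { *x = 100000; *y = 50001; }   /* negate x >= z      */
}

int main(void) {
    int x = 1, y = 1;                        /* arbitrary starting input      */
    for (int it = 0; it < 3; it++) {
        n_conds = 0;
        int err = f_instrumented(x, y);
        printf("run %d: x=%d y=%d -> %s\n", it, x, y,
               err ? "error path reached" : "no error");
        for (int i = 0; i < n_conds; i++)
            printf("  path condition: %s\n", conds[i]);
        if (err) break;                      /* bug found; stop               */
        solve_negated(it, &x, &y);           /* negate last path condition    */
    }
    return 0;
}

Running the sketch reproduces the sequence from the Example section: (1, 1), then (100000, 0), then (100000, 50001), which reaches the error.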
Commercial success
Symbolic-execution based analysis and testing, in general, has witnessed a significant level of interest from industry. Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open-source tools, and use the STP constraint solver) are widely used in many companies including Micro Focus Fortify, NVIDIA, and IBM. Increasingly these technologies are being used by many security companies and hackers alike to find security vulnerabilities.
Limitations
Concolic testing has a number of limitations:
If the program exhibits nondeterministic behavior, it may follow a different path than the intended one. This can lead to nontermination of the search and poor coverage.
Even in a deterministic program, a number of factors may lead to poor coverage, including imprecise symbolic representations, incomplete theorem proving, and failure to search the most fruitful portion of a large or infinite path tree.
Programs which thoroughly mix the state of their variables, such as cryptographic primitives, generate very large symbolic representations that cannot be solved in practice. For example, the condition if(sha256_hash(input) == 0x12345678) { ... } requires the theorem prover to invert SHA256, which is an open problem.
Tools
pathcrawler-online.com is a restricted version of the current PathCrawler tool which is publicly available as an online test-case server for evaluation and education purposes.
jCUTE is available as a binary, for Java, under a research-use-only license from the University of Illinois at Urbana–Champaign.
CREST is an open-source solution for C that replaced CUTE (modified BSD license).
KLEE is an open source solution built on-top of the LLVM infrastructure (UIUC license).
CATG is an open-source solution for Java (BSD license).
Jalangi is an open-source concolic testing and symbolic execution tool for JavaScript. Jalangi supports integers and strings.
Microsoft Pex, developed at Microsoft Rise, is publicly available as a Microsoft Visual Studio 2010 Power Tool for the .NET Framework.
Triton is an open-source concolic execution library for binary code.
CutEr is an open-source concolic testing tool for the Erlang functional programming language.
Many tools, notably DART and SAGE, have not been made available to the public at large. Note however that for instance SAGE is "used daily" for internal security testing at Microsoft.
References
Automated theorem proving
Software testing | Concolic testing | [
"Mathematics",
"Engineering"
] | 2,021 | [
"Software testing",
"Automated theorem proving",
"Mathematical logic",
"Computational mathematics",
"Software engineering"
] |
25,034,192 | https://en.wikipedia.org/wiki/Chemical%20reaction%20engineering | Chemical reaction engineering (reaction engineering or reactor engineering) is a specialty in chemical engineering or industrial chemistry dealing with chemical reactors. Frequently the term relates specifically to catalytic reaction systems where either a homogeneous or heterogeneous catalyst is present in the reactor. Sometimes a reactor per se is not present by itself, but rather is integrated into a process, for example in reactive separations vessels, retorts, certain fuel cells, and photocatalytic surfaces. The issue of solvent effects on reaction kinetics is also considered as an integral part.
Origin of chemical reaction engineering
Chemical reaction engineering as a discipline started in the early 1950s under the impulse of researchers at the Shell Amsterdam research center and the University of Delft. The term chemical reaction engineering was apparently coined by J.C. Vlugter while preparing the 1st European Symposium on Chemical Reaction Engineering, which was held in Amsterdam in 1957.
Discipline
Chemical reaction engineering aims at studying and optimizing chemical reactions in order to define the best reactor design. Hence, the interactions of flow phenomena, mass transfer, heat transfer, and reaction kinetics are of prime importance in order to relate reactor performance to feed composition and operating conditions. Although originally applied to the petroleum and petrochemical industries, its general methodology combining reaction chemistry and chemical engineering concepts allows optimization of a variety of systems where modeling or engineering of reactions is needed. Chemical reaction engineering approaches are indeed tailored for the development of new processes and the improvement of existing technologies.
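As a hedged illustration of how kinetics and operating conditions combine to determine reactor performance (a standard textbook relationship, not drawn from the references below): for an isothermal, constant-density, first-order reaction with rate constant k and space time τ, the ideal continuous stirred-tank reactor (CSTR) achieves a conversion X = kτ/(1 + kτ), while the ideal plug flow reactor (PFR) achieves X = 1 − exp(−kτ). The short C program below evaluates both; the values k = 0.5 s⁻¹ and τ = 4 s are arbitrary example inputs.

/* Conversion of a first-order reaction (rate = k*C_A) in two ideal reactors,
   as a function of the rate constant k [1/s] and the space time tau [s].    */
#include <math.h>
#include <stdio.h>

static double conversion_cstr(double k, double tau) {
    return k * tau / (1.0 + k * tau);        /* CSTR design equation, 1st order */
}

static double conversion_pfr(double k, double tau) {
    return 1.0 - exp(-k * tau);              /* PFR design equation, 1st order  */
}

int main(void) {
    double k = 0.5, tau = 4.0;               /* arbitrary example operating point */
    printf("CSTR conversion: %.3f\n", conversion_cstr(k, tau));   /* 0.667 */
    printf("PFR  conversion: %.3f\n", conversion_pfr(k, tau));    /* 0.865 */
    return 0;
}

For the same k and τ the plug flow reactor reaches the higher conversion for positive-order kinetics, which is one reason the choice of reactor type is treated as a design variable alongside feed composition and operating conditions.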
Books
The Engineering of Chemical Reactions (2nd Edition), Lanny Schmidt, 2004, Oxford University Press
Chemical Reaction Engineering (3rd Edition), Octave Levenspiel, 1999, John Wiley & Sons
Elements of Chemical Reaction Engineering (4th Edition), H. Scott Fogler, 2005, Prentice Hall
Chemical Reactor Analysis and Design (2nd Edition), Gilbert F. Froment and Kenneth B. Bischoff, 1990, John Wiley & Sons
Fundamentals of Chemical Reaction Engineering (1st Edition), Mark E. Davis and Robert J. Davis, 2003, The McGraw-Hill Companies, Inc.
ISCRE Symposia
The most important series of symposia are the International Symposia on Chemical Reaction Engineering or ISCRE conferences. These three-day conferences are held every two years, rotating among sites in North America, Europe, and the Asia-Pacific region, on a six-year cycle. These conferences bring together for three days distinguished international researchers in reaction engineering, prominent industrial practitioners, and new researchers and students of this multifaceted field. ISCRE symposia are a unique gathering place for reaction engineers where research gains are consolidated and new frontiers explored. The state of the art of various sub-disciplines of reaction engineering is reviewed in a timely manner, and new research initiatives are discussed.
Awards in Chemical Reaction Engineering
The ISCRE Board administers two premiere awards in chemical reaction engineering for senior and junior researchers every three years.
Neal R. Amundson Award for Excellence in Chemical Reaction Engineering
In 1996, the ISCRE Board of Directors established the Neal R. Amundson Award for Excellence in Chemical Reaction Engineering. This award recognizes a pioneer in the field of Chemical Reaction Engineering who has exerted a major influence on the theory or practice of the field, through originality, creativity, and novelty of concept or application. The award is made every three years at an ISCRE meeting, and consists of a Plaque and a check in the amount of $5000. The Amundson Award is generously supported by a grant from the ExxonMobil Corporation. Winners of the award include:
1996: Neal Amundson, Professor - University of Minnesota, University of Houston
1998: Rutherford Aris, Professor - University of Minnesota
2001: Octave Levenspiel, Professor - Oregon State University
2004: Vern Weekman, Mobil
2007: Gilbert Froment, Professor - Ghent University, Texas A&M University
2010: Dan Luss, Professor - University of Houston
2013: Lanny Schmidt, Professor - University of Minnesota
2016: Milorad P. Dudukovic, Professor - Washington University
2019: W. Harmon Ray, Professor - University of Wisconsin
2022: Announced at NASCRE-5
Rutherford Aris Young Investigator Award in Chemical Reaction Engineering
In 2016, the ISCRE, Inc. Board of Directors bestowed the first Rutherford Aris Young Investigator Award for Excellence in Chemical Reaction Engineering. This award recognizes outstanding contributions in experimental and/or theoretical reaction engineering research by investigators in the early stages of their careers. The recipient must be less than 40 years of age at the end of the calendar year in which the award is presented. The Aris Award is generously supported by a grant from UOP, L.L.C., a Honeywell Company. The award consists of a plaque, an honorarium of $3000, and up to $2000 in travel funds to present at an ISCRE/NASCRE conference and to present a lecture at UOP. This award complements ISCRE's other major honor, the Neal R. Amundson Award. Winners of the award include:
2016: Paul J. Dauenhauer, Professor - University of Minnesota, USA
2019: Yuriy Román-Leshkov, Professor - MIT, USA
2022: Announced at NASCRE-5
See also
References
External links
ISCRE web site
Chemical reactors
Chemical engineering | Chemical reaction engineering | [
"Chemistry",
"Engineering"
] | 1,090 | [
"Chemical reaction engineering",
"Chemical reactors",
"Chemical engineering",
"Chemical equipment",
"nan"
] |
25,035,866 | https://en.wikipedia.org/wiki/Surge%20control | Surge control is the use of different techniques and equipment in a hydraulic system to prevent any excessive gain in pressure (also known as a pressure surge) that would cause the hydraulic process pressure to exceed the maximum working pressure of the mechanical equipment used in the system.
What is hydraulic surge
Hydraulic surges are created when the velocity of a fluid suddenly changes and becomes unsteady or transient. Fluctuations in the fluid's velocity are generated by events such as a pump starting or stopping, a valve opening or closing, or a reduction in line size. Hydraulic surges can be generated within a matter of seconds anywhere that the fluid velocity changes and can travel through a pipeline at very high speed, damaging equipment or causing piping failures from over-pressurizing. Surge relief systems absorb and limit high-pressure surges, preventing the pressure surge from traveling through the hydraulic system. Methods for controlling hydraulic surges include utilizing a gas-loaded surge relief valve, spring-loaded pressure safety valves, pilot-operated valves, surge suppressors, and rupture disks.
Typical applications
Surge control products have been used in many industries for decades to keep hydraulic systems within their maximum working pressure. Typical applications for surge relief equipment are in pipelines at pump stations, receiving manifolds at storage facilities, back pressure control, marine loading/off loading, site specific applications where pressure surges are generated by the automation system, or any location deemed critical by an engineering firm performing a surge analysis.
Surge suppressors
Surge suppressors perform surge relief by acting as a pulsation dampener. Most suppressors have a metal tank with an internal elastic bladder. The top of the bladder is pressurized with a compressed gas while the product enters the bottom of the pressure vessel; the gas pressure in the bladder supplies the system's set point. During normal operation, as the process conditions begin to build pressure, the internal bladder contracts from the pressure gain, allowing liquid to move into the surge suppressor pressure vessel and adding volume at that location. This increase in physical volume prevents the pressure from rising to dangerous levels.
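As a rough illustration of this volume exchange, the following sketch estimates how much liquid a gas-charged suppressor can absorb, assuming idealized isothermal (Boyle's-law) behaviour of the gas charge; the function name and the numbers are made up for the example, and real sizing calculations also account for temperature and dynamic effects.

/* Idealized isothermal (Boyle's law) estimate of the liquid volume a
   gas-charged surge suppressor absorbs when the line pressure rises.
   p0 = gas precharge pressure (absolute), v0 = initial gas volume,
   p_line = line pressure (absolute) during the surge.                */
#include <stdio.h>

static double absorbed_volume(double p0, double v0, double p_line) {
    if (p_line <= p0) return 0.0;            /* bladder not yet compressed   */
    return v0 * (1.0 - p0 / p_line);         /* V0 - V1, where p0*V0 = p*V1  */
}

int main(void) {
    /* example: 200 L gas charge at 5 bar absolute; a surge raises the line to 8 bar absolute */
    printf("liquid absorbed: %.1f L\n", absorbed_volume(5.0, 200.0, 8.0));   /* 75.0 L */
    return 0;
}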
Advantages:
Very fast speed of response.
Zero loss of product from the pipeline from a surge event.
Can be used as both a surge suppressor and for surge relief.
Disadvantages:
Limited capacity of volume for surge relief.
The surge suppressor must be as physically close as possible to the area where the surge is generated. Surge suppressors can become very large depending on line size.
Has a limited maximum working pressure.
Rupture discs
A rupture disc, also known as a burst disc, bursting disc, or burst diaphragm, is a one-time-use, non-resealing pressure relief device that, in most uses, protects a pressure vessel, equipment or system from over-pressurization or potentially damaging vacuum conditions. A rupture disc is a sacrificial part because it has a one-time-use membrane that fails at a predetermined differential pressure, either positive or vacuum. The membrane is usually made out of metal, but nearly any material can be used to suit a particular application. Rupture discs provide instant response (within milliseconds) to an increase or decrease in system pressure, but once the disc has ruptured it will not reseal. Because the disc can be used only once, it must be replaced after it has ruptured. One-time-use devices are initially cost-effective, but can become time-consuming and labor-intensive to repeatedly change out.
Advantages:
Isolates equipment from the process conditions, protecting the equipment until it is needed for a surge relief event.
Cost effective installation.
Very fast response time.
Disadvantages:
One time use.
Requires down time to replace.
A rupture disk has only one set point.
Uncontrollable release of large amounts of harmful substances.
Spring-loaded pressure safety valves
Spring-loaded pressure safety valves use a compressed spring to hold the valve closed. The valve will remain closed until the process pressure exceeds the set point of the spring pressure. The valve will open 100% when the set point is reached and will remain open until a certain blow down factor is reached. Often the blow down is a percentage of the set point, such as 20%, meaning that the valve will remain open until the process pressure decreases to 20% below the set point of the spring-loaded relief valve; for example, a valve set at 100 psi with a 20% blow down recloses once the pressure falls to 80 psi.
Advantages:
Opens 100% when set point is reached.
Easy to install and maintain.
High flow capacity or Cv value in gas service.
Disadvantages:
Has a blow down factor inherent to the design of the valve.
The spring takes a set, making the set point drift over time.
May release product to atmosphere.
Surge relief valves
Surge relief valves are known for their quick speed of response, excellent flow characteristics, and durability in high pressure applications. Surge relief valves are designed to have an adjustable set point that is directly related to the maximum pressure of the pipeline/system. When the product on the inlet of the valve exceeds the set point it forces the valve to open and allows the excess surge to be bled out into a breakout tank or recirculated into a different pipeline. So in the event of a surge, the majority of the pressure is absorbed in the liquid and pipe, and just that quantity of liquid which is necessary to relieve pressures of unsafe proportions is discharged to the surge relief tank. Some valve manufacturers use the piston style with a nitrogen control system and external plenums, while others use elastomeric tubes, external pilots, or internal chambers.
Pilot operated valves
Pilot operated surge relief valves are typically used to protect pipelines that move low viscosity products like gasoline or diesel. This style of valve is installed downstream of the pump/valve that creates the surge. The valve is controlled by an external, normally closed pilot valve. The pilot is set to the desired set point of the system, with a sense line that runs upstream of the valve. When the upstream process conditions start to exceed the pilot set point, the valve begins to open and relieve the excess pressure until the correct pressure is met, causing the valve to close.
Advantages:
Does not require power.
Adjustable set point.
High flow capacity or Cv value.
Disadvantages:
Slower speed of response.
Cannot be used in high viscosity applications.
Pilot is sensitive to any type of particulate in the control loop.
Gas loaded surge relief valves
Piston-style gas-loaded surge relief valves operate on the balanced piston design and can be used in a variety of applications because they can handle high and low viscosity products while maintaining a fast speed of response. An inert gas, most commonly nitrogen, is loaded on the back side of the piston forcing the valve closed. The nitrogen pressure on the back side of the piston is what determines the valve's set point. These valves will remain closed until the inlet pressure exceeds the set point/nitrogen pressure, at which time the valve will open from the high pressure and remain open as long as the process pressure is above the nitrogen pressure. Once the process pressure starts to decay, the valve will start to close. Once the process pressure is below the nitrogen pressure, the valve will go closed again.
Advantages:
Fast speed of response with soft closure to prevent generating a second surge event.
Can be used on high viscosity products such as crude oil.
Good flow characteristics (Cv).
No blowout, resets at the set point.
Disadvantages:
Only as repeatable as the system controlling the nitrogen pressure.
Performance is greatly impacted by any restrictions in the gas line between the relief valve and the plenum tank.
Many manufacturers recommend burring the plenum for temperature stability.
Rubber boot-style gas-loaded relief valve
Rubber boot-style gas-loaded relief valves operate by using nitrogen pressure loaded on the outside diameter of a rubber boot that is covering the flow path through the relief valve. As long as the process pressure is below the nitrogen pressure, the valve is closed. As soon as the process pressure raises above the nitrogen pressure, the product in the line forces the rubber boot away from the barrier and allows product to pass through the valve. When the process pressure decreases below the nitrogen pressure, the valve goes closed again.
Advantages:
There are many types of rubber elastomers for many different types of products.
Fast speed of response when the rubber boot isn't cold.
Achieves positive seal even when there is minor particulate in the line.
Disadvantages:
The rubber boot is greatly affected by temperature; the lower the temperature, the less repeatable the relief valve set point.
Poor flow characteristics (Low Cv) require larger valves to achieve the desired flow rates.
Replacing the rubber boot requires the valve be removed from the line to be disassembled.
Current-generation valves have metal internals and do not use the older-generation rubber boot.
Controlling surge relief valves
There are many different approaches to controlling surge relief equipment. It all starts with the technology used in the specific application. Spring-loaded pressure safety valves and pilot-operated valves are controlled mechanically using the pressure from a compressed spring. Typically there is an adjustment stem that allows for minor adjustments on the set point by compressing or decompressing the spring. This design is limited by the pressure that can be generated by the spring in the valve.
Gas-loaded relief valves are controlled by the nitrogen pressure loaded into the relief valve. If there is no control on the nitrogen pressure, then the nitrogen gas will expand and contract with the changing ambient temperature. As the nitrogen pressure drifts with the temperature so does the set point of the relief valve.
The nitrogen pressure has traditionally been controlled using mechanical regulators. Regulators are designed to operate under flowing conditions. When used in the closed-end plenum system of a surge relief valve, the regulator must also perform an on/off function to correct for thermal expansion and contraction. Being a pressure control device designed for use under flowing conditions, it is not well suited to perform the on/off function needed in a closed-end system such as a surge relief valve plenum.
Another common issue is that regulators are required to operate outside of their design limits when making the corrections needed for thermal expansion and contraction. The volume of gas required to be added or vented from the system is so small that the regulator is required to operate below the minimum threshold of its performance curve. As a result, inconsistent corrections are made to the system pressure which impact the gas-loaded relief valve's set point.
A highly accurate and reliable approach to controlling the nitrogen pressure on a gas-loaded surge relief valve is to use an electronic control system to add and vent nitrogen pressure from the gas-loaded surge relief valve. This technique assures the required set point accuracy and repeatability needed in this critical application.
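A minimal sketch of that electronic approach is a deadband (on/off) controller that briefly opens a fill or vent valve only when the plenum pressure drifts outside a small band around the set point. The hardware-interface functions below (read_plenum_pressure, pulse_fill_valve, pulse_vent_valve) are hypothetical placeholders rather than a real instrument API, and the numbers are arbitrary.

/* Deadband control of the nitrogen (plenum) pressure on a gas-loaded
   surge relief valve.  Hardware I/O is stubbed out for illustration.   */
#include <stdio.h>

static double read_plenum_pressure(void) { return 101.3; }    /* psi, stub value   */
static void   pulse_fill_valve(void)     { puts("add nitrogen");  }
static void   pulse_vent_valve(void)     { puts("vent nitrogen"); }

static void control_step(double set_point, double deadband) {
    double p = read_plenum_pressure();
    if (p < set_point - deadband) {
        pulse_fill_valve();                  /* too low: add a small amount of gas */
    } else if (p > set_point + deadband) {
        pulse_vent_valve();                  /* too high: vent a small amount      */
    }
    /* inside the deadband: do nothing, avoiding the tiny corrections that
       mechanical regulators handle poorly in a closed-end plenum          */
}

int main(void) {
    control_step(100.0, 0.5);                /* set point 100 psi, +/-0.5 psi band */
    return 0;
}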
See also
Surge tank
Pressure relief valve
Safety relief valve
References
Hydraulics
Plumbing | Surge control | [
"Physics",
"Chemistry",
"Engineering"
] | 2,209 | [
"Plumbing",
"Physical systems",
"Construction",
"Hydraulics",
"Fluid dynamics"
] |
25,038,967 | https://en.wikipedia.org/wiki/Gray%27s%20paradox | Gray's Paradox is a paradox posed in 1936 by British zoologist Sir James Gray. The paradox was to figure out how dolphins can obtain such high speeds and accelerations with what appears to be a small muscle mass. Gray made an estimate of the power a dolphin could exert based on its physiology, and concluded the power was insufficient to overcome the drag forces in water. He hypothesized that Dolphin's skin must have special anti-drag properties.
In 2008, researchers from Rensselaer Polytechnic Institute, West Chester University and the University of California, Santa Cruz used digital particle image velocimetry to prove that Gray's assumptions oversimplified the relationship between muscle power and drag force.
Timothy Wei, professor and acting dean of Rensselaer's School of Engineering, videotaped two bottlenose dolphins, Primo and Puka, as they swam through a section of water populated with hundreds of thousands of tiny air bubbles. Computer software and force measurement tools developed for aerospace were then used to study the particle-image velocimetry, which was captured at 1,000 frames per second (fps). This allowed the team to measure the force exerted by a dolphin. Results showed the dolphin to exert approximately 200 lb of force every time it thrusts its tail – 10 times more than Gray hypothesized – and that at peak it can exert between 300 and 400 lb of force.
Wei also used this technique to film dolphins as they were doing tail-stands, a trick where the dolphins “walk” on water by holding most of their bodies vertical above the water while supporting themselves with short, powerful thrusts of their tails.
In 2009, researchers from the National Chung Hsing University in Taiwan introduced new concepts of “kidnapped airfoils” and “circulating horsepower” to explain the swimming capabilities of the swordfish. Swordfish swim at even higher speeds and accelerations than dolphins. The researchers claim their analysis also "solves the perplexity of dolphin’s Gray paradox".
Gray's flawed assumption
The prior research efforts to refute Gray's paradox only looked at the drag-reducing aspect of dolphin skin, but never questioned Gray's basic assumption "that drag cannot be greater than muscle work", which led to the paradox in the first place. In 2014, a team of theoretical mechanical engineers from Northwestern University proved the underlying hypothesis of Gray's paradox wrong. They showed mathematically that the drag on undulatory swimmers (such as dolphins) can indeed be greater than the muscle power they generate to propel themselves forward, without being paradoxical. They introduced the concept of "energy cascade" to show that during steady swimming all of the generated muscle power is dissipated in the wake of the swimmer (through viscous dissipation). A swimmer uses muscle power to undulate its body, which causes it to experience both drag and thrust simultaneously. Muscle power generated should be equated to the power needed to deform the body, rather than equating it to the drag power. On the contrary, drag power should be equated to thrust power. This is because during steady swimming, drag and thrust are equal in magnitude but opposite in direction. Their findings can be summarized in a simple power balance equation:

$$P_{\text{muscle}} = P_{\text{deformation}}, \qquad P_{\text{drag}} = P_{\text{thrust}},$$

in which $P_{\text{muscle}}$ is the power generated by the muscles, $P_{\text{deformation}}$ is the power required to deform (undulate) the body against the surrounding fluid, and $P_{\text{drag}}$ and $P_{\text{thrust}}$ are the rates of work associated with the drag and thrust forces, which are equal in magnitude during steady swimming.
It is important to acknowledge that a swimmer does not have to overcome drag through its muscle work alone; it is also assisted by the thrust force in this task. Their research also shows that the drag on a swimming body is a matter of definition, and many definitions of drag on the swimming body are prevalent in the literature. Some of these definitions can give a higher value than the muscle power. However, this does not lead to any paradox, because higher drag also means higher thrust in the power balance equation, and this does not violate any energy balance principles.
References
Notes
Fish, Frank (2005) A porpoise for power The Journal of Experimental Biology, Classics, 208: 977–978.
External links
Gray's Paradox on Science Daily
Dolphins
Zoology
Biomechanics
Paradoxes | Gray's paradox | [
"Physics",
"Biology"
] | 820 | [
"Biomechanics",
"Mechanics",
"Zoology"
] |
36,274,757 | https://en.wikipedia.org/wiki/C11H8N2S |
The molecular formula C11H8N2S (molar mass: 200.26 g/mol) may refer to:
Camalexin (3-thiazole-2-yl-indole)
MTEP, or 3-((2-Methyl-4-thiazolyl)ethynyl)pyridine
Molecular formulas | C11H8N2S | [
"Physics",
"Chemistry"
] | 94 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,277,691 | https://en.wikipedia.org/wiki/Doffing%20cylinder | A doffing cylinder, also called doffing roller or commonly just doffer is a component used in textile mills to remove fiber from the main cylinder of a card, on which the fibers have been straightened and aligned. The main cylinder of the card will have one or two doffers that comb and remove the fiber. The doffer is set with pins that hold the fiber, which is then removed by a comb or knife and fed into the next stage of production. Doffers are also used in cotton pickers and other machinery that handle fiber.
Confusingly, the word doffer (meaning something that takes off, as in "doff your hat") is also used for mill workers whose job it is to remove full bobbins or pirns holding spun fiber and replace them with empty bobbins or pirns. In modern mills, a machine called a doffer may do this task.
Early years
Some people have given credit to Richard Arkwright for inventing the doffer, which was incorporated in his machine, but others consider that it was invented by James Hargreaves. The design was refined by Samuel Crompton shortly after 1785. Before the surface of the carding cylinder reaches the doffer it passes a "fancy roller",
which brushes and raises the fibers on the cylinder so they can be transferred to the doffer more easily. In a wool mill a doffer would move relatively slowly compared to the surface of the carding cylinder, picking up the fiber. The fiber would then be removed from the doffer by a comb.
Design improvements
At first, the card clothing for wool mills was made in the form of sheets, and when attached to the cylinder and to all the rollers including the last doffer there were gaps of about an inch between the sheets. This made it impossible to make endless slubbings. Even when it became possible to wrap the doffer with fillet clothing with no gaps, sheets with gaps continued to be used because a continuous woolen sliver was too difficult to manage through the subsequent steps.
A breakthrough was made with the ring doffer, where the surface was covered by alternating continuous rings of clothing about an inch wide, separated by spacing rings of some material like leather. The idea seems to have originated with Louis Martin in Europe in 1803, and may have been used by Arnold Pawtucket in 1812 in Rhode Island. Ezekial Hale of Haverhill, Massachusetts patented the idea in 1825. With this design, it became possible to produce continuous lengths of slubbing to feed into the next stage.
Various inventors proposed improvements. Thus, in October 1835 Stephen R. Parkhurst filed a patent for a doffer made of a set of parallel wheels with rims about four inches wide, separated by an inch or slightly more. By setting the wheels at a slight angle, the whole surface of the main cylinder would be cleared by them. This doffer would feed a system of rollers that could feed the fiber onto spools or into machines that would immediately twist it into a thread.
However, the ring doffer was relatively inefficient.
Modern doffers
The most common arrangement today uses a tape doffer completely wrapped in fillet clothing, producing a web the width of the card, which is then split into strips using an array of endless tapes. These tapes used to be made of leather, but today are usually of synthetic material.
References
Citations
Sources
Industrial machinery | Doffing cylinder | [
"Engineering"
] | 705 | [
"Industrial machinery"
] |
36,279,136 | https://en.wikipedia.org/wiki/Arsenic%20tetroxide | Arsenic tetroxide is an inorganic compound with the chemical formula As2O4, containing As(III) and As(V), AsIIIAsVO4.
Synthesis
It can be synthesized in an autoclave via the following reaction:
2 As2O3 + O2 → 2 As2O4
Structure
It adopts a layer structure, in which the coordination geometry of As(III) is trigonal pyramidal while that of As(V) is tetrahedral.
References
Arsenic compounds
Oxides
Mixed valence compounds
"Chemistry"
] | 95 | [
"Mixed valence compounds",
"Inorganic compounds",
"Oxides",
"Inorganic compound stubs",
"Salts"
] |
36,280,269 | https://en.wikipedia.org/wiki/Double%20bubble%20theorem | In the mathematical theory of minimal surfaces, the double bubble theorem states that the shape that encloses and separates two given volumes and has the minimum possible surface area is a standard double bubble: three spherical surfaces meeting at angles of 120° on a common circle. The double bubble theorem was formulated and thought to be true in the 19th century, and became a "serious focus of research" by 1989, but was not proven until 2002.
The proof combines multiple ingredients. Compactness of rectifiable currents (a generalized definition of surfaces) shows that a solution exists. A symmetry argument proves that the solution must be a surface of revolution, and it can be further restricted to having a bounded number of smooth pieces. Jean Taylor's proof of Plateau's laws describes how these pieces must be shaped and connected to each other, and a final case analysis shows that, among surfaces of revolution connected in this way, only the standard double bubble has locally-minimal area.
The double bubble theorem extends the isoperimetric inequality, according to which the minimum-perimeter enclosure of any area is a circle, and the minimum-surface-area enclosure of any single volume is a sphere. Analogous results on the optimal enclosure of two volumes generalize to weighted forms of surface energy, to Gaussian measure of surfaces, and to Euclidean spaces of any dimension.
Statement
According to the isoperimetric inequality, the minimum-perimeter enclosure of any area is a circle, and the minimum-surface-area enclosure of any single volume is a sphere. The existence of a shape with bounded surface area that encloses two volumes is obvious: just enclose them with two separate spheres. It is less obvious that there must exist some shape that encloses two volumes and has the minimum possible surface area: it might instead be the case that a sequence of shapes converges to a minimum (or to zero) without reaching it. This problem also raises tricky definitional issues: what is meant by a shape, the surface area of a shape, and the volume that it encloses, when such things may be non-smooth or even fractal? Nevertheless, it is possible to formulate the problem of optimal enclosures rigorously using the theory of rectifiable currents, and to prove using compactness in the space of rectifiable currents that every two volumes have a minimum-area enclosure.
Plateau's laws state that any minimum area piecewise-smooth shape that encloses any volume or set of volumes must take a form commonly seen in soap bubbles in which surfaces of constant mean curvature meet in threes, forming dihedral angles of 120° ($2\pi/3$ radians). In a standard double bubble, three patches of spheres meet at this angle along a shared circle. Two of these spherical surfaces form the outside boundary of the double bubble and a third one in the interior separates the two volumes from each other. In physical bubbles, the radii of the spheres are inversely proportional to the pressure differences between the volumes they separate, according to the Young–Laplace equation. This connection between pressure and radius is reflected mathematically in the fact that, for any standard double bubble, the three radii $r_1$, $r_2$, and $r_3$ of the three spherical surfaces obey the equation $\tfrac{1}{r_3} = \tfrac{1}{r_1} - \tfrac{1}{r_2}$, where $r_1$ is the smaller radius of the two outer bubbles. In the special case when the two volumes and two outer radii are equal, calculating the middle radius using this formula leads to a division by zero. In this case, the middle surface is instead a flat disk, which can be interpreted as a patch of an infinite-radius sphere. The double bubble theorem states that, for any two volumes, the standard double bubble is the minimum area shape that encloses them; no other set of surfaces encloses the same amount of space with less total area.
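As a sketch (not part of the statement above), this radius relation can be read off from the Young–Laplace pressure jumps across the three films. Writing $p_1$ and $p_2$ for the pressures inside the two enclosed regions, $p_0$ for the outside pressure, and $c$ for the common Young–Laplace proportionality constant (for a physical soap film $c = 4\sigma$, with $\sigma$ the surface tension of one interface; its value cancels), the three spherical films give

$$p_1 - p_0 = \frac{c}{r_1}, \qquad p_2 - p_0 = \frac{c}{r_2}, \qquad p_1 - p_2 = \frac{c}{r_3},$$

and subtracting the second relation from the first yields $\tfrac{1}{r_3} = \tfrac{1}{r_1} - \tfrac{1}{r_2}$. For example, outer radii $r_1 = 2$ and $r_2 = 3$ give $\tfrac{1}{r_3} = \tfrac12 - \tfrac13 = \tfrac16$, so the separating surface has radius $r_3 = 6$; equal outer radii give $\tfrac{1}{r_3} = 0$, the flat-disk case noted above.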
In the Euclidean plane, analogously, the minimum perimeter of a system of curves that enclose two given areas is formed by three circular arcs, with the same relation between their radii, meeting at the same angle of 120°. For two equal areas, the middle arc degenerates to a straight line segment. The three-dimensional standard double bubble can be seen as a surface of revolution of this two-dimensional double bubble. In any higher dimension, the optimal enclosure for two volumes is again formed by three patches of hyperspheres, meeting at the same 120° angle.
History
The three-dimensional isoperimetric inequality, according to which a sphere has the minimum surface area for its volume, was formulated by Archimedes but not proven rigorously until the 19th century, by Hermann Schwarz. In the 19th century, Joseph Plateau studied the double bubble, and the truth of the double bubble theorem was assumed without proof by C. V. Boys in the 1912 edition of his book on soap bubbles. Plateau formulated Plateau's laws, describing the shape and connections between smooth pieces of surfaces in compound soap bubbles; these were proven mathematically for minimum-volume enclosures by Jean Taylor in 1976.
By 1989, the double bubble problem had become a "serious focus of research". In 1991, Joel Foisy, an undergraduate student at Williams College, was the leader of a team of undergraduates that proved the two-dimensional analogue of the double bubble conjecture. In his undergraduate thesis, Foisy was the first to provide a precise statement of the three-dimensional double bubble conjecture, but he was unable to prove it.
A proof for the restricted case of the double bubble conjecture, for two equal volumes, was announced by Joel Hass and Roger Schlafly in 1995, and published in 2000. The proof of the full conjecture by Hutchings, Morgan, Ritoré, and Ros was announced in 2000 and published in 2002. After earlier work on the four-dimensional case, the full generalization to higher dimensions was published by Reichardt in 2008, and in 2014, Lawlor published an alternative proof of the double bubble theorem generalizing both to higher dimensions and to weighted forms of surface energy. Variations of the problem considering other measures of the size of the enclosing surface, such as its Gaussian measure, have also been studied.
Proof
A lemma of Brian White shows that the minimum area double bubble must be a surface of revolution. For, if not, one could use a similar argument to the ham sandwich theorem to find two orthogonal planes that bisect both volumes, replace surfaces in two of the four quadrants by the reflections of the surfaces in the other quadrants, and then smooth the singularities at the reflection planes, reducing the total area. Based on this lemma, Michael Hutchings was able to restrict the possible shapes of non-standard optimal double bubbles, to consist of layers of toroidal tubes.
Additionally, Hutchings showed that the number of toroids in a non-standard but minimizing double bubble could be bounded by a function of the two volumes. In particular, for two equal volumes, the only possible nonstandard double bubble consists of a single central bubble with a single toroid around its equator. Based on this simplification of the problem, Joel Hass and Roger Schlafly were able to reduce the proof of this case of the double bubble conjecture to a large computerized case analysis, taking 20 minutes on a 1995 personal computer. The eventual proof of the full double bubble conjecture also uses Hutchings' method to reduce the problem to a finite case analysis, but it avoids the use of computer calculations, and instead works by showing that all possible nonstandard double bubbles are unstable: they can be perturbed by arbitrarily small amounts to produce another surface with lower area. The perturbations needed to prove this result are a carefully chosen set of rotations. Because a surface of minimum area exists, and none of the other candidate surfaces have minimum area, the minimum-area surface can only be the standard double bubble.
Related problems
John M. Sullivan has conjectured that, for any dimension d, the minimum enclosure of up to d + 1 volumes (not necessarily equal) has the form of a stereographic projection of a simplex. In particular, in this case, all boundaries between bubbles would be patches of spheres. The special case of this conjecture for three bubbles in two dimensions has been proven; in this case, the three bubbles are formed by six circular arcs and straight line segments, meeting in the same combinatorial pattern as the edges of a tetrahedron. Frank Morgan called even the case of three volumes in three dimensions "inaccessible", but in 2022 a proof was announced of the three-volume case in all dimensions, and of additional partial results in higher dimensions. Numerical experiments have shown that for six or more volumes in three dimensions, some of the boundaries between bubbles may be non-spherical.
For an infinite number of equal areas in the plane, the minimum-length set of curves separating these areas is the hexagonal tiling, familiar from its use by bees to form honeycombs, and its optimality (the honeycomb conjecture) was proven by T. C. Hales in 2001. For the same problem in three dimensions, the optimal solution is not known; Lord Kelvin conjectured that it was given by a structure combinatorially equivalent to the bitruncated cubic honeycomb, but this conjecture was disproved by the discovery of the Weaire–Phelan structure, a partition of space into equal volume cells of two different shapes using a smaller average amount of surface area per cell.
Researchers have also studied the dynamics of physical processes by which pairs of bubbles coalesce into a double bubble. This topic relates to a more general topic in differential geometry of the dynamic behavior of curves and surfaces under different processes that change them continuously. For instance, the curve-shortening flow is a process in which curves in the plane move at a speed proportionally to their curvature. For two infinite regions separated by a line, with a third finite region between them, the curve-shortening flow on their boundaries (rescaled to preserve the area of the finite region) converges towards a limiting shape in the form of a degenerate double bubble: a vesica piscis along the line between the two unbounded regions.
References
External links
Minimal surfaces
Theorems
Bubbles (physics)
Conjectures that have been proved | Double bubble theorem | [
"Chemistry",
"Mathematics"
] | 2,104 | [
"Mathematical theorems",
"Bubbles (physics)",
"Foams",
"Conjectures that have been proved",
"Mathematical problems",
"Minimal surfaces",
"Fluid dynamics"
] |
39,176,314 | https://en.wikipedia.org/wiki/Elias%20Gyftopoulos | Elias Panayiotis Gyftopoulos (July 4, 1927 – June 23, 2012) was a Greek-American engineer who contributed to thermodynamics both in its general formulation and its quantum foundations.
Gyftopoulos received an undergraduate degree in mechanical and electrical engineering in 1953 at the National Technical University of Athens, and a Doctor of Science degree in electrical engineering at the Massachusetts Institute of Technology in 1958. At MIT, he initially focused on nuclear reactor safety and control. After meeting professors George N. Hatsopoulos and Joseph H. Keenan, his interests moved towards thermodynamics, in an attempt to give a consistent and rigorous exposition, free of the logical flaws and the limitations commonly associated with this discipline: his contribution culminated in a reference textbook that completely reformulates the foundations of the subject, offering a general non-statistical definition of entropy applicable to both macroscopic and microscopic systems, both in equilibrium and in non-equilibrium states, and providing strong background and deep understanding of many applications in energy engineering for modern graduate curricula. His research also pioneered the subject of quantum thermodynamics with an early effort to give a quantum basis to thermodynamics by means of a physical theory unifying mechanics and thermodynamics.
Works
ISBN 9780486439327
Elias P. Gyftopoulos complete collection of published scientific works
References
External links
Elias P. Gyftopoulos collected works and memorial tribute
Thermodynamicists
Greek emigrants to the United States
MIT School of Engineering faculty
National Technical University of Athens alumni
MIT School of Engineering alumni
1927 births
2012 deaths
People from Athens | Elias Gyftopoulos | [
"Physics",
"Chemistry"
] | 335 | [
"Thermodynamics",
"Thermodynamicists"
] |
39,177,096 | https://en.wikipedia.org/wiki/Mahler%27s%203/2%20problem | In mathematics, Mahler's 3/2 problem concerns the existence of "Z-numbers".
A Z-number is a real number ξ such that the fractional parts of ξ(3/2)^n
are less than 1/2 for all positive integers n. Kurt Mahler conjectured in 1968 that there are no Z-numbers.
More generally, for a real number α, define Ω(α) as the infimum, over positive real numbers ξ, of the difference between the limit superior and the limit inferior of the fractional parts of ξα^n.
Mahler's conjecture would thus imply that Ω(3/2) exceeds 1/2. Flatto, Lagarias, and Pollington showed that
Ω(p/q) ≥ 1/p
for rational p/q in lowest terms.
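A small sketch of the fractional-part condition stated above, using exact rational arithmetic; the candidate values and the cutoff n_max are illustrative, and a finite check like this can only rule candidates out, never certify a Z-number:

```python
from fractions import Fraction

def passes_z_condition(xi, n_max=50):
    """Check that the fractional parts of xi*(3/2)**n stay below 1/2 for
    n = 1..n_max; a finite, necessary-but-never-sufficient test."""
    value = Fraction(xi)
    for n in range(1, n_max + 1):
        value *= Fraction(3, 2)
        fractional = value - value.numerator // value.denominator
        if fractional >= Fraction(1, 2):
            return False, n       # fails: report which n broke the condition
    return True, n_max

print(passes_z_condition(1))              # fails at n = 1, since 3/2 has fractional part 1/2
print(passes_z_condition(Fraction(3, 2))) # another illustrative candidate
```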
References
Analytic number theory
Conjectures
Diophantine approximation | Mahler's 3/2 problem | [
"Mathematics"
] | 108 | [
"Analytic number theory",
"Unsolved problems in mathematics",
"Diophantine approximation",
"Conjectures",
"Mathematical relations",
"Mathematical problems",
"Approximations",
"Number theory"
] |
39,179,456 | https://en.wikipedia.org/wiki/Magnetic%20spin%20vortex%20disc | Magnetic material synthesis and characterization technology continue to improve, allowing various shapes, sizes, and compositions of magnetic material to be produced, studied, and tuned for improved properties. One area that has seen great advancement is the synthesis of magnetic materials at nanometer length scales. Nanoparticle research has attracted a great deal of interest in a number of fields, as many phenomena can be explained by what is occurring on the nanoscale, which can be probed more effectively using nanometer-sized materials. One type of material that has seen a recent surge in research interest is known as "nanoflakes", which resemble flakes or discs of nanometer thickness and micrometer dimensions. Nanomaterials of this shape have seen use in a number of fields, including energy storage as electrodes of electrochemical cells, and in cancer therapy to kill cancer cells.
References
D.-H. Kim, E. A. Rozhkova, I. V Ulasov, S. D. Bader, T. Rajh, M. S. Lesniak, and V. Novosad, “Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction.,” Nature materials, vol. 9, no. 2, pp. 165–71, Feb. 2010.
R. P. Cowburn, D. K. Koltsov, a. O. Adeyeye, and M. E. Welland, “Single-Domain Circular Nanomagnets,” Physical Review Letters, vol. 83, no. 5, pp. 1042–1045, Aug. 1999.
S. Jain, V. Novosad, F.Y. Fradin, J.E. Pearson, V. Tiberkevich, A.N. Slavin, S.D. Bader, "From chaos to selective ordering of vortex cores in interacting mesomagnets", Nature Communications, Vol. 3, no.1330 DOI: doi:10.1038/ncomms2331 (2012)
Valentyn Novosad and Elena A. Rozhkova. "Ferromagnets-based multifunctional nanoplatform for targeted cancer therapy" Biomedical Engineering, Trends in Materials Science. Chapter 18.
Buchanan, K. S., Roy, P. E., Grimsditch, M., Fradin, F. Y., Guslienko, K. Yu., Bader, S. D., and Novosad, V. Soliton pair dynamics in patterned ferromagnetic ellipses. Nature Physics 1, 172-176 (2005).
Xiaobin Zhu, Vitali Metlushko, Bojan Ilic, and Peter Grutter."Direct observation of Magnetostatic Coupling of Chain Arrays of Magnetic Disks" IEEE Transactions on Magnetics. Vol. 39. no. 5, September 2003.
E.A. Rozhkova, V. Novosad, D.H. Kim, J. Pearson, and R Divan."Ferromagnetic microdisks as carriers for biomedical applications" J. Applied Physics. 105. 2009.
Mi-Young Im, Peter Fischer, Keisuke Yamada, Tomonori Sato, Shinya Kasai, Yoshinobu Nakatani & Teruo Ono “Symmetry breaking in the formation of magnetic vortex states in permalloy nanodisk “ Nature Communications. (2012). 3. 983
T. Shinjo, T. Okuno, R. Hassdorf, K. Shigeto, T. Ono, “Magnetic Vortex Core Observation in Circular Dots of Permalloy ” Science, vol. 289. no. 5481, pp. 930 – 932 (2000)
A Wachowiak, J. Wiebe, M. Bode, O. Pietzsch, M. Morgenstern, and R. Wiesendanger, “Direct observation of internal spin structure of magnetic vortex cores.,” Science, vol. 298, no. 5593, pp. 577–80, Oct. 2002.
J. Raabe, R. Pulwey, R. Sattler, T. Schweinböck, J. Zweck, and D. Weiss, “Magnetization pattern of ferromagnetic nanodisks,” Journal of Applied Physics, vol. 88, no. 7, p. 4437, 2000.
Nanoparticles
Nanoelectronics | Magnetic spin vortex disc | [
"Materials_science"
] | 940 | [
"Nanotechnology",
"Nanoelectronics"
] |
39,181,795 | https://en.wikipedia.org/wiki/White%E2%80%93Chen%20catalyst | The White–Chen catalyst is an iron-based coordination complex named after Professor M. Christina White and her graduate student Mark S. Chen. The catalyst is used along with hydrogen peroxide and an acetic acid additive to oxidize aliphatic sp3 C-H bonds in organic synthesis. The catalyst is the first to allow for preparative and predictable aliphatic C–H oxidations over a broad range of organic substrates. Oxidations with the catalyst have proven to be remarkably predictable based on sterics, electronics, and stereoelectronics, allowing aliphatic C–H bonds to be thought of as a functional group in the streamlining of organic synthesis.
Electronic selectivity
In the case where an electron withdrawing group (EWG) is present in the substrate the highly electrophilic catalyst will oxidize the more electron-rich C–H bond that is most remote from the EWG. In the case above the C–H bond shaded yellow is further from the electron withdrawing group and therefore has a higher electron density than the one not shaded yellow. The yellow shaded C–H bond is therefore the primary site for oxidation by the catalyst.
Example of Electronic Selectivity
The reaction selectivity is highly influenced by electronics due to the highly electron withdrawing ester group present in the substrate. For that reason the reaction proceeds with oxidation at the tertiary C–H bond most remote from the ester with reaction yields of greater than 50% and selectivities of >99:1. Electronically guided site-selectivity is also observed for secondary sites, affording yields of mono-oxidized products of 50% or greater.
Steric selectivity
In the case where a bulky group, denoted at the right by BG, is present near multiple aliphatic C-H groups that are electronically equivalent, the bulky White–Chen catalyst will target the less sterically hindered C-H bond. In the case at the right there is a large bulky group in close proximity to one of two aliphatic C-H bonds. The C-H bond shaded in yellow is further from the bulky group, is less sterically hindered, and will therefore be the site of oxidation by the catalyst in this case. The bulky nature of the White–Chen catalyst allows product predictability based on steric properties of the starting materials given that electronic properties are equivalent.
Example of steric selectivity
In the above case there are two electronically favored C–H sites in the substrate, both highlighted in yellow (Carbons 1 and 8), for oxidation with the White–Chen catalyst. Due to the large steric hindrance at Carbon 8 the reaction proceeds by primarily oxidizing Carbon 1. The reaction proceeds with 50% yield and with a selectivity of 11:1 for C-1 vs. C-8.
Stereoelectronic selectivity
In the case where there is an electron activating group (EAG) near aliphatic C–H bonds, the EAG will donate electron density to the adjacent C–H bonds to promote oxidation at that site. In the case at the right the cyclopropane EAG activates the aliphatic C–H bonds adjacent to it via hyperconjugation (oxygen and nitrogen heteroatoms can also do this). The bonds highlighted in red will undergo oxidation preferentially to other C—H bonds in the molecule. The observed product is shown on the right and consists of a carbonyl in place of the two aliphatic C–H bonds. Other oxidants such as TFDO and DMDO have also been shown to oxidize aliphatic C—H bonds selectively via stereoelectronic activation.
Example of stereoelectronic selectivity
The above reaction is an example of stereoelectronic selectivity based on strain relief (alleviation of 1,3-diaxial interactions, torsional strain, etc.). In this case torsional strain between the bulky tert-butyl group and the adjacent methylenes is relieved upon oxidation at C3. The carbon highlighted in yellow is the main site of oxidation yielding the product at the right with a reaction yield of 59%. The minor oxidation product at C4 was formed in 12%.
Directing group selectivity
In the case where a directing group, denoted at the right by DG, is present near multiple aliphatic C-H groups the catalyst will oxidize the C-H bond closest to the directing group. Thus far, only carboxylic acid functional groups have been shown to be effective and when present, the AcOH additive does not need to be added to the reaction. In the case at the right the C–H bond highlighted in yellow is in close proximity to the directing group and would be the expected site for oxidation in this substrate.
Example of directing group selectivity
The above figure shows the effects of a carboxylic acid directing group on the overall reactivity of the reaction. The substrate on the left is the starting material and when the R group is an ester the oxidation takes place with low yields of about 26% due to electronic deactivation of the proximal C—H bond. When the R group is a carboxylic acid the yield increases to 50%. In general the White Chen catalyst will bind to carboxylic acid groups and this can be used to override negative electronic, steric, and stereoelectronic effects within a molecule.
Example of directing group selectivity
The above figure shows a carboxylic acid directed diastereoselective lactonization of tetrahydrogibberellic acid via a C-H oxidation to form a butyrolactone derivative.
Combined effects in complex molecules
The above figure shows oxidation at the least sterically hindered and electronically/stereo-electronically activated site. The White–Chen catalyst relies on the constructive combination of inherent electronic, steric, and stereoelectronic factors within a substrate to favor one site of oxidation. When these factors combine productively, the reaction proceeds with great predictability and an isolated yields of 50% or greater.
Mechanism
The above figure illustrates the proposed mechanism for oxidation by the White–Chen catalyst. The reaction proceeds in conjunction with hydrogen peroxide oxidant and acetic acid. The catalyst quickly reacts with hydrogen peroxide, believed to be catalyzed by the acetic acid, and forms and iron-oxo intermediate. The iron-oxo intermediate is then thought to abstract a hydrogen on the carbon to be oxidized to generate a short-lived carbon centered radical. The iron-hydroxide then quickly rebounds onto the carbon centered radical to form the oxidized compound. Recently, direct evidence was obtained for a carbon-centered radical intermediate in this reaction using a Taxane-based radical trap. It is important to note that these reactions (both directed and non-directed) proceed with complete retention of stereochemistry, as is often observed in reactions where radical rebound is rapid enough that epimerization does not happen.
References
External links
Catalysts
Iron complexes | White–Chen catalyst | [
"Chemistry"
] | 1,459 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
39,186,043 | https://en.wikipedia.org/wiki/BIM%20Task%20Group | The Building Information Modelling (BIM) Task Group was a UK Government-funded group, managed through the Cabinet Office, created in 2011, and superseded in 2017 by the Centre for Digital Built Britain.
History
Holding its first meeting in May 2011 and chaired by Mark Bew, the BIM Task Group was founded to "drive adoption of BIM across government" in support of the Government Construction Strategy. It aimed to strengthen the public sector's capabilities in BIM implementation so that all central government departments could adopt, as a minimum, collaborative 'Level 2' BIM by 2016.
The core BIM task force, to which companies seconded employees, identified four work streams, each led by a core team member: stakeholder and media engagement, delivery and productivity, commercial and legal, and training and academia. Working parties were established to focus on particular areas including: training and education, COBie data set requirements, Plan of Works, software vendors (the BIM Technologies Alliance), contractors (UK Contractors Group, now superseded by Build UK), and materials and products suppliers (Construction Products Association).
In early 2014, it was announced that the BIM Task Group would be wound down during 2015, with a "managed handover" during 2015 to a newly created "legacy group", though there was speculation that the group's life might be extended to help achieve a new BIM 'Level 3' target.
In October 2016, an updated BIM Task Group delivering the February 2015 Digital Built Britain strategy was announced at the Institution of Civil Engineers BIM 2016 Conference in a keynote speech by Mark Bew. The work of the BIM Task Group then continued under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018.
Since 2016, industry adoption of BIM has been led by the UK BIM Alliance, formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. The former BIM Technologies Alliance was reconstituted as a group managed by the UK BIM Alliance. In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, integrating the international ISO 19650 series of standards into UK processes and practice.
References
Data modeling
Computer-aided design
Construction industry of the United Kingdom
Building engineering organizations
Building information modeling | BIM Task Group | [
"Engineering"
] | 523 | [
"Computer-aided design",
"Design engineering",
"Building engineering",
"Building engineering organizations",
"Building information modeling",
"Data modeling",
"Data engineering"
] |
49,421,690 | https://en.wikipedia.org/wiki/Thrombosis%20prevention | Thrombosis prevention or thromboprophylaxis is medical treatment to prevent the development of thrombosis (blood clots inside blood vessels) in those considered at risk for developing thrombosis. Some people are at a higher risk for the formation of blood clots than others, such as those with cancer undergoing a surgical procedure. Prevention measures or interventions are usually begun after surgery as the associated immobility will increase a person's risk.
Blood thinners are used to prevent clots, these blood thinners have different effectiveness and safety profiles. A 2018 systematic review found 20 studies that included 9771 people with cancer. The evidence did not identify any difference between the effects of different blood thinners on death, developing a clot, or bleeding. A 2021 review found that low molecular weight heparin (LMWH) was superior to unfractionated heparin in the initial treatment of venous thromboembolism for people with cancer.
There are medication-based interventions and non-medication-based interventions. The risk of developing blood clots can be lowered by life style modifications, the discontinuation of oral contraceptives, and weight loss. In those at high risk both interventions are often used. The treatments to prevent the formation of blood clots is balanced against the risk of bleeding.
One of the goals of blood clot prevention is to limit venous stasis as this is a significant risk factor for forming blood clots in the deep veins of the legs. Venous stasis can occur during the long periods of not moving. Thrombosis prevention is also recommended during air travel. Thrombosis prophylaxis is effective in preventing the formation of blood clots, their lodging in the veins, and their developing into thromboemboli that can travel through the circulatory system to cause blockage and subsequent tissue death in other organs. Clarence Crafoord is credited with the first use of thrombosis prophylaxis in the 1930s.
Pathophysiology of blood clot prevention
The development of blood clots can be interrupted and prevented by the use of medication, changing risk factors and other interventions. Some risk factors can be modified. These would be losing weight, increasing exercise and the cessation of oral contraceptives. Moving during periods of travel is a modifiable behavior. Preventing blood clots includes the use of medications that interrupt the complex clotting cascade and changing the proteins that are needed for clotting. Antiplatelet drugs also have an effect in preventing the formation of clots.
Medical treatments
Thrombosis prophylaxis is not only used for the prevention of deep vein thrombosis, but can be initiated for the prevention of the formation of blood clots in other organs and circumstances unrelated to deep vein thrombosis:
cerebral complications
abortion
ectopic pregnancy
molar pregnancy
pregnancy
childbirth and the puerperium
coronary
portal vein thrombosis
intracranial, nonpyogenic
intraspinal, nonpyogenic
mesenteric
pulmonary
Epidemiology of developing blood clots
The risk of developing deep vein thrombosis or pulmonary embolism is different from the total risk of the formation of blood clots. This is due to the observation that not all blood clots form in the lower legs. Most hospitalized medical patients have at least 1 risk factor for thrombosis that progresses to thromboembolism and this risk persists weeks after discharge. Those who remain undiagnosed and not treated prophylactically have a 26% chance of developing a fatal embolism. Another 26% develop another embolism. Between 5% and 10% of all in-hospital deaths are due to pulmonary embolism (as a consequence of thrombosis). Estimates of the incidence of pulmonary embolism in the US are about 0.1% of persons per year. Hospital admissions in the US for pulmonary embolism are 200,000 to 300,000 yearly. Thrombosis that develops into DVT will affect 900,000 people and kill up to 100,000 in the US. On average 28,726 hospitalized adults aged 18 and older with a VTE blood clot diagnosis die each year. Risk of thrombosis is related to hospitalization. In 2005 the UK Parliamentary Health Select Committee determined that the annual number of deaths due to thrombosis was 25,000, with at least 50% being hospital-acquired.
The type of surgery performed prior to the formation of blood clots influences the risk.
Without prophylactic interventions, the calculated incidence of clot formation in the lower leg veins after surgery is:
22% for neurosurgery
26% for abdominal surgery
45% to 60% for orthopedic surgery
14% for gynecologic surgery
As the population of the US ages, the development of blood clots is becoming more common.
General risks and indications for blood clot prevention
Some risk factors for developing blood clots are considered higher than others. One scoring system analyzes the probability for clot formation by assigning a point value system to significant risk factors. The benefit of treating those who are at low risk of developing blood clots may not outweigh the higher risks of significant bleeding.
{| class="wikitable"
|+ Probability and risk estimation for developing blood clots
! Major risk (=1 point) !! Minor risk (=2 points)
|-
| Cancer || Family history of deep vein thrombosis
|-
| Immobility || Hospitalization within the past 6 months
|-
| Calf swelling || superficial vein dilation
|-
| Recent major surgery || redness of area
|-
| Edema or swelling of only one leg || Recent trauma to leg
|-
| Tenderness in the calf and/or thigh ||
|-
|}
Risk for subsequent blood clots
Developing blood clots is more probable after the first episode. Risk assessment and intervention for those with one or more episodes of deep vein thrombosis or blood clots in the veins utilizes the Wells score. It has been inconsistently modified by a number of publishers with the results listed below:
Wells and modified Wells risk scoring
Adapted for the emergency department
Scoring (a short classifier sketch follows this list):
less than 2 points – low risk (3%)
2–6 points moderate risk (17%)
> 6 points high risk (75%)
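A minimal sketch that maps a computed score to the risk bands listed above; the function name is illustrative and this is not clinical guidance:

```python
def wells_risk_band(score):
    """Map a Wells-type score to the risk bands quoted above (illustrative only)."""
    if score < 2:
        return "low risk (about 3%)"
    elif score <= 6:
        return "moderate risk (about 17%)"
    return "high risk (about 75%)"

for s in (1, 4, 7):
    print(s, "points ->", wells_risk_band(s))
```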
General interventions
The Centers for Disease Control and Prevention have issued general guidelines describing the interventions that can be taken to reduce the risk of the formation of blood clots:
Interventions during travel
Seat-edge pressure from the seat on an airplane on the popliteal area may contribute to vessel wall damage as well as venous stasis. Coagulation activation may result from an interaction between cabin conditions (such as hypobaric hypoxia) and individual risk factors for the formation of blood clots. Studies of the pathophysiologic mechanisms for the increased risk of venous thromboembolism (VTE) after long-distance travel have not produced consistent results, but venous stasis appears to play a major role; other factors specific to air travel may increase coagulation activation, particularly in passengers with individual risk factors for VTE.
Interventions for those hospitalized
Compression devices
Mechanical compression devices are used for prevention of thrombosis and are beneficial enough to be used by themselves with patients at low to moderate risk. Fitted intermittent pneumatic compression devices are used before, during and after procedures in inpatient settings. The device consists of an air pump and inflatable auxiliary compartments that sequentially inflate and deflate to provide an external 'pump' that returns venous blood toward the heart. The use of intermittent pneumatic compression is common. These devices are also placed on a surgical patient in the operating room (the intra-surgical period) and remain on the person while recovering from the surgery.
The application of antiembolism stockings can prevent thrombosis. Correct use of properly fitted graded compression stockings can reduce the rate of thrombosis by 50%. Contraindications for the use of antiembolism stockings include the presence of advanced peripheral and obstructive arterial disease, septic phlebitis, heart failure, open wounds, dermatitis and peripheral neuropathy. Evidence on differences between thigh-high compression stockings and shorter types for preventing blood clots remains inconsistent.
Assessment
There has been some success in preventing blood clots by an early risk assessment upon admission to the hospital, which is a strategy recognized by the Centers for Disease Control and Prevention. Hospitals that have participated in this effort to reduce the incidence of thrombosis found that rates of DVT decreased in some instances. Some hospitals developed a mandatory assessment quantifying the risk for developing blood clots and a plan of care developed from the results. The person's risk for developing blood clots is entered into their record, 'following' them through their treatment regime. If the hospital stay exceeds three days, the person will be reassessed for risk. Clinicians are then able to apply protocols for prevention based upon best clinical practices.
Interventions to treat immobility
Immobility is a significant risk factor in the development of thrombosis. Immediate post-surgical interventions, such as out of bed orders (OOB), are typically ordered by the physician to prevent thrombosis. These orders, typically delegated to a nurse but possibly involving a physical therapist and others trained to perform the intervention, call for range of motion (ROM) activities that include: muscle contractions of the lower legs for those who are very weak, moving the feet, wiggling the toes, bending the knees, and raising and lowering the legs. In addition, changes in positioning prevent immobility and shift areas of venous stasis. If the person is too weak to perform these preventative activities, hospital personnel will perform these movements independently. Exercise of the lower extremities is a post-operative method of prophylaxis. Nursing personnel will often perform range of motion exercises and encourage frequent moving of the legs, feet, and ankles. Frequent positioning changes and adequate fluid intake are also encouraged. After a surgical procedure, ambulation as soon as possible is prophylactic in preventing the formation of blood clots.
Early ambulation also prevents venous stasis and physicians order OOB activities on the same day of surgery. This is accomplished in increments. The progression of increasing mobility proceeds by: raising the head of the bed, sitting up in bed, moving to the edge of the bed, dangling the legs off the bed and then ambulating to a close chair.
Patient education and compliance reduces the risk of developing blood clots. These exercises and use of equipment and follow up by clinicians reduces the risk of developing blood clots.
Note that if a blood clot has already formed in the deep veins of the leg, bedrest is usually prescribed and treatment to prevent blood clots with physical intervention is contraindicated.
Medication
Anticoagulants and antiplatelets
Thromboprophylaxis, such as anticoagulants or perioperative heparin, is effective for hospitalized patients at risk for VTE. Additional risk factors such as obesity, disease, malignancies, long surgeries, and immobility may influence the prescribed dosage. Anticoagulant medications may prevent the formation of blood clots in people who are at high risk for their development. Treating blood clots that have already formed is managed by the use of thrombolytics ("clot busters"). Despite its effectiveness, the use of thromboprophylaxis remains under-utilized, though alerts (computer or human) in hospitals are associated with increased prescription and reductions in symptomatic VTE. The list below describes some of the more common medications used to prevent blood clots. Note that since blood clotting is inhibited, a typical side effect is increased bleeding, though it can be reversed by administering a medication that stops the bleeding or by discontinuation of the medication itself. Anticoagulants are often given before the start of the operation. Medications that inhibit blood clot formation include:
Heparins
Adding heparin to the use of compression stockings may prevent thrombosis for those of higher risk.
{| class="wikitable sortable"
|-
|+ Heparin Prophylaxis
! style="background: #0FC;" |name !!style="background: #0FC;"|action !!style="background: #0FC;"|structure !! style="background: #0FC;"|references
|-
| style="background: #FF9;" |Low-molecular weight heparin(example: Reviparin) || || ||
|-
|}
The discontinuation of contraceptives also prevents blood clots.
Herbal interactions
The therapeutic effects of warfarin may be decreased by valerian. Anticoagulants can be affected by chamomile. Dong quai, garlic, ginger, Ginkgo biloba, bilberry and feverfew can increase bleeding time. These same herbal supplements taken with warfarin increased prothrombin time.
Dietary interactions
By containing significant content of vitamin K, some foods act as antagonists to antiplatelet and anticoagulant medications; these include green leafy vegetables, like spinach, legumes, and broccoli.
Contraindications
Preventing blood clots with medication is not considered safe in the following circumstances:
uncooperative patient
recent childbirth
gastrointestinal bleeding
reproductive system bleeding
genitourinary system bleeding
hemorrhagic blood dyscrasias
peptic ulcers
alcoholism
infection
eye surgery
brain surgery
spinal cord surgery
recent cerebrovascular hemorrhage
Research
An international registry and risk assessment calculator is being used to centralize data on post-surgical venous thrombosis and its prevention. Hospitals are implementing a multi-disciplinary approach to the prevention of blood clots. This includes adequate assessment of the risks, follow up on missed doses of medication and instituting a 'patient-centered' approach endorsed by the Joint Commission. Recommendations regarding the prevention of blood clots vary widely between clinicians and treatment facilities. Research continues to clarify these discrepancies. Tests for the metabolic state of hypercoagulability (the tendency to form blood clots) are being developed. These include evaluation of thrombin–antithrombin complexes (TAT) and of low levels of the anticoagulants ATIII and protein C, but these tests are not yet widely available.
References
Using Wikipedia for Research
Preventive medicine
Diseases of veins, lymphatic vessels and lymph nodes
Coagulopathies
Hematology
Antiplatelet drugs
Aspirin
Blood
Chemical substances for emergency medicine
Nutrition
Salicylates
Tissues (biology) | Thrombosis prevention | [
"Chemistry"
] | 3,143 | [
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
49,423,341 | https://en.wikipedia.org/wiki/Gcn4 | Gcn4 is a transcription factor and a “master regulator” for gene expression which regulates close to one tenth of the yeast genome.
In a study by Razaghi et al, amino acid starvation activated the transcription factor Gcn4p, resulting in transcriptional induction of almost all genes involved in amino acid biosynthesis, including HIS4. Thus involvement of Gcn4 in regulation of both histidinol dehydrogenase HIS4 and interferon gamma hIFNγ was hypothesised as a scenario explaining the increased level of hIFNγ under amino acid starvation.
Overexpression of Gcn4 leads to a reduction in protein synthesis capacity, which contributes to the Gcn4-mediated increase in yeast lifespan.
In budding yeast, deletion of Gcn4 prevents HIS4 from targeting to the nuclear periphery upon transcriptional activation, indicating that Gcn4 is necessary for regulation of gene positioning and transcription.
See also
DNA-binding protein
Transcription factor
References
Transcription factors
Saccharomyces cerevisiae genes | Gcn4 | [
"Chemistry",
"Biology"
] | 220 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
49,425,496 | https://en.wikipedia.org/wiki/Partially%20premixed%20combustion | Partially premixed combustion (PPC), also known as PPCI (partially-premixed compression ignition) or GDCI (gasoline direct-injection compression-ignition) is a modern combustion process intended to be used in internal combustion engines of automobiles and other motorized vehicles in the future. Its high specific power, high fuel efficiency and low exhaust pollution have made it a promising technology. As a compression-ignition engine, the fuel mixture ignites due to the increase in temperature that occurs with compression rather than a spark from a spark plug. A PPC engine injects and premixes a charge during the compression stroke. This premixed charge is too lean to ignite during the compression stroke – the charge will ignite after the last fuel injection ends near TDC. The fuel efficiency and working principle of a PPC engine resemble those of a Diesel engine, but the PPC engine can be run with a variety of fuels. Also, the partially premixed charge burns clean. Challenges with using gasoline in a PPC engine arise due to the low lubricity of gasoline and the low cetane value of gasoline. Use of fuel additives or gasoline-diesel or gasoline-biodiesel blends can mitigate the various problems with gasoline.
See also
Reactivity controlled compression ignition
Diesel engine
References
Notes
Combustion
Fuel technology
Internal combustion piston engines | Partially premixed combustion | [
"Chemistry"
] | 276 | [
"Combustion"
] |
28,289,092 | https://en.wikipedia.org/wiki/Dynamical%20neuroscience | The dynamical systems approach to neuroscience is a branch of mathematical biology that utilizes nonlinear dynamics to understand and model the nervous system and its functions. In a dynamical system, all possible states are expressed by a phase space. Such systems can experience bifurcation (a qualitative change in behavior) as a function of its bifurcation parameters and often exhibit chaos. Dynamical neuroscience describes the non-linear dynamics at many levels of the brain from single neural cells to cognitive processes, sleep states and the behavior of neurons in large-scale neuronal simulation.
Neurons have been modeled as nonlinear systems for decades, but dynamical systems are not constrained to neurons. Dynamical systems can emerge in other ways in the nervous system. Chemical species models, like the Gray–Scott model, can exhibit rich, chaotic dynamics. Intraneural communication is affected by dynamic interactions between extracellular fluid pathways. Information theory draws on thermodynamics in the development of infodynamics that can involve nonlinear systems, especially with regards to the brain.
History
One of the earliest models of the neuron was based on mathematical and physical modelling: the integrate-and-fire model, which was developed in 1907. Decades later, the discovery of the squid giant axon led Alan Hodgkin and Andrew Huxley (half-brother to Aldous Huxley) to develop the Hodgkin–Huxley model of the neuron in 1952. This model was simplified with the FitzHugh–Nagumo model in 1962. By 1981, the Morris–Lecar model had been developed for the barnacle muscle.
These mathematical models proved useful and are still used by the field of biophysics today, but a late 20th century development propelled the dynamical study of neurons even further: computer technology. The largest issue with physiological equations like the ones developed above is that they were nonlinear. This made the standard analysis impossible and any advanced kinds of analysis included a number of (nearly) endless possibilities. Computers opened a lot of doors for all of the hard sciences in terms of their ability to approximate solutions to nonlinear equations. This is the aspect of computational neuroscience that dynamical systems encompasses.
In 2007, a canonical textbook was written by Eugene Izhikevich called Dynamical Systems in Neuroscience, assisting the transformation of an obscure research topic into a line of academic study.
Neuron dynamics
Electrophysiology of the neuron
The motivation for a dynamical approach to neuroscience stems from an interest in the physical complexity of neuron behavior. As an example, consider the coupled interaction between a neuron's membrane potential and the activation of ion channels throughout the neuron. As the membrane potential of a neuron increases sufficiently, channels in the membrane open up to allow more ions in or out. The ion flux further alters the membrane potential, which further affects the activation of the ion channels, which affects the membrane potential, and so on. This is often the nature of coupled nonlinear equations. A relatively straightforward example of this is the Morris–Lecar model.
See the Morris–Lecar paper for an in-depth understanding of the model. A more brief summary of the Morris Lecar model is given by Scholarpedia.
In this article, the point is to demonstrate the physiological basis of dynamical neuron models, so this discussion will only cover the two variables of the equation:
V represents the membrane's current potential
N is the so-called "recovery variable", which gives us the probability that a particular potassium channel is open to allow ion conduction.
Most importantly, the first equation states that the change of V with respect to time depends on both V and N, as does the change in N with respect to time. Both rates of change are functions of V and N, so we have two coupled functions, V(t) and N(t).
Different types of neuron models utilize different channels, depending on the physiology of the organism involved. For instance, the simplified two-dimensional Hodgkin–Huxley model considers sodium channels, while the Morris–Lecar model considers calcium channels. Both models consider potassium and leak current. Note, however, that the Hodgkin–Huxley model is canonically four-dimensional.
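A brief simulation sketch of the two coupled Morris–Lecar equations discussed above; the parameter values are a commonly quoted illustrative set rather than anything taken from the source, and forward-Euler integration is used only for simplicity:

```python
import math

# Illustrative Morris-Lecar parameters (not from the source article);
# units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
C, I_app = 20.0, 90.0
g_Ca, g_K, g_L = 4.4, 8.0, 2.0
V_Ca, V_K, V_L = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def m_inf(V): return 0.5 * (1 + math.tanh((V - V1) / V2))   # fast calcium activation
def n_inf(V): return 0.5 * (1 + math.tanh((V - V3) / V4))   # steady-state potassium activation
def tau_n(V): return 1.0 / math.cosh((V - V3) / (2 * V4))   # potassium recovery time scale

def derivatives(V, N):
    """Right-hand sides of the two coupled Morris-Lecar equations."""
    dV = (I_app - g_L * (V - V_L) - g_Ca * m_inf(V) * (V - V_Ca)
          - g_K * N * (V - V_K)) / C
    dN = phi * (n_inf(V) - N) / tau_n(V)
    return dV, dN

# Forward-Euler integration: V and N evolve together, each feeding back on the other.
V, N, dt = -60.0, 0.0, 0.05
for step in range(20001):                       # about 1000 ms of simulated time
    if step % 4000 == 0:
        print(f"t = {step * dt:6.1f} ms   V = {V:7.2f} mV   N = {N:5.3f}")
    dV, dN = derivatives(V, N)
    V, N = V + dt * dV, N + dt * dN
```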
Excitability of neurons
One of the predominant themes in classical neurobiology is the concept of a digital component to neurons. This concept was quickly absorbed by computer scientists where it evolved into the simple weighting function for coupled artificial neural networks. Neurobiologists call the critical voltage at which neurons fire a threshold. The dynamical criticism of this digital concept is that neurons don't truly exhibit all-or-none firing and should instead be thought of as resonators.
In dynamical systems, this kind of property is known as excitability. An excitable system starts at some stable point. Imagine an empty lake at the top of a mountain with a ball in it. The ball is in a stable point. Gravity is pulling it down, so it's fixed at the lake bottom. If we give it a big enough push, it will pop out of the lake and roll down the side of the mountain, gaining momentum and going faster. Let's say we fashioned a loop-de-loop around the base of the mountain so that the ball will shoot up it and return to the lake (no rolling friction or air resistance). Now we have a system that stays in its rest state (the ball in the lake) until a perturbation knocks it out (rolling down the hill) but eventually returns to its rest state (back in the lake). In this example, gravity is the driving force and spatial dimensions x (horizontal) and y (vertical) are the variables. In the Morris Lecar neuron, the fundamental force is electromagnetic and and are the new phase space, but the dynamical picture is essentially the same. The electromagnetic force acts along just as gravity acts along . The shape of the mountain and the loop-de-loop act to couple the y and x dimensions to each other. In the neuron, nature has already decided how and are coupled, but the relationship is much more complicated than the gravitational example.
This property of excitability is what gives neurons the ability to transmit information to each other, so it is important to dynamical neuron networks, but the Morris Lecar can also operate in another parameter regime where it exhibits oscillatory behavior, forever oscillating around in phase space. This behavior is comparable to pacemaker cells in the heart, that don't rely on excitability but may excite neurons that do.
Global neurodynamics
The global dynamics of a network of neurons depend on at least the first three of four attributes:
individual neuron dynamics (primarily, their thresholds or excitability)
information transfer between neurons (generally either synapses or gap junctions)
network topology
external forces (such as thermodynamic gradients)
There are many combinations of neural networks that can be modeled between the choices of these four attributes that can result in a versatile array of global dynamics.
Biological neural network modeling
Biological neural networks can be modeled by choosing an appropriate biological neuron model to describe the physiology of the organism and appropriate coupling terms to describe the physical interactions between neurons (forming the network). Other global considerations must be taken into consideration, such as the initial conditions and parameters of each neuron.
In terms of nonlinear dynamics, this requires evolving the state of the system through the functions. Following from the Morris Lecar example, the alterations to the equation would be:
where V now has the subscript i, indicating that it is the ith neuron in the network, and a coupling function has been added to the first equation. The coupling function, D, is chosen based on the particular network being modeled. The two major candidates are synaptic junctions and gap junctions.
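A small network sketch in the spirit of the coupling described above, using the FitzHugh–Nagumo model mentioned earlier for brevity; the ring topology, the diffusive (gap-junction-like) coupling term, and all numeric values are illustrative assumptions:

```python
# FitzHugh-Nagumo neurons on a ring, coupled through a diffusive term standing
# in for gap junctions; every numeric value here is illustrative.
a, b, eps, I_app = 0.7, 0.8, 0.08, 0.5
n_neurons, D = 10, 0.1                          # ring size and coupling strength

v = [0.1 * i for i in range(n_neurons)]         # staggered initial voltages
w = [0.0] * n_neurons

dt = 0.05
for step in range(40000):
    dv, dw = [], []
    for i in range(n_neurons):
        left, right = v[(i - 1) % n_neurons], v[(i + 1) % n_neurons]
        coupling = D * (left + right - 2 * v[i])        # discrete diffusion on the ring
        dv.append(v[i] - v[i] ** 3 / 3 - w[i] + I_app + coupling)
        dw.append(eps * (v[i] + a - b * w[i]))
    v = [vi + dt * dvi for vi, dvi in zip(v, dv)]
    w = [wi + dt * dwi for wi, dwi in zip(w, dw)]

# With purely diffusive coupling the oscillators drift toward a shared rhythm.
print("final voltages:", [round(vi, 3) for vi in v])
```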
Attractor network
Point attractors – memory, pattern completion, categorizing, noise reduction
Line attractors – neural integration: oculomotor control
Ring attractors – neural integration: spatial orientation
Plane attractors – neural integration: (higher dimension of oculomotor control)
Cyclic attractors – central pattern generators
Chaotic attractors – recognition of odors; chaotic dynamics are often mistaken for random noise.
Please see Scholarpedia's page for a formal review of attractor networks.
Beyond neurons
While neurons play a lead role in brain dynamics, it is becoming more clear to neuroscientists that neuron behavior is highly dependent on their environment. But the environment is not a simple background, and there is a lot happening right outside of the neuron membrane, in the extracellular space. Neurons share this space with glial cells and the extracellular space itself may contain several agents of interaction with the neurons.
Glia
Glia, once considered a mere support system for neurons, have been found to serve a significant role in the brain. The subject of how the interaction between neuron and glia have an influence on neuron excitability is a question of dynamics.
Neurochemistry
Like any other cell, neurons operate on an undoubtedly complex set of molecular reactions. Each cell is a tiny community of molecular machinery (organelles) working in tandem and encased in a lipid membrane. These organelles communicate largely via chemicals like G-proteins and neurotransmitters, consuming ATP for energy. Such chemical complexity is of interest to physiological studies of the neuron.
Neuromodulation
Neurons in the brain live in an extracellular fluid, capable of propagating both chemical and physical energy alike through reaction-diffusion and bond manipulation that leads to thermal gradients. Volume transmission has been associated with thermal gradients caused by biological reactions in the brain. Such complex transmission has been associated with migraines.
Cognitive neuroscience
The computational approaches to theoretical neuroscience often employ artificial neural networks that simplify the dynamics of single neurons in favor of examining more global dynamics. While neural networks are often associated with artificial intelligence, they have also been productive in the cognitive sciences. Artificial neural networks use simple neuron models, but their global dynamics are capable of exhibiting both Hopfield and Attractor-like network dynamics.
Hopfield network
The Lyapunov function is a nonlinear technique used to analyze the stability of the zero solutions of a system of differential equations. Hopfield networks were specifically designed such that their underlying dynamics could be described by the Lyapunov function. Stability in biological systems is called homeostasis. Particularly of interest to the cognitive sciences, Hopfield networks have been implicated in the role of associative memory (memory triggered by cues).
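A toy sketch of a Hopfield network with Hebbian storage and asynchronous updates, whose energy function plays the role of the Lyapunov function described above; the stored patterns and network size are illustrative:

```python
import random

# Tiny Hopfield sketch: Hebbian storage of two binary patterns and asynchronous
# updates that never increase the network "energy" (its Lyapunov function).
patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]   # illustrative memories
n = len(patterns[0])

# Hebbian weights: W[i][j] = sum over stored patterns of x_i * x_j, zero diagonal.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(n)]
     for i in range(n)]

def energy(state):
    return -0.5 * sum(W[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

def recall(state, sweeps=5):
    state = list(state)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):    # asynchronous, random update order
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

cue = [-1, -1, 1, -1, 1, -1]                    # first stored pattern with one unit flipped
print("energy of cue:   ", energy(cue))
restored = recall(cue)
print("restored pattern:", restored, " energy:", energy(restored))
```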
See also
Computational neuroscience
Dynamicism
Mathematical biology
Nonlinear systems
Randomness
Neural oscillation
References
Branches of neuroscience
Dynamical systems
Mathematical and theoretical biology | Dynamical neuroscience | [
"Physics",
"Mathematics"
] | 2,176 | [
"Mathematical and theoretical biology",
"Applied mathematics",
"Mechanics",
"Dynamical systems"
] |
28,289,950 | https://en.wikipedia.org/wiki/Adiabatic%20accessibility | Adiabatic accessibility denotes a certain relation between two equilibrium states of a thermodynamic system (or of different such systems). The concept was coined by Constantin Carathéodory in 1909 ("adiabatische Erreichbarkeit") and taken up 90 years later by Elliott Lieb and J. Yngvason in their axiomatic approach to the foundations of thermodynamics. It was also used by R. Giles in his 1964 monograph.
Description
A system in a state Y is said to be adiabatically accessible from a state X if X can be transformed into Y without the system suffering transfer of energy as heat or transfer of matter. X may, however, be transformed to Y by doing work on X. For example, a system consisting of one kilogram of warm water is adiabatically accessible from a system consisting of one kilogram of cool water, since the cool water may be mechanically stirred to warm it. However, the cool water is not adiabatically accessible from the warm water, since no amount or type of work may be done to cool it.
Carathéodory
The original definition of Carathéodory was limited to reversible, quasistatic processes, described by a curve in the manifold of equilibrium states of the system under consideration. He called such a state change adiabatic if the infinitesimal 'heat' differential form δQ
vanishes along the curve. In other words, at no time in the process does heat enter or leave the system. Carathéodory's formulation of the second law of thermodynamics then takes the form: "In the neighbourhood of any initial state, there are states which cannot be approached arbitrarily close through adiabatic changes of state." From this principle he derived the existence of entropy as a state function S
whose differential is proportional to the heat differential form δQ, so it remains constant under adiabatic state changes (in Carathéodory's sense). The increase of entropy during irreversible
processes is not obvious in this formulation, without further assumptions.
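For concreteness, a hedged sketch of Carathéodory's condition in standard notation; the explicit p dV work term assumes a simple fluid system, which is an illustrative choice rather than something stated above:

```latex
% Heat one-form for a simple fluid (illustrative choice of work term):
\delta Q = \mathrm{d}U + p\,\mathrm{d}V .
% Caratheodory's adiabatic state changes are curves along which
\delta Q = 0 ,
% and his principle yields an integrating factor 1/T, hence an entropy S with
\mathrm{d}S = \frac{\delta Q}{T},
% so S is constant along such adiabatic curves.
```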
Lieb and Yngvason
The definition employed by Lieb and Yngvason is rather different since the state changes considered can be the result of arbitrarily complicated, possibly violent, irreversible processes and there is no mention of 'heat' or differential forms. In the example of the water given above, if the stirring is done slowly, the transition from cool water to warm water will be quasistatic. However, a system containing an exploded firecracker is adiabatically accessible from a system containing an unexploded firecracker (but not vice versa), and this transition is far from quasistatic. Lieb and Yngvason's definition of adiabatic accessibility is: A state Y is adiabatically accessible from a state X, in symbols X ≺ Y (pronounced X 'precedes' Y), if it is possible to transform X into Y in such a way that the only net effect of the process on the surroundings is that a weight has been raised or lowered (or a spring is stretched/compressed, or a flywheel is set in motion).
Thermodynamic entropy
A definition of thermodynamic entropy can be based entirely on certain properties of the relation of adiabatic accessibility that are taken as axioms in the Lieb-Yngvason approach. In the following list of properties of the ≺ operator, a system is represented by a capital letter, e.g. X, Y or Z. A system X whose extensive parameters are multiplied by a scaling factor λ is written λX. (e.g. for λ = 2 and a simple gas, this would mean twice the amount of gas in twice the volume, at the same pressure.) A system consisting of two subsystems X and Y is written (X,Y). If X ≺ Y and Y ≺ X are both true, then each system can access the other and the transformation taking one into the other is reversible. This is an equivalence relationship written X ∼ Y. Otherwise, it is irreversible. Adiabatic accessibility has the following properties:
Reflexivity: X ∼ X
Transitivity: If X ≺ Y and Y ≺ Z then X ≺ Z
Consistency: if X ≺ X′ and Y ≺ Y′ then (X, Y) ≺ (X′, Y′)
Scaling Invariance: if λ > 0 and X ≺ Y then λX ≺ λY
Splitting and Recombination: X ∼ ((1 − λ)X, λX) for all 0 < λ < 1
Stability: if (X, εZ0) ≺ (Y, εZ1) for a sequence of ε tending to zero, then X ≺ Y
The entropy has the property that X ≺ Y if and only if S(X) ≤ S(Y), and X ∼ Y if and only if S(X) = S(Y), in accord with the second law. If we choose two states X0 and X1 such that X0 ≺ X1 and assign entropies 0 and 1 respectively to them, then the entropy of a state X where X0 ≺ X ≺ X1 is defined as: S(X) = sup{λ : ((1 − λ)X0, λX1) ≺ X}.
Sources
References
translated from André Thess: Das Entropieprinzip - Thermodynamik für Unzufriedene, Oldenbourg-Verlag 2007, . A less mathematically intensive and more intuitive account of the theory of Lieb and Yngvason.
External links
A. Thess: Was ist Entropie?
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic processes
Thermodynamic systems
Thermodynamics | Adiabatic accessibility | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,022 | [
"Thermodynamic systems",
"Thermodynamic processes",
"Equilibrium chemistry",
"Physical systems",
"Thermodynamics",
"Dynamical systems"
] |
28,290,623 | https://en.wikipedia.org/wiki/Randles%E2%80%93Sevcik%20equation | In electrochemistry, the Randles–Ševčík equation describes the effect of scan rate on the peak current (i_p) for a cyclic voltammetry experiment. For simple redox events where the reaction is electrochemically reversible, and the products and reactants are both soluble, such as the ferrocene/ferrocenium couple, i_p depends not only on the concentration and diffusional properties of the electroactive species but also on scan rate:
i_p = 0.4463 n F A C (n F v D / (R T))^1/2
Or if the solution is at 25 °C:
i_p = (2.69 × 10^5) n^3/2 A D^1/2 C v^1/2
i_p = current maximum in amps
n = number of electrons transferred in the redox event (usually 1)
A = electrode area in cm2
F = Faraday constant in C mol−1
D = diffusion coefficient in cm2/s
C = concentration in mol/cm3
ν = scan rate in V/s
R = Gas constant in J K−1 mol−1
T = temperature in K
The constant, with a value of 2.69×10^5, has units of C mol−1 V−1/2
For novices in electrochemistry, the predictions of this equation appear counter-intuitive, i.e. that i_p increases at faster voltage scan rates. It is important to remember that current, i, is charge (or electrons passed) per unit time. In cyclic voltammetry, the current passing through the electrode is limited by the diffusion of species to the electrode surface. This diffusion flux is influenced by the concentration gradient near the electrode. The concentration gradient, in turn, is affected by the concentration of species at the electrode, and how fast the species can diffuse through solution. By changing the cell voltage, the concentration of the species at the electrode surface is also changed, as set by the Nernst equation. Therefore, a faster voltage sweep causes a larger concentration gradient near the electrode, resulting in a higher current.
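As a rough illustration of how the 25 °C form above is used in practice, the following Python sketch evaluates the peak current for an assumed one-electron couple; the electrode area, diffusion coefficient, concentration and scan rate are made-up example values, not data from any particular experiment.

def randles_sevcik_peak_current(n, A, D, C, v):
    # 25 degC form quoted above: i_p = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2)
    # A in cm2, D in cm2/s, C in mol/cm3, v in V/s; result in amperes.
    return 2.69e5 * n**1.5 * A * D**0.5 * C * v**0.5

# Illustrative numbers: 0.07 cm2 electrode, D = 7.2e-6 cm2/s, 1 mM analyte, 100 mV/s.
ip = randles_sevcik_peak_current(n=1, A=0.07, D=7.2e-6, C=1e-6, v=0.1)
print(f"peak current ~ {ip*1e6:.0f} microamps")   # ~16 microamps

Faster scan rates enter only through the square root, which is why quadrupling the scan rate merely doubles the predicted peak current.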
Derivation
This equation is derived using the following governing equations and initial/boundary conditions:
x = distance from a planar electrode in cm
t = time in seconds
E = the potential of the electrode in volts
E_i = the initial potential of the electrode in volts
E^0' = the formal potential for the reaction between the oxidized (O) and reduced (R) species
Uses
Using the relationships defined by this equation, the diffusion coefficient of the electroactive species can be determined. Linear plots of ip vs. ν1/2 and peak potentials (Ep) that are not dependent on ν provide evidence for an electrochemically reversible redox process. For species where the diffusion coefficient is known (or can be estimated), the slope of the plot of ip vs. ν1/2 provides information into the stoichiometry of the redox process, the concentration of the analyte, the area of the electrode, etc.
A more general investigation method is the plot of the peak currents as a function of the scan rate on a logarithmically scaled x-axis. Deviations become easily detectable and the more general fit formula
i_p = i_0 + a ν^x
can be used.
In this equation i_0 is the current at zero scan rate at the equilibrium potential, and a is a constant prefactor. In an electrochemical lab experiment i_0 may be small but can nowadays easily be monitored with modern equipment. For example, corrosion processes may lead to a non-vanishing but still detectable i_0. When i_0 vanishes and x is close to 0.5, a reaction mechanism according to Randles–Ševčík can be assigned.
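A minimal sketch of this diagnostic, assuming the fit form i_p = i_0 + a ν^x quoted above; the scan rates, offset and prefactor below are invented for illustration, whereas in practice the fit would be applied to measured peak currents.

import numpy as np

nu = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5])   # scan rates in V/s (illustrative)
i0, a = 2e-7, 1.6e-5                                # assumed offset and prefactor
ip = i0 + a * np.sqrt(nu)                           # synthetic Randles-Sevcik-like data

# With the zero-scan-rate current subtracted, the exponent x is the slope of
# log(ip - i0) against log(nu).
x_fit, log_a = np.polyfit(np.log(nu), np.log(ip - i0), 1)
print(round(x_fit, 3))   # ~0.5, the diffusion-controlled (Randles-Sevcik) signature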
An example of this kind of reaction mechanism is the redox reaction of a species as an analyte (concentration 5 mM of each species) in a highly concentrated (1 M) background solution on a graphite electrode.
A more detailed plot with all fit parameters can be seen here.
References
See also
Berzins-Delahay equation
Online calculator for use of the Randles–Sevcik equation: http://www.calctool.org/CALC/chem/electrochem/cv1
Electrochemical equations | Randles–Sevcik equation | [
"Chemistry",
"Mathematics"
] | 773 | [
"Electrochemistry stubs",
"Mathematical objects",
"Equations",
"Electrochemistry",
"Analytical chemistry stubs",
"Physical chemistry stubs",
"Electrochemical equations"
] |
43,390,067 | https://en.wikipedia.org/wiki/MACS%20J0416.1-2403 | MACS J0416.1-2403 (abbreviated MACS0416) is a cluster of galaxies at a redshift of z = 0.397 with a mass 160 trillion times the mass of the Sun inside . Its mass extends out to a radius of and was measured as 1.15 × 10^15 solar masses. The system was discovered in images taken by the Hubble Space Telescope during the Massive Cluster Survey, MACS. This cluster causes gravitational lensing of distant galaxies producing multiple images. Based on the distribution of the multiple image copies, scientists have been able to deduce and map the distribution of dark matter. The images, released in 2014, were used in the Cluster Lensing And Supernova survey with Hubble (CLASH) to help scientists peer back in time at the early Universe and to discover the distribution of dark matter.
Gallery
See also
Mothra (star)
References
Dark matter
Galaxy clusters
Eridanus (constellation) | MACS J0416.1-2403 | [
"Physics",
"Astronomy"
] | 196 | [
"Dark matter",
"Galaxy clusters",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Constellations",
"Eridanus (constellation)",
"Exotic matter",
"Astronomical objects",
"Physics beyond the Standard Model",
"Matter"
] |
43,395,885 | https://en.wikipedia.org/wiki/Supermembranes | Supermembranes are hypothesized objects that live in the 11-dimensional theory called M-Theory and should also exist in eleven-dimensional supergravity. Supermembranes are a generalisation of superstrings to another dimension. Supermembranes are 2-dimensional surfaces. For example, they can be spherical or shaped like a torus. As in superstring theory the vibrations of the supermembranes correspond to different particles. Supermembranes also exhibit a symmetry called supersymmetry without which the vibrations would only correspond to bosons and not fermions.
Energy
The energy of a classical supermembrane is given by its surface area. One consequence of this is that there is no difference between one or two membranes since two membranes can be connected by a long 1 dimensional string of zero area. Hence, the idea of 'membrane-number' has no meaning. A second consequence is that unlike strings a supermembrane's vibrations can represent several particles at once. In technical terms this means it is already 'second-quantized'. All the particles in the Universe can be thought to arise as vibrations of a single membrane.
Spectrum
When going from the classical theory to the quantum theory of supermembranes it is found that they can only exist in 11 dimensions, just as superstrings can only exist in 10 dimensions. When examining the energy spectrum (the allowed frequencies that a string can vibrate in) it was found that they can only be in discrete values corresponding to the masses of different particles.
It has been shown:
The energy spectrum for the classical bosonic membrane is continuous.
The energy spectrum for the quantum bosonic membrane is discrete.
The energy spectrum for the quantum supermembrane is continuous.
At first the discovery that the spectrum was continuous was thought to mean the theory didn't make sense. But it was realised that it meant that supermembranes actually correspond to multiple particles. (The continuous degrees of freedom corresponding to the coordinates/momenta of the additional particles).
Action
The action for a classical membrane is simply the surface area of the world sheet. The quantum version is harder to write down, is non-linear and very difficult to solve. Unlike the superstring action which is quadratic, the supermembrane action is quartic which makes it exponentially harder. Adding to this the fact that a membrane can represent many particles at once not much progress has been made on supermembranes.
Low energy sector
It has been proven that the low energy vibrations of the supermembrane correspond to the particles in 11 dimensional supergravity.
Topology
A supermembrane can have multiple thin tubes or strings coming out of it with little or no extra energy cost since strings, for example, have no area. This means that all orientable topologies of membranes are physically the same. Also, joined and disjointed supermembranes are physically the same. Thus the topology of a supermembrane has no physical meaning.
Mathematics
The infinite supermembrane can be described in terms of an infinite number of patches. The coordinates of (each patch of) a supermembrane at any causal slice of time are 11 dimensional and depend on two continuous parameters and a third integer parameter (k) denoting the patch number:
Therefore, the super membrane can describe an infinite number of particles if we associate somehow the coordinate of each particle with some topological property of the patches - perhaps holes in the membrane or closed loops.
Supermembrane Field Theory
Since supermembranes correspond to multiple particles the field theory of membranes correspond to a Fock space. Informally, let a(x) denote the continuous degrees of freedom in the energy spectrum:
The action can be written as
where Q is the kinetic operator. No interaction terms are needed since there is no concept of membrane number. Everything is the same membrane. The action is not quite the same type as the one for superstrings or particles since it involves terms with multiple particles. The terms relating to single fields must recover the classical field equations of Dirac, Maxwell and Einstein. The propagator to get from a state with membrane X to one at another conformal slice with membrane Y is:
And since each membrane corresponds to any number of identical particles this is equivalent to all the Green's functions for many particle collisions at once!
Although it looks like a lot of things simplify in the supermembrane picture, the actual form of the kinetic operator Q is yet unknown and must be a very complicated operator acting on an infinite Fock-like space. Hence the seeming simplicity of the theory is hidden in this operator.
Cosmology
Since the vibrations of a supermembrane of infinite energy can correspond to every particle in the Universe at once it is possible to interpret the supermembrane as equivalent to the Universe. i.e. all that exists is the supermembrane. It makes no difference to say we live on this supermembrane or that we are in 11 dimensional space-time. Every state of the Universe corresponds to a supermembrane and every history of the Universe corresponds to a supermembrane world volume. What we think of as space-time coordinates can equally be thought of as vector fields on the 2+1 dimensional supermembrane.
For a supermembrane moving at the speed of light, its world volume can be zero due to the metric (+++-). Thus the Big Bang can be thought of as a spherical membrane expanding at the speed of light. This has interesting interpretations in terms of the holographic principle.
Geometry
Because the supermembrane(s) correspond to all particles at a particular causal time slice, it also corresponds to all the gravitons particles (which are particular vibrational modes). Thus the geometry of the 2+1D supermembrane contains within it the description of the geometry of the (macroscopic) 10+1D space-time. But as it is a quantum theory it gives probabilities for different space-times consistent with observation. The different space-times may only differ microscopically whereas the macroscopic space-time is smooth. In other words, the geometry of the membrane determines the geometry of (macroscopic) space-time. This is different from string theory where only condensates of many separate strings can macroscopically determine the space-time.
Super-5-branes
M-Theory and eleven-dimensional supergravity also predict 5+1D objects called super-5-branes. An alternative cosmological theory is that we live on one of these branes.
Compactification
Compactifying one space-time dimension on a circle and wrapping the membrane around this circle gives us superstring theory. To get back to our 3+1 dimensional universe the space-time coordinates need to be compactified on a 7 dimensional manifold (of G2 holonomy). Not much is known about these types of shapes.
Matrix Theory
Matrix theory is a particular way of formulating supermembrane theory. It is still in development. The diagonal entries of an infinite dimensional matrix can be thought of as different supermembranes (parts) connected by 1 dimensional strings.
References
J. Hughes, L Jun, J Polchinski, "Supermembranes", Physics Letters B (1988)
String theory
Supersymmetry | Supermembranes | [
"Physics",
"Astronomy"
] | 1,527 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"String theory",
"Supersymmetry",
"Symmetry"
] |
32,074,177 | https://en.wikipedia.org/wiki/ARIANNA%20Experiment | Antarctic Ross Ice-Shelf Antenna Neutrino Array (ARIANNA) is a proposed detector for ultra-high energy astrophysical neutrinos. It will detect coherent radio Cherenkov emissions from the particle showers produced by neutrinos with energies above about 10^17 eV. ARIANNA will be built on the Ross Ice Shelf just off the coast of Antarctica, where it will eventually cover about 900 km^2 in surface area. There, the ice-water interface below the shelf reflects radio waves, giving ARIANNA sensitivity to downward going neutrinos and improving its sensitivity to horizontally incident neutrinos. ARIANNA detector stations will each contain 4-8 antennas which search for brief pulses of 50 MHz to 1 GHz radio emission from neutrino interactions.
As of 2016, a prototype array consisting of 7 stations had been deployed, and was taking data. An initial search for neutrinos was made; none were found, and an upper limit was generated.
References
External links
ARIANNA Home Page
Neutrino experiments
Astrophysics | ARIANNA Experiment | [
"Physics",
"Astronomy"
] | 213 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
32,074,203 | https://en.wikipedia.org/wiki/Advanced%20thermal%20recycling%20system | An advanced thermal recycling system (or an ATR system) is the commercial brand name of the waste-to-energy incineration offering by Klean Power, which has been implemented in a single plant in Germany in 1999. WtE facilities such as the ATR transforms municipal solid waste (MSW) into electricity or steam for district heating or industrial customers. The combustion bottom ash, and the combustion fly ash, along with the air pollution control system fly ash, are treated to produce products that can be beneficially reused. Specifically, ATR systems consist of the following:
Solid waste combustion, boiler and combustion control system, energy recovery and air pollution control equipment;
Combustion bottom ash and fly ash treatment systems that produce commercially reusable products; and
An optional pre-processing system to recover recyclable materials contained in the MSW delivered to the facility before the MSW enters the thermal processing area of the facility.
Reference facility
One commercially operating ATR facility has been built so far. It is the Müllverwertung Rugenberger Damm WtE plant in Hamburg, Germany, commissioned in 1999. The German Green Party has endorsed the specific features of this facility in its "Concept 2020" initiative to cease all landfilling of waste by 2020 as an essential part of an integrated waste management system achieving the highest standards in the energy-from-waste industry. No landfilling of unprocessed waste has been allowed in Germany since 2005.
Description of Hamburg facility
Overhead refuse cranes are used to hold approximately five tons of garbage each. The waste is then mixed in the bunker to create a homogeneous mixture to ensure that the bottom ash byproduct has good combustion, and low carbon content. These cranes then deliver the mixed waste into the feeding hopper, which leads down onto stoker grates. These grates control the rate at which the waste travels through the boiler. The heat ignites the trash as it moves along the forward feeding grates until only the byproduct bottom ash remains at the end of the grate. Each combustion line feeds a boiler that operates above 1,560 °F (850 °C) for two seconds. The temperature in the combustion zone is measured through acoustic monitoring. A computer controls the temperature, the grate speed, the amount of air used, and all other aspects of the process that enable complete combustion and minimization of emissions.
Maintaining the furnace's high temperature is essential to rid the waste and the resulting combustion gases of complex organic compounds such as dioxins and furans. To prevent the reformulation of pollutants, fly ash is separated from the flue gas downstream of the superheaters to reduce the fly ash content, which could act as a catalyst in the critical reformulation temperature range of . At the exit of the boiler, the flue gas is cooled down to a level of .
As the waste is combusted, heat is released in the boiler. This heat produces high-pressure, high-temperature steam, which generates electrical energy when passed through a turbine generator. The electricity is fed into the public power grid or sold directly to a customer. The steam can also be exported directly for use in district heating or industrial processes.
Each unit has an independent air pollution control system. Flue gas cleaning begins in the boiler, where oxides of nitrogen are reduced by injecting ammonia water into the combustion chamber. Lightly loaded absorbents (activated carbon from the second bag house) are injected into the flue gas downstream of the first bag house to separate any contaminants that have reformed (such as organic compounds), any condensed heavy metals, salts and other gaseous contaminants, as well as residue fly ash.
The first baghouse makes it possible to produce reusable by-products such as hydrochloric acid and gypsum from the consecutive air pollution control process steps. Acid gases are removed from the flue gases by passing through a two-stage scrubber to remove acid components, especially halogen compounds such as hydrochloric acid and hydrofluoric acid. A counter-flow neutral scrubber follows, using a lime slurry to remove sulphur oxides. The pollutant gases are either dissolved in water droplets (acids) or bound as calcium salts and thereby removed from the flue gas. A second baghouse acts as a polishing filter to capture any remaining aerosols, organic compounds and heavy metals, which thereby are reduced to levels usually below detection.
Following combustion, the material left consists of the non-combustible components of the waste and the inert materials produced during combustion. This is known as bottom ash. The bottom ash is washed to eliminate soluble salts. Iron scrap and non-ferrous metals such as aluminium, copper and brass are separated and sold in secondary metals markets. The bottom ash is then screened, crushed and sold for use as a construction material.
Gypsum is created when the oxides of sulphur (SO2 and SO3) are separated by the single stage scrubber. It is purified, then sold to the construction industry.
The acid scrubbing process in the flue gas treatment system also produces a raw hydrochloric acid at a concentration of 10% to 12%. The acid is distilled (rectified) to yield commercial-grade (30% concentration) hydrochloric acid.
Fly ash, separated in the boiler and baghouses and constituting up to 5% by weight of the combusted MSW, is treated to recover metals and minerals for reuse, resulting in an overall ATR process landfill diversion rate of approximately 98.5%.
References
Recycling
Waste management
Energy technology
Environmental engineering
Sustainable technologies | Advanced thermal recycling system | [
"Chemistry",
"Engineering"
] | 1,162 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
29,715,644 | https://en.wikipedia.org/wiki/Dielectric%20wall%20accelerator | A dielectric wall accelerator (DWA) is a compact linear particle accelerator concept designed and patented in the late 1990s, that works by inducing a travelling electromagnetic wave in a tube which is constructed mostly from a dielectric material. The main conceptual difference to a conventional disk-loaded linac system is given by the additional dielectric wall and the coupler construction.
Possible uses of this concept include its application in external beam radiotherapy (EBRT) using protons or ions.
Operation
An external alternating-current power supply provides an electromagnetic wave that is transmitted to the accelerator tube using a waveguide. The power supply is switched on only a very short time (pulsed operation).
Electromagnetic induction creates a traveling electric field, which accelerates charged particles. The traveling wave overlaps with the position of the charged particles, leading to their acceleration inside as they pass through the tube's vacuum channel. The field inside the tube is negative just ahead of the proton and positive just behind the proton. Because protons are positively charged, they accelerate toward the negative and away from the positive. The power supply switches the polarity of the sections, so they stay synchronized with the passing proton.
Construction
The accelerator tube is made from sheets of fused silica, only 250 μm thick. After polishing, the sheets are coated with 0.5 μm of chromium and 2.5 μm of gold. About 80 layers of the sheets are stacked together, and then heated in a brazing furnace, where they fuse together. The stacked assembly is then machined into a hollow cylinder. Fused silica is pure transparent quartz glass, a dielectric, which is why the machine is called a "dielectric wall accelerator."
A sketch of one of the assembled modules of the accelerator is shown in the patent sketch. The module is about 3 cm long, and the beam travels upward. The dielectric wall is seen as item number 81. It is surrounded by a pulse-forming device called a Blumlein. In figure 8A, the power supply charges the Blumlein. In figure 8B, silicon carbide switches surrounding the Blumlein close, shorting out the edge of the Blumlein. The energy stored in the Blumlein rushes toward the dielectric wall as a high voltage pulse.
Usage in proton therapy
Dielectric wall accelerators have the potential to replace the currently used proton accelerators in radiation therapy, due to their smaller size, cost advantages, and reduced shielding requirements.
Advantages and limitations
The DWA addresses the main issues with the current proton therapy systems—cost and size. Depending on the desired final beam energy, the conventional medical accelerator solutions (cyclotrons and small synchrotrons) can have large cost factors and space requirements, which could be circumvented by DWAs. The cost estimate for a DWA is about 20 million US dollars.
DWAs are expected to reach acceleration gradients around 100 MV/m.
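For a sense of scale, the quoted ~100 MV/m gradient implies roughly the following structure length for a therapy-grade proton beam; the 200 MeV target energy is an assumed illustrative value.

gradient_MV_per_m = 100.0        # gradient quoted above for DWAs
target_energy_MeV = 200.0        # assumed therapy-grade proton energy (illustrative)
length_m = target_energy_MeV / gradient_MV_per_m   # protons carry unit charge
print(f"~{length_m:.0f} m of accelerating structure for {target_energy_MeV:.0f} MeV protons")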
The system is a spinoff of a DOE device to inspect nuclear weapons. This system requires several new advances because of the high energies, e.g., high gradient insulators.
A wide band-gap photoconductive switch, about 4,000, is needed.
A symmetric Blumlein, typical width 1 mm.
References
External links
Dielectric Wall Accelerator G. J. Caporaso, Y.-J. Chen, S. E. Sampayan September 3, 2009, Reviews of Accelerator Science and Technology
High Gradient Dielectric Wall Accelerators Muon Collider Design Workshop, December 8–12, 2008, Thomas Jefferson National Laboratory
Ultra‐High‐Current Electron Induction Accelerators Physics today [0031-9228] Kapetanakos, C yr:1985 vol:38 iss:2 pg:58
US Patent 7924121 Dispersion-Free Radial Transmission Lines, April 12, 2011
Accelerator physics | Dielectric wall accelerator | [
"Physics"
] | 803 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
29,719,481 | https://en.wikipedia.org/wiki/Extraction%20%28chemistry%29 | Extraction in chemistry is a separation process consisting of the separation of a substance from a matrix. The distribution of a solute between two phases is an equilibrium condition described by partition theory. This is based on exactly how the analyte moves from the initial solvent into the extracting solvent. The term washing may also be used to refer to an extraction in which impurities are extracted from the solvent containing the desired compound.
Types of extraction
Liquid–liquid extraction
Acid-base extraction
Supercritical fluid extraction
Solid-liquid extraction
Solid-phase extraction
Maceration
Ultrasound-assisted extraction
Microwave-assisted extraction
Heat reflux extraction
Instant controlled pressure drop extraction (Détente instantanée contrôlée)
Perstraction
Laboratory applications and examples
Liquid-liquid extractions in the laboratory usually make use of a separatory funnel, where two immiscible phases are combined to separate a solute from one phase into the other, according to the relative solubility in each of the phases. Typically, this will be to extract organic compounds out of an aqueous phase and into an organic phase, but may also include extracting water-soluble impurities from an organic phase into an aqueous phase.
Common extractants may be arranged in increasing order of polarity according to the Hildebrand solubility parameter:
ethyl acetate < acetone < ethanol < methanol < acetone:water (7:3) < ethanol:water (8:2) < methanol:water (8:2) < water
Solid-liquid extractions at laboratory scales can use Soxhlet extractors. A solid sample containing the desired compound along with impurities is placed in the thimble. An extracting solvent is chosen in which the impurities are insoluble and the desired compound has at least limited solubility. The solvent is refluxed and condensed solvent falls into the thimble and dissolves the desired compound which then passes back through the filter into the flask. After extraction is complete the solvent can be removed and the desired product collected.
Everyday applications and examples
Boiling tea leaves in water extracts the tannins, theobromine, and caffeine out of the leaves and into the water, as an example of a solid-liquid extraction.
Decaffeination of tea and coffee is also an example of an extraction, where the caffeine molecules are removed from the tea leaves or coffee beans, often utilising supercritical fluid extraction with CO2 or standard solid-liquid extraction techniques.
See also
Sample preparation (analytical chemistry)
Solvent
Solvent impregnated resins
Thin Layer Extraction
Leaching (Chemistry)
References
Further reading
Gunt Hamburg, 2014, Thermal Process Engineering: Liquid-liquid extraction and solid-liquid extraction, see , accessed 12 May 2014.
Colin Poole & Michael Cooke, 2000, Extraction, in Encyclopedia of Separation Science, 10 Vols., , accessed 12 May 2014.
R. J. Wakeman, 2000, "Extraction, Liquid-Solid", in Kirk-Othmer Encyclopedia of Chemical Technology, , accessed 12 May 2014.
M.J.M. Wells, 2000, "Essential guides to method development in solid-phase extraction," in Encyclopedia of Separation Science, Vol. 10 (I.D. Wilson, E.R. Adlard, M. Cooke, and C.F. Poole, eds.), London:Academic Press, London, 2000, pp. 4636–4643.
External links
Analytical chemistry | Extraction (chemistry) | [
"Chemistry"
] | 708 | [
"Extraction (chemistry)",
"nan",
"Separation processes"
] |
29,722,876 | https://en.wikipedia.org/wiki/Relativistic%20similarity%20parameter | In relativistic laser-plasma physics the relativistic similarity parameter S is a dimensionless parameter defined as
S = n_e / (a_0 n_c),
where n_e is the electron plasma density, n_c = ε0 m ω0²/e² is the critical plasma density and a_0 is the normalized vector potential of the laser. Here m is the electron mass, e is the electron charge, c is the speed of light, ε0 the electric vacuum permittivity and ω0 is the laser frequency.
The concept of similarity and the similarity parameter were first introduced in plasma physics by Sergey Gordienko. It allows distinguishing between relativistically overdense and underdense plasmas .
The similarity parameter is connected to basic symmetry properties of the collisionless Vlasov equation and is thus the relativistic plasma analog of the Reynolds number in fluid mechanics. Gordienko showed that in the relativistic limit () the laser-plasma dynamics depends on three dimensionless parameters: , and , where is the duration of the laser pulse and is the typical radius of the laser waist. The main result of the relativistic similarity theory can be summarized as follows: if the parameters of the interaction (plasma density and laser amplitude) change simultaneously so that the parameter remains constant, the dynamics of the electrons remains the same.
The similarity theory allows deriving non-trivial power-law scalings for the energy of fast electrons in underdense and overdense plasmas.
References
Plasma parameters | Relativistic similarity parameter | [
"Physics"
] | 278 | [
"Plasma physics stubs",
"Plasma physics"
] |
45,194,398 | https://en.wikipedia.org/wiki/Domain%20reduction%20algorithm | Domain reduction algorithms are algorithms used to reduce constraints and degrees of freedom in order to provide solutions for partial differential equations.
References
Algorithms | Domain reduction algorithm | [
"Mathematics"
] | 27 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
41,918,279 | https://en.wikipedia.org/wiki/Talaromyces%20atroroseus | Talaromyces atroroseus is a species of fungus described as new to science in 2013. Found in soil and fruit, it was first identified from house dust collected in South Africa. The fungus produces a stable red pigment with no known toxins that, it is speculated, could be used in manufacturing, especially mass-produced foods.
References
Trichocomaceae
Fungi described in 2013
Fungi of Africa
Fungus species | Talaromyces atroroseus | [
"Biology"
] | 85 | [
"Fungi",
"Fungus species"
] |
41,918,550 | https://en.wikipedia.org/wiki/Zemor%27s%20decoding%20algorithm | In coding theory, Zemor's algorithm, designed and developed by Gilles Zemor, is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.
Zemor considered a typical class of Sipser–Spielman constructions of expander codes, where the underlying graph is a bipartite graph. Sipser and Spielman introduced a constructive family of asymptotically good linear error-correcting codes together with a simple parallel algorithm that will always remove a constant fraction of errors. The article is based on Dr. Venkatesan Guruswami's course notes.
Code construction
Zemor's algorithm is based on a type of expander graph called a Tanner graph. The construction of the code was first proposed by Tanner. The codes are based on a double cover d-regular expander G, which is a bipartite graph G = (V, E), where V is the set of vertices and E is the set of edges, with V = A ∪ B and A ∩ B = ∅, where A and B denote the two sets of vertices. Let n be the number of vertices in each group, i.e., |A| = |B| = n. The edge set E is of size N = dn, and every edge in E has one endpoint in both A and B. E_v denotes the set of edges containing the vertex v.
Assume an ordering on , therefore ordering will be done on every edges of for every . Let finite field , and for a word in , let the subword of the word will be indexed by . Let that word be denoted by . The subset of vertices and induces every word a partition into non-overlapping sub-words , where ranges over the elements of .
For constructing a code C, consider a linear subcode C₀, which is a code of length d over the alphabet F. For any vertex v, let v(1), …, v(d) be some ordering of the vertices of G adjacent to v. In this code, each bit is linked with an edge of E.
We can define the code C to be the set of binary vectors x of length N such that, for every vertex v of V, the subword of x indexed by E_v is a codeword of C₀. In this case, we can consider a special case when every edge of E is adjacent to exactly 2 vertices of V. It means that V and E make up, respectively, the vertex set and edge set of a d-regular graph G.
Let us call the code constructed in this way as code. For a given graph and a given code , there are several codes as there are different ways of ordering edges incident to a given vertex , i.e., . In fact our code consist of all codewords such that for all . The code is linear in as it is generated from a subcode , which is linear. The code is defined as for every .
The figure illustrates the graph G and the code C constructed from it.
Let λ be equal to the second largest eigenvalue of the adjacency matrix of G. Here the largest eigenvalue is d.
Two important claims are made:
Claim 1
. Let be the rate of a linear code constructed from a bipartite graph whose digit nodes have degree and whose subcode nodes have degree . If a single linear code with parameters and rate is associated with each of the subcode nodes, then .
Proof
Let be the rate of the linear code, which is equal to
Let there are subcode nodes in the graph. If the degree of the subcode is , then the code must have digits, as each digit node is connected to of the edges in the graph. Each subcode node contributes equations to parity check matrix for a total of . These equations may not be linearly independent.
Therefore,
, Since the value of , i.e., the digit node of this bipartite graph is and here , we can write as:
Claim 2
If is linear code of rate , block code length , and minimum relative distance , and if is the edge vertex incidence graph of a – regular graph with second largest eigenvalue , then the code has rate at least and minimum relative distance at least .
Proof
Let be derived from the regular graph . So, the number of variables of is and the number of constraints is . According to Alon - Chung, if is a subset of vertices of of size , then the number of edges contained in the subgraph is induced by in is at most .
As a result, any set of variables will be having at least constraints as neighbours. So the average number of variables per constraint is :
So if , then a word of relative weight , cannot be a codeword of . The inequality is satisfied for . Therefore, cannot have a non zero codeword of relative weight or less.
In matrix , we can assume that is bounded away from . For those values of in which is odd prime, there are explicit constructions of sequences of - regular bipartite graphs with arbitrarily large number of vertices such that each graph in the sequence is a Ramanujan graph. It is called Ramanujan graph as it satisfies the inequality . Certain expansion properties are visible in graph as the separation between the eigenvalues and . If the graph is Ramanujan graph, then that expression will become eventually as becomes large.
Zemor's algorithm
The iterative decoding algorithm written below alternates between the vertex sets A and B in G: it corrects the subwords indexed by the vertices in A, and then it switches to correct the subwords indexed by the vertices in B. Here edges associated with a vertex on one side of the graph are not incident to any other vertex on that side. In fact, it does not matter in which order the nodes of A (or of B) are processed. The vertex processing can also be done in parallel.
The decoder D stands for a decoder for C₀ that recovers the correct codeword whenever fewer than half the minimum distance of C₀ errors have occurred.
Decoder algorithm
Received word: z ← y
For t ← 1 to m do // m is the number of iterations
{ if (t is odd) S ← A // Here the algorithm will alternate between its two vertex sets.
else S ← B
Iteration t: For every v ∈ S, let z_{E_v} ← D(z_{E_v}) // Decoding to its nearest codeword.
}
Output: z
Explanation of the algorithm
Since G is bipartite, the set of vertices A induces the partition of the edge set E = ∪_{v∈A} E_v. The set B induces another partition, E = ∪_{v∈B} E_v.
Let y be the received vector, and recall that y has one coordinate per edge of G. The first iteration of the algorithm consists of applying the complete decoding for the code induced by E_v for every v ∈ A. This means replacing, for every v ∈ A, the subvector y_{E_v} by one of the closest codewords of C₀. Since the subsets of edges E_v are disjoint for v ∈ A, the decoding of these subvectors of y may be done in parallel.
The iteration will yield a new vector z. The next iteration consists of applying the preceding procedure to z, but with A replaced by B. In other words, it consists of decoding all the subvectors induced by the vertices of B. The coming iterations repeat those two steps alternately, applying parallel decoding to the subvectors induced by the vertices of A and to the subvectors induced by the vertices of B.
Note: [If d = n and G is the complete bipartite graph, then C is a product code of C₀ with itself and the above algorithm reduces to the natural hard iterative decoding of product codes].
Here, the number of iterations, m, is O(log n).
In general, the above algorithm can correct a code word whose Hamming weight is no more than for values of . Here, the decoding algorithm is implemented as a circuit of size and depth that returns the codeword given that error vector has weight less than .
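The round structure above can be sketched in a few lines of Python; the bipartite graph is given as the ordered edge lists of the vertices on each side, and local_decode stands for any complete nearest-codeword decoder for the inner code C₀ (both are assumed inputs here, not constructions from the article).

def zemor_decode(y, edges_at_A, edges_at_B, local_decode, num_rounds):
    # y            : received word, one symbol per edge of the bipartite graph
    # edges_at_A/B : for each vertex on side A (resp. B), the ordered indices
    #                of its incident edges
    # local_decode : maps a length-d subword to the closest codeword of C0
    z = list(y)
    for t in range(1, num_rounds + 1):
        side = edges_at_A if t % 2 == 1 else edges_at_B   # alternate A and B
        for edge_idx in side:                             # independent, could run in parallel
            decoded = local_decode([z[i] for i in edge_idx])
            for i, s in zip(edge_idx, decoded):
                z[i] = s
    return z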
Theorem: If G is a Ramanujan graph of sufficiently high degree, then for any ε > 0 the decoding algorithm can correct a constant fraction of errors in O(log n) rounds (where the big-O notation hides a dependence on ε). This can be implemented in linear time on a single processor; on n processors each round can be implemented in constant time.
Proof
Since the decoding algorithm is insensitive to the value of the edges and by linearity, we can assume that the transmitted codeword is the all zeros - vector. Let the received codeword be . The set of edges which has an incorrect value while decoding is considered. Here by incorrect value, we mean in any of the bits. Let be the initial value of the codeword, be the values after first, second . . . stages of decoding.
Here, , and . Here corresponds to those set of vertices that was not able to successfully decode their codeword in the round. From the above algorithm as number of unsuccessful vertices will be corrected in every iteration. We can prove that is a decreasing sequence.
In fact, . As we are assuming, , the above equation is in a geometric decreasing sequence.
So, when , more than rounds are necessary. Furthermore, , and if we implement the round in time, then the total sequential running time will be linear.
Drawbacks of Zemor's algorithm
It is a lengthy process, as the number of iterations the decoder algorithm takes is O(log n).
Zemor's decoding algorithm finds it difficult to decode erasures. A detailed way of how the algorithm can be improved to handle erasures is given in the references.
See also
Expander codes
Tanner graph
Linear time encoding and decoding of error-correcting codes
References
Coding theory
Error detection and correction | Zemor's decoding algorithm | [
"Mathematics",
"Engineering"
] | 1,835 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
41,919,416 | https://en.wikipedia.org/wiki/Robust%20geometric%20computation | In mathematics, specifically in computational geometry, geometric nonrobustness is a problem wherein branching decisions in computational geometry algorithms are based on approximate numerical computations, leading to various forms of unreliability including ill-formed output and software failure through crashing or infinite loops.
For instance, algorithms for problems like the construction of a convex hull rely on testing whether certain "numerical predicates" have values that are positive, negative, or zero. If an inexact floating-point computation causes a value that is near zero to have a different sign than its exact value, the resulting inconsistencies can propagate through the algorithm causing it to produce output that is far from the correct output, or even to crash.
One method for avoiding this problem involves using integers rather than floating point numbers for all coordinates and other quantities represented by the algorithm, and determining the precision required for all calculations to avoid integer overflow conditions. For instance, two-dimensional convex hulls can be computed using predicates that test the sign of quadratic polynomials, and therefore may require twice as many bits of precision within these calculations as the input numbers. When integer arithmetic cannot be used (for instance, when the result of a calculation is an algebraic number rather than an integer or rational number), a second method is to use symbolic algebra to perform all computations with exactly represented algebraic numbers rather than numerical approximations to them. A third method, sometimes called a "floating point filter", is to compute numerical predicates first using an inexact method based on floating-point arithmetic, but to maintain bounds on how accurate the result is, and repeat the calculation using slower symbolic algebra methods or numerically with additional precision when these bounds do not separate the calculated value from zero.
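The "floating point filter" idea can be sketched for the two-dimensional orientation predicate as follows; the fixed eps threshold used here is a simplification (a production filter derives the threshold from a forward error bound on the inputs), and Python's Fraction type supplies the exact fallback.

from fractions import Fraction

def orient2d_float(ax, ay, bx, by, cx, cy):
    # Fast floating-point evaluation of the orientation determinant.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orient2d_exact(ax, ay, bx, by, cx, cy):
    # Same predicate over exact rationals; Fraction(float) is lossless.
    ax, ay, bx, by, cx, cy = (Fraction(v) for v in (ax, ay, bx, by, cx, cy))
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)          # +1 left turn, -1 right turn, 0 collinear

def orient2d_filtered(ax, ay, bx, by, cx, cy, eps=1e-12):
    # Trust the float sign only when it is clearly away from zero,
    # otherwise fall back to the exact computation.
    d = orient2d_float(ax, ay, bx, by, cx, cy)
    if abs(d) > eps:
        return (d > 0) - (d < 0)
    return orient2d_exact(ax, ay, bx, by, cx, cy)

print(orient2d_filtered(0.5, 0.5, 12.0, 12.0, 24.0, 24.0))   # collinear points -> 0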
References
Computational geometry | Robust geometric computation | [
"Mathematics"
] | 361 | [
"Computational geometry",
"Computational mathematics",
"Geometry",
"Geometry stubs"
] |
37,725,306 | https://en.wikipedia.org/wiki/Ball-and-disk%20integrator | The ball-and-disk integrator is a key component of many advanced mechanical computers. Through simple mechanical means, it performs continual integration of the value of an input. Typical uses were the measurement of area or volume of material in industrial settings, range-keeping systems on ships, and tachometric bombsights. The addition of the torque amplifier by Vannevar Bush led to the differential analysers of the 1930s and 1940s.
Description and operation
The basic mechanism consists of two inputs and one output. The first input is a spinning disk, generally electrically driven, and using some sort of governor to ensure that it turns at a fixed rate. The second input is a movable carriage that holds a bearing against the input disk, along its radius. The bearing transfers motion from the disk to an output shaft. The axis of the output shaft is oriented parallel to the rails of the carriage. As the carriage slides, the bearing remains in contact with both the disk & the output, allowing one to drive the other.
The spin rate of the output shaft is governed by the displacement of the carriage; this is the "integration." When the bearing is positioned at the center of the disk, no net motion is imparted; the output shaft remains stationary. As the carriage moves the bearing away from the center and towards the edge of the disk, the bearing, and thus the output shaft, begins to rotate faster and faster. Effectively, this is a system of two gears with an infinitely variable gear ratio; when the bearing is nearer to the center of the disk, the ratio is low (or zero), and when the bearing is nearer to the edge, it is high.
The output shaft can rotate either "forward" or "backward," depending on the direction of the bearing's displacement; this is a useful property for an integrator.
Consider an example system that measures the total amount of water flowing through a sluice: A float is attached to the input carriage so the bearing moves up and down with the level of the water. As the water level rises, the bearing is pushed farther from the center of the input disk, increasing the output's rotation rate. By counting the total number of turns of the output shaft (for example, with an odometer-type device), and multiplying by the cross-sectional area of the sluice, the total amount of water flowing past the meter can be determined.
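The sluice example can be mimicked with a short discrete-time simulation; the disk speed, time step and water-level curve below are arbitrary, and the gearing between the bearing displacement and the output shaft is taken as 1 for simplicity.

import math

disk_rpm = 60.0                        # constant-speed input disk
dt = 0.1                               # time step, in minutes
def water_level(t):                    # illustrative carriage displacement (float position)
    return 2.0 + math.sin(t)

output_turns, t = 0.0, 0.0
for _ in range(600):                   # simulate 60 minutes
    r = water_level(t)                 # bearing displacement from the disk centre
    output_turns += disk_rpm * r * dt  # output rate is proportional to the displacement
    t += dt
print(round(output_turns, 1))          # total turns ~ integral of the water level over time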
History
Invention and early use
The basic concept of the ball-and-disk integrator was first described by James Thomson, brother of William Thomson, 1st Baron Kelvin. William used the concept to build the Harmonic Analyser in 1886. This system was used to calculate the coefficients of a Fourier series representing inputs dialled in as the positions of the balls. The inputs were set to measured tide heights from any port being studied. The output was then fed into a similar machine, the Harmonic Synthesiser, which spun several wheels to represent the phase of the contribution from the sun and moon. A wire running along the top of the wheels took the maximum value, which represented the tide in the port at a given time. Thomson mentioned the possibility of using the same system as a way to solve differential equations, but realized that the output torque from the integrator was too low to drive the required downstream systems of pointers.
A number of similar systems followed, notably those of Leonardo Torres Quevedo, a Spanish physicist who built several machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities.
Military computers
By the turn of the 20th century, naval ships were starting to mount guns with over-the-horizon range. At these sorts of distances, spotters in the towers could not accurately estimate range by eye, leading to the introduction of ever more complex range finding systems. Additionally, the gunners could no longer directly spot the fall of their own shot, relying on the spotters to do this and relay this information to them. At the same time the speed of the ships was increasing, consistently breaking the 20 knot barrier en masse around the time of the introduction of the Dreadnought in 1906. Centralized fire control followed in order to manage the information flow and calculations, but calculating the firing proved to be very complex and error prone.
The solution was the Dreyer table, which used a large ball-and-disk integrator as a way to compare the motion of the target relative to the ship, and thereby calculate its range and speed. Output was to a roll of paper. The first systems were introduced around 1912 and installed in 1914. Over time, the Dreyer system added more and more calculators, solving for the effects of wind, corrections between apparent and real wind speed and direction based on the ships motion, and similar calculations. By the time the Mark V systems were installed on later ships after 1918, the system might have as many as 50 people operating it in concert.
Similar devices soon appeared in other navies and for other roles. The US Navy used a somewhat simpler device known as the Rangekeeper, but this also saw continual modification over time and eventually turned into a system of equal or greater sophistication to the UK versions. A similar calculator formed the basis of the Torpedo Data Computer, which solved the more demanding problem of the very long engagement times of torpedo fire.
A well-known example is the Norden bombsight which used a slight variation on the basic design, replacing the ball with another disk. In this system the integrator was used to calculate the relative motion of objects on the ground given the altitude, airspeed, and heading. By comparing the calculated output with the actual motion of objects on the ground, any difference would be due to the effects of wind on the aircraft. Dials setting these values were used to zero out any visible drift, which resulted in accurate wind measurements, formerly a very difficult problem.
Ball disk integrators were used in the analog guidance computers of ballistic missile weapon systems as late as the mid 1970s. The Pershing 1 missile system utilized the Bendix ST-120 inertial guidance platform, combined with a mechanical analog computer, to achieve accurate guidance. The ST-120 provided accelerometer information for all three axes. The accelerometer for forward movement transmitted its position to the ball position radial arm, causing the ball fixture to move away from the disk center as acceleration increased. The disk itself represents time and rotates at a constant rate. As the ball fixture moves further out from the center of the disk, the ball spins faster. The ball speed represents the missile speed, the number of ball rotations represent distance traveled. These mechanical positions were used to determine staging events, thrust termination, and warhead separation, as well as "good guidance" signals used to complete the arming chain for the warhead. The first known use of this general concept was in the V-2 missile developed by the Von Braun group at Peenemünde. See PIGA accelerometer. It was later refined at Redstone Arsenal and applied to the Redstone rocket and subsequently Pershing 1.
References
Bibliography
Mechanical computers | Ball-and-disk integrator | [
"Physics",
"Technology"
] | 1,501 | [
"Physical systems",
"Machines",
"Mechanical computers"
] |
37,730,111 | https://en.wikipedia.org/wiki/Cornish%E2%80%93Fisher%20expansion | In probability theory, the Cornish–Fisher expansion is an asymptotic expansion used to approximate the quantiles of a probability distribution based on its cumulants.
It is named after E. A. Cornish and R. A. Fisher, who first described the technique in 1937.
Definition
For a random variable X with mean μ, variance σ², and cumulants κn, its quantile yp at order-of-quantile p can be estimated as yp = μ + σ·w(x), where x is the quantile of the standard normal distribution at p and:
w(x) = x + [γ1·He2(x)/6] + [γ2·He3(x)/24 − γ1²·(2·He3(x) + He1(x))/36] + ⋯
where Hen is the nth probabilists' Hermite polynomial. The values γ1 and γ2 are the random variable's skewness and (excess) kurtosis respectively. The value(s) in each set of brackets are the terms for that level of polynomial estimation, and all must be calculated and combined for the Cornish–Fisher expansion at that level to be valid.
Example
Let X be a random variable with mean 10, variance 25, skew 5, and excess kurtosis of 2. We can use the first two bracketed terms above, which depend only on skew and kurtosis, to estimate quantiles of this random variable. For the 95th percentile, the value for which the standard normal cumulative distribution function is 0.95 is 1.644854, which will be x. The w weight can be calculated as:
w = 1.644854 + (5/6)·(1.644854² − 1) + (2/24)·(1.644854³ − 3×1.644854) − (25/36)·(2×1.644854³ − 5×1.644854),
or about 2.55621. So the estimated 95th percentile of X is 10 + 5×2.55621 or about 22.781. For comparison, the 95th percentile of a normal random variable with mean 10 and variance 25 would be about 18.224; it makes sense that the normal random variable has a lower 95th percentile value, as the normal distribution has no skew or excess kurtosis, and so has a thinner tail than the random variable X.
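The worked example can be reproduced with a few lines of Python using the terms quoted above (higher-order terms of the expansion are omitted, as in the example):

import math
from statistics import NormalDist

def cornish_fisher_quantile(p, mean, var, skew, ex_kurt):
    x = NormalDist().inv_cdf(p)                         # standard normal quantile
    w = (x
         + (skew / 6.0) * (x**2 - 1.0)                  # He2 term
         + (ex_kurt / 24.0) * (x**3 - 3.0 * x)          # He3 term
         - (skew**2 / 36.0) * (2.0 * x**3 - 5.0 * x))   # second-order skew term
    return mean + math.sqrt(var) * w

print(cornish_fisher_quantile(0.95, mean=10.0, var=25.0, skew=5.0, ex_kurt=2.0))  # ~22.78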
References
Logical expressions
Statistical deviation and dispersion
Statistical approximations
Asymptotic theory (statistics) | Cornish–Fisher expansion | [
"Mathematics"
] | 398 | [
"Mathematical logic",
"Mathematical relations",
"Statistical approximations",
"Logical expressions",
"Approximations"
] |
37,730,840 | https://en.wikipedia.org/wiki/Hessian%20automatic%20differentiation | In applied mathematics, Hessian automatic differentiation refers to techniques based on automatic differentiation (AD)
that calculate the second derivative of an n-dimensional function, known as the Hessian matrix.
When examining a function in a neighborhood of a point, one can discard many complicated global aspects of the function and accurately approximate it with simpler functions. The quadratic approximation is the best-fitting quadratic in the neighborhood of a point, and is frequently used in engineering and science. To calculate the quadratic approximation, one must first calculate its gradient and Hessian matrix.
Let f : Rⁿ → R. For each point x ∈ Rⁿ the Hessian matrix H(x) = ∇²f(x) is the second order derivative of f at x and is a symmetric n × n matrix.
Reverse Hessian-vector products
For a given vector v ∈ Rⁿ, this method efficiently calculates the Hessian-vector product H(x)v. Thus it can be used to calculate the entire Hessian by calculating H(x)eᵢ, for i = 1, …, n.
The method works by first using forward AD to compute the directional derivative ∇f(x)·v; subsequently the method calculates the gradient of this quantity using reverse AD to yield ∇(∇f(x)·v) = H(x)v. Both of these two steps come at a time cost proportional to evaluating the function, thus the entire Hessian can be evaluated at a cost proportional to n evaluations of the function.
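A closely related variant is easy to sketch in plain Python: differentiate a gradient function in the direction v with forward-mode dual numbers, which yields H(x)v. In the sketch the gradient is written by hand for a toy function, whereas the method described above would obtain it by reverse AD; the function and inputs are illustrative.

import math

class Dual:
    # Minimal forward-mode AD value a + b*eps with eps**2 = 0.
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def cos(x):
    # cosine extended to dual numbers
    return Dual(math.cos(x.a), -math.sin(x.a) * x.b) if isinstance(x, Dual) else math.cos(x)

def grad_f(x0, x1):
    # Hand-written gradient of the toy function f(x0, x1) = x0**2 * x1 + sin(x1).
    return [2.0 * x0 * x1, x0 * x0 + cos(x1)]

def hessian_vector_product(x, v):
    # Seed the inputs with direction v and read off the derivative parts: H(x) v.
    duals = [Dual(xi, vi) for xi, vi in zip(x, v)]
    return [g.b for g in grad_f(*duals)]

print(hessian_vector_product([1.0, 2.0], [1.0, 0.0]))   # first Hessian column: [4.0, 2.0]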
Reverse Hessian: Edge_Pushing
An algorithm that calculates the entire Hessian with one forward and one reverse sweep of the computational graph is Edge_Pushing. Edge_Pushing is the result of applying the reverse gradient to the computational graph of the gradient. Naturally, this graph has n output nodes, thus in a sense one has to apply the reverse gradient method to each outgoing node. Edge_Pushing does this by taking into account overlapping calculations.
The algorithm's input is the computational graph of the function. After a preceding forward sweep where all intermediate values in the computational graph are calculated, the algorithm initiates a reverse sweep of the graph. Upon encountering a node that has a corresponding nonlinear elemental function, a new nonlinear edge is created between the node's predecessors indicating there is nonlinear interaction between them. See the example figure on the right. Appended to this nonlinear edge is an edge weight that is the second-order partial derivative of the nonlinear node in relation to its predecessors. This nonlinear edge is subsequently pushed down to further predecessors in such a way that when it reaches the independent nodes, its edge weight is the second-order partial derivative of the two independent nodes it connects.
Graph colouring techniques for Hessians
The graph colouring techniques explore sparsity patterns of the Hessian matrix and cheap Hessian vector products to obtain the entire matrix. Thus these techniques are suited for large, sparse matrices. The general strategy of any such colouring technique is as follows.
Obtain the global sparsity pattern of the Hessian
Apply a graph colouring algorithm that allows us to compact the sparsity structure.
For each desired point calculate numeric entries of the compact matrix.
Recover the Hessian matrix from the compact matrix.
Steps one and two need only be carried out once, and tend to be costly. When one wants to calculate the Hessian at numerous points (such as in an optimization routine), steps 3 and 4 are repeated.
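As a sketch of step 2, the greedy distance-1 colouring below groups Hessian columns so that no row has two nonzeros in the same colour group; each group can then be recovered from one Hessian-vector product. Symmetry-exploiting schemes such as star colouring do better; this is only an illustration, and the sparsity pattern is invented.

def greedy_column_colouring(rows):
    # rows: list of sets, each holding the column indices of the nonzeros in one row.
    n = max(max(r) for r in rows if r) + 1
    colour = [None] * n
    for j in range(n):
        forbidden = set()
        for row in rows:
            if j in row:
                forbidden.update(colour[k] for k in row if colour[k] is not None)
        c = 0
        while c in forbidden:
            c += 1
        colour[j] = c
    return colour

# Tridiagonal sparsity pattern of a 4x4 Hessian.
pattern = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3}]
print(greedy_column_colouring(pattern))   # [0, 1, 2, 0]: three Hv products recover it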
As an example, the figure on the left shows the sparsity pattern of the Hessian matrix where the columns have been appropriately coloured in such a way to allow columns of the same colour to be merged without incurring in a collision between elements.
There are a number of colouring techniques, each with a specific recovery technique. For a comprehensive survey, see. There have been successful numerical results of such methods.
References
Differential calculus
Matrices | Hessian automatic differentiation | [
"Mathematics"
] | 706 | [
"Matrices (mathematics)",
"Mathematical objects",
"Differential calculus",
"Calculus"
] |
37,732,235 | https://en.wikipedia.org/wiki/Double%20layer%20forces | Double layer forces occur between charged objects across liquids, typically water. This force acts over distances that are comparable to the Debye length, which is on the order of one to a few tens of nanometers. The strength of these forces increases with the magnitude of the surface charge density (or the electrical surface potential). For two similarly charged objects, this force is repulsive and decays exponentially at larger distances, see figure. For unequally charged objects and eventually at shorter distances, these forces may also be attractive. The theory due to Derjaguin, Landau, Verwey, and Overbeek (DLVO) combines such double layer forces together with Van der Waals forces in order to estimate the actual interaction potential between colloidal particles.
An electrical double layer develops near charged surfaces (or another charged objects) in aqueous solutions. Within this double layer, the first layer corresponds to the charged surface. These charges may originate from tightly adsorbed ions, dissociated surface groups, or substituted ions within the crystal lattice. The second layer corresponds to the diffuse layer, which contains the neutralizing charge consisting of accumulated counterions and depleted coions. The resulting potential profile between these two objects leads to differences in the ionic concentrations within the gap between these objects with respect to the bulk solution. These differences generate an osmotic pressure, which generates a force between these objects.
These forces are easily experienced when hands are washed with soap. Adsorbing soap molecules make the skin negatively charged, and the slippery feeling is caused by the strongly repulsive double layer forces. These forces are further relevant in many colloidal or biological systems, and may be responsible for their stability, formation of colloidal crystals, or their rheological properties.
Poisson–Boltzmann model
The most popular model to describe the electrical double layer is the Poisson-Boltzmann (PB) model. This model can be equally used to evaluate double layer forces. Let us discuss this model in the case of planar geometry as shown in the figure on the right. In this case, the electrical potential profile ψ(z) near a charged interface will only depend on the position z. The corresponding Poisson's equation reads in SI units
d²ψ/dz² = −ρ/(ε0ε)
where ρ is the charge density per unit volume, ε0 the dielectric permittivity of the vacuum, and ε the dielectric constant of the liquid. For a symmetric electrolyte consisting of cations and anions having a charge ±q, the charge density can be expressed as
ρ = q(c+ − c−)
where c± = N±/V are the concentrations of the cations and anions, where N± are their numbers and V the sample volume. These profiles can be related to the electrical potential by considering the fact that the chemical potential of the ions is constant. For both ions, this relation can be written as
μ± = μ±⁰ + kT ln c± ± qψ
where μ±⁰ is the reference chemical potential, T the absolute temperature, and k the Boltzmann constant. The reference chemical potential can be eliminated by applying the same equation far away from the surface where the potential is assumed to vanish and concentrations attain the bulk concentration cB. The concentration profiles thus become
where β = 1/(kT). This relation reflects the Boltzmann distribution of the ions with the energy ±qψ. Inserting these relations into the Poisson equation one obtains the PB equation
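In the notation defined above (planar geometry, symmetric electrolyte with ion charges ±q and bulk concentration cB), the standard forms of these relations are conventionally written as follows (a reconstruction of the textbook expressions, not a quotation of the original equations):

```latex
% Standard Poisson-Boltzmann relations in the notation above (reconstruction)
\frac{\mathrm{d}^{2}\psi}{\mathrm{d}z^{2}} = -\frac{\rho}{\varepsilon_{0}\,\varepsilon}, \qquad
\rho = q\,(c_{+} - c_{-}), \qquad
c_{\pm} = c_{B}\, e^{\mp \beta q \psi}, \qquad
\frac{\mathrm{d}^{2}\psi}{\mathrm{d}z^{2}} = \frac{2\, q\, c_{B}}{\varepsilon_{0}\,\varepsilon}\, \sinh(\beta q \psi)
```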
The potential profile between two plates is normally obtained by solving this equation numerically.
Once the potential profile is known, the force per unit area between the plates expressed as the disjoining pressure Π can be obtained as follows. The starting point is the Gibbs–Duhem relation for a two component system at constant temperature
Introducing the concentrations c± and using the expressions of the chemical potentials μ± given above one finds
The concentration difference can be eliminated with the Poisson equation and the resulting equation can be integrated from infinite separation of the plates to the actual separation h by realizing that
Expressing the concentration profiles in terms of the potential profiles one obtains
From a known electrical potential profile ψ(z) one can calculate the disjoining pressure from this equation at any suitable position z. Alternative derivation of the same relation for disjoining pressure involves the stress tensor.
Debye-Hückel model
When the electric potentials or charge densities are not too high, the PB equation can be simplified to the Debye-Hückel (DH) equation. By expanding the exponential function in the PB equation into a Taylor series, one obtains
where
The parameter κ−1 is referred to as the Debye length, and some representative values for a monovalent salt in water at 25°C with ε ≃ 80 are given in the table on the right. In non-aqueous solutions, Debye length can be substantially larger than the ones given in the table due to smaller dielectric constants. The DH model represents a good approximation, when the surface potentials are sufficiently low with respect to the limiting values
The numerical value refers to a monovalent salt and 25°C. In practice, the DH approximation remains rather accurate up to surface potentials that are comparable to the limiting values given above. The disjoining pressure can be obtained from the PB equation given above, which can also be simplified to the DH case by expanding into Taylor series. The resulting expression is
The substantial advantage of the DH model over the PB model is that the forces can be obtained analytically. Some of the relevant cases will be discussed below.
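As a numerical illustration of the Debye lengths mentioned above, a minimal sketch is given below; the dielectric constant of 80, the temperature of 25 °C and the salt concentrations are illustrative assumptions.

```python
# Minimal sketch: Debye length of a monovalent (1:1) salt in water at 25 degC.
# Assumes a dielectric constant of 80; the concentrations are illustrative.
import math

e    = 1.602176634e-19     # elementary charge, C
k_B  = 1.380649e-23        # Boltzmann constant, J/K
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
N_A  = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(c_molar, eps_r=80.0, T=298.15):
    """Debye length (in metres) for a symmetric 1:1 electrolyte."""
    c_number = c_molar * 1e3 * N_A             # mol/L -> ions of each sign per m^3
    kappa_sq = 2 * e**2 * c_number / (eps0 * eps_r * k_B * T)
    return 1.0 / math.sqrt(kappa_sq)

for c in (1e-4, 1e-3, 1e-2, 0.1, 1.0):         # mol/L
    print(f"{c:7.4f} M  ->  {debye_length(c)*1e9:6.2f} nm")
```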
Superposition approximation
When the surfaces are sufficiently far apart, the potential profiles originating from each individual surface will not be much perturbed by the presence of the other surface. This approximation thus suggests that one can simply add (superpose) the potential profiles originating from each surface, as illustrated in the figure. Since the potential profile passes through a minimum at the mid-plane, it is easiest to evaluate the disjoining pressure at the midplane. The solution of the DH equation for an isolated wall reads
where z is the distance from the surface and ψD the surface potential. The potential at the midplane is thus given by twice the value of this potential at a distance z = h/2. The disjoining pressure becomes
The electrostatic double layer force decays in an exponential fashion. Due to the screening by the electrolyte, the range of the force is given by the Debye length and its strength by the surface potential (or surface charge density). This approximation turns out to be exact provided the plate-plate separation is large compared to the Debye length and the surface potentials are low.
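A small numerical sketch of this exponential decay is given below; it assumes the standard superposition form Π(h) = 2ε0εκ²ψD² exp(−κh), and the surface potential and Debye length used are illustrative values, not values from the text.

```python
# Minimal sketch: double layer disjoining pressure in the superposition
# approximation, Pi(h) = 2*eps0*eps*kappa^2*psi_D^2*exp(-kappa*h).
# The surface potential and Debye length below are illustrative assumptions.
import math

eps0  = 8.8541878128e-12   # F/m
eps_r = 80.0               # water
psi_D = 0.025              # diffuse layer potential, V (25 mV, low-potential regime)
debye = 9.6e-9             # Debye length, m (about 1 mM monovalent salt)
kappa = 1.0 / debye

def pressure(h):
    """Disjoining pressure (Pa) between two identical plates at separation h (m)."""
    return 2 * eps0 * eps_r * kappa**2 * psi_D**2 * math.exp(-kappa * h)

for h_nm in (5, 10, 20, 40):
    print(f"h = {h_nm:3d} nm  ->  Pi = {pressure(h_nm * 1e-9):9.1f} Pa")
```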
This result can be simply generalized to highly charged surfaces, but only at larger separations. Even if the potential is large close to the surface, it will be small at larger distances, and can be described by the DH equation. However, in this case one has to replace the actual diffuse layer potential ψD with the effective potential ψeff. Within the PB model, this effective potential can be evaluated analytically, and reads
The superposition approximation can be easily extended to asymmetric systems. Analogous arguments lead to the expression for the disjoining pressure
where the super-scripted quantities refer to properties of the respective surface. At larger distances, oppositely charged surfaces repel and equally charged ones attract.
Charge regulating surfaces
While the superposition approximation is actually exact at larger distances, it is no longer accurate at smaller separations. Solutions of the DH or PB equations in between the plates provide a more accurate picture at these conditions. Let us only discuss the symmetric situation within the DH model here. This discussion will introduce the notion of charge regulation, which suggests that the surface charge (and the surface potential) may vary (or regulate) upon approach.
The DH equation can be solved exactly for two plates. The boundary conditions play an important role: the surface potential and the surface charge density become functions of the surface separation h, and they may differ from the corresponding quantities ψD and σ for the isolated surface. When the surface charge remains constant upon approach, one refers to the constant charge (CC) boundary conditions. In this case, the diffuse layer potential will increase upon approach. On the other hand, when the surface potential is kept constant, one refers to the constant potential (CP) boundary condition. In this case, the surface charge density decreases upon approach. Such a decrease of charge can be caused by adsorption or desorption of charged ions from the surface. Such variation of adsorbed species upon approach has also been referred to as proximal adsorption. The ability of the surface to regulate its charge can be quantified by the regulation parameter
where CD = ε0 ε κ is the diffuse layer capacitance and CI the inner (or regulation) capacitance. The CC conditions are found when p = 1 while the CP conditions for p = 0. The realistic case will be typically situated in between. By solving the DH equation one can show that diffuse layer potential varies upon approach as
while the surface charge density obeys a similar relation
The swelling pressure can be found by inserting the exact solution of the DH equation into the expressions above and one finds
Repulsion is strongest for the CC conditions (p = 1) while it is weaker for the CP conditions (p = 0). The result of the superposition approximation is always recovered at larger distances but also for p = 1/2 at all distances. The latter fact explains why the superposition approximation can be very accurate even at small separations. Surfaces regulate their charge and not infrequently the actual regulation parameter is not far away from 1/2.
The situation is exemplified in the figure below. From stability considerations one can show that p < 1 and that this parameter may also become negative. These results can be extended to the asymmetric case in a straightforward way.
When surface potentials are replaced by effective potentials, this simple DH picture is applicable for more highly charged surfaces at sufficiently large distances. At shorter distances, however, one may enter the PB regime and the regulation parameter may not remain constant. In this case, one must solve the PB equation together with an appropriate model of the surface charging process. It was demonstrated experimentally that charge regulation effects can become very important in asymmetric systems.
Extensions to other geometries
Interactions between various objects were studied within the DH and PB models by many researchers. Some of the relevant results are summarized in the following.
Non-planar geometries: Objects with geometries other than planar can be treated within the Derjaguin approximation, provided their size is substantially larger than the Debye length. This approximation has been used to estimate the force between two charged colloidal particles, as shown in the first figure of this article. The exponential nature of these repulsive forces and the fact that their range is given by the Debye length were confirmed experimentally by direct force measurements, including the surface forces apparatus, the colloidal probe technique, and optical tweezers. The interaction free energy involving two spherical particles within the DH approximation follows the Yukawa or screened Coulomb potential
where r is the center-to-center distance, Q is the particle charge, and a the particle radius. This expression is based on the superposition approximation and is only valid at large separations. This equation can be extended to more highly charged particles by reinterpreting the charge Q as an effective charge. To address the interactions in other situations, one must resort to numerical solutions of the DH or PB equation.
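A minimal sketch of this screened Coulomb interaction follows; it assumes the standard DH form U(r) = Q²/(4πε0ε)·[exp(κa)/(1+κa)]²·exp(−κr)/r, and the particle charge, radius and salt level are illustrative assumptions.

```python
# Minimal sketch of the screened Coulomb (Yukawa) pair energy between two
# identical charged spheres in the DH regime:
#   U(r) = Q^2/(4*pi*eps0*eps) * (exp(kappa*a)/(1+kappa*a))^2 * exp(-kappa*r)/r
# Particle charge, radius and salt level below are illustrative assumptions.
import math

eps0  = 8.8541878128e-12       # F/m
eps_r = 80.0
k_B_T = 1.380649e-23 * 298.15  # thermal energy at 25 degC, J
e     = 1.602176634e-19        # C

a     = 50e-9                  # particle radius, m
Z     = 100                    # number of elementary charges on each particle
Q     = Z * e
debye = 9.6e-9                 # Debye length, m
kappa = 1.0 / debye

def pair_energy(r):
    """Yukawa pair interaction energy (J) at centre-to-centre distance r (m)."""
    prefactor = Q**2 / (4 * math.pi * eps0 * eps_r)
    geometry  = (math.exp(kappa * a) / (1 + kappa * a))**2
    return prefactor * geometry * math.exp(-kappa * r) / r

for gap_nm in (5, 10, 20):                     # surface-to-surface gap
    r = 2 * a + gap_nm * 1e-9
    print(f"gap = {gap_nm:2d} nm  ->  U = {pair_energy(r) / k_B_T:8.2f} kT")
```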
Non-uniform or patchy charge distribution: Interaction between surfaces with non-uniform and periodic charge distribution has been analyzed within the DH approximation. Such surfaces are referred to have a mosaic or patch-charge distribution. One important conclusion from these studies is that there is an additional attractive electrostatic contribution, which also decays exponentially. When the non-uniformities are arranged in a quadratic lattice with spacing b, the decay length q−1 of this additional attraction can be expressed as
At high salt levels, this attraction is screened as the interaction between uniformly charged surfaces. At lower salt levels, however, the range of this attraction is related to the characteristic size of the surface charge heterogeneities.
Three-body forces: The interactions between weakly charged objects are pair-wise additive due to the linear nature of the DH approximation. On the PB level, however, attractive three-body forces are present. The interaction free energy between three objects 1, 2, and 3 can be expressed as
where Fij are the pair free energies and ΔF123 is the non-additive three-body contribution. These three-body contributions were found to be attractive on the PB level, meaning that three charged objects repel less strongly than what one would expect on the basis of pair-wise interactions alone.
Beyond Poisson-Boltzmann approximation
More accurate description of double layer interactions can be put forward on the primitive model. This model treats the electrostatic and hard-core interactions between all individual ions explicitly. However, it includes the solvent only in a "primitive" way, namely as a dielectric continuum. This model was studied in much detail in the theoretical community. Explicit expressions for the forces are mostly not available, but they are accessible with computer simulations, integral equations, or density functional theories.
The important finding from these studies is that the PB description represents only a mean-field approximation. This approximation is excellent in the so-called weak coupling regime, that is for monovalent electrolytes and weakly charged surfaces. However, this description breaks down in the strong coupling regime, which may be encountered for multivalent electrolytes, highly charged systems, or non-aqueous solvents. In the strong coupling regime, the ions are strongly correlated, meaning that each ion has an exclusion hole around itself. These correlations lead to strong ion adsorption to charged surfaces, which may lead to charge reversal and crystallization of these ions on the surface. These correlations may also induce attractive forces. The range of these forces is typically below 1 nm.
Like-charge attraction controversy
Around 1990, theoretical and experimental evidence emerged that forces between charged particles suspended in dilute solutions of monovalent electrolytes might be attractive at larger distances. This evidence contradicts the PB theory discussed above, which always predicts repulsive interactions in these situations. The theoretical treatment leading to these conclusions was strongly criticized. The experimental findings were mostly based on video-microscopy, but the underlying data analysis was questioned concerning the role of impurities, the appropriateness of image processing techniques, and the role of hydrodynamic interactions. Despite the initial criticism, accumulated evidence suggests that DLVO theory fails to account for essential physics necessary to describe the experimental observations.
While the community remains skeptical regarding the existence of effective attractions between like-charged species, recent computer molecular dynamics simulations with an explicit description of the solvent have demonstrated that the solvent plays an important role in the structure of charged species in solution, while PB and the primitive model do not account for most of these effects. Specifically, the solvent plays a key role in the charge localization of the diffuse ions in ion-rich domains that bring charged species closer together. Based on this idea, simulations have explained experimental trends such as the disappearance of a scattering peak in salt-free polyelectrolyte solutions and the structural inhomogeneities of charged colloidal particles/nanoparticles observed experimentally that PB and primitive model approaches fail to explain.
Relevance
Double layer interactions are relevant in a wide number of phenomena. These forces are responsible for the swelling of clays. They may also be responsible for the stabilization of colloidal suspensions and prevent particle aggregation of highly charged colloidal particles in aqueous suspensions. At low salt concentrations, the repulsive double layer forces can become rather long-ranged, and may lead to structuring of colloidal suspensions and eventually to the formation of colloidal crystals. Such repulsive forces may further induce blocking of surfaces during particle deposition. Double layer interactions are equally relevant for surfactant aggregates, and may be responsible for the stabilization of cubic phases made of spheroidal micelles or of lamellar phases consisting of surfactant or lipid bilayers.
See also
Colloid
Debye length
DLVO theory
Debye–Hückel theory
Derjaguin approximation
Electrical double layer
Emulsion
Flocculation
Nanoparticle
Particle aggregation
Particle deposition
Poisson–Boltzmann equation
Surface charge
van der Waals force
References
Chemistry
Materials science
Colloidal chemistry | Double layer forces | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,398 | [
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Materials science",
"Colloids",
"Surface science",
"nan"
] |
37,733,253 | https://en.wikipedia.org/wiki/Rank%20of%20a%20partition | In number theory and combinatorics, the rank of an integer partition is a certain number associated with the partition. In fact at least two different definitions of rank appear in the literature. The first definition, with which most of this article is concerned, is that the rank of a partition is the number obtained by subtracting the number of parts in the partition from the largest part in the partition. The concept was introduced by Freeman Dyson in a paper published in the journal Eureka. It was presented in the context of a study of certain congruence properties of the partition function discovered by the Indian mathematical genius Srinivasa Ramanujan. A different concept, sharing the same name, is used in combinatorics, where the rank is taken to be the size of the Durfee square of the partition.
Definition
By a partition of a positive integer n we mean a finite multiset λ = { λk, λk − 1, . . . , λ1 } of positive integers satisfying the following two conditions:
λk ≥ . . . ≥ λ2 ≥ λ1 > 0.
λk + . . . + λ2 + λ1 = n.
If λk, . . . , λ2, λ1 are distinct, that is, if
λk > . . . > λ2 > λ1 > 0
then the partition λ is called a strict partition of n.
The integers λk, λk − 1, ..., λ1 are the parts of the partition. The number of parts in the partition λ is k and the largest part in the partition is λk. The rank of the partition λ (whether ordinary or strict) is defined as λk − k.
The ranks of the partitions of n take the following values and no others:
n − 1, n −3, n −4, . . . , 2, 1, 0, −1, −2, . . . , −(n − 4), −(n − 3), −(n − 1).
The following table gives the ranks of the various partitions of the number 5.
Ranks of the partitions of the integer 5
Notations
The following notations are used to specify how many partitions have a given rank. Let n and q be positive integers and m be any integer.
The total number of partitions of n is denoted by p(n).
The number of partitions of n with rank m is denoted by N(m, n).
The number of partitions of n with rank congruent to m modulo q is denoted by N(m, q, n).
The number of strict partitions of n is denoted by Q(n).
The number of strict partitions of n with rank m is denoted by R(m, n).
The number of strict partitions of n with rank congruent to m modulo q is denoted by T(m, q, n).
For example,
p(5) = 7 , N(2, 5) = 1 , N(3, 5) = 0 , N(2, 2, 5) = 5 .
Q(5) = 3 , R(2, 5) = 1 , R(3, 5) = 0 , T(2, 2, 5) = 2.
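These values can be checked by brute-force enumeration; a minimal sketch (not part of the article) is given below.

```python
# Minimal sketch: enumerate partitions, compute Dyson's rank (largest part
# minus number of parts) and reproduce the counts quoted above for n = 5.
def partitions(n, max_part=None):
    """Yield partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def rank(p):
    return p[0] - len(p)                     # largest part minus number of parts

parts5  = list(partitions(5))
strict5 = [p for p in parts5 if len(set(p)) == len(p)]

print(len(parts5))                                    # p(5)     = 7
print(sum(1 for p in parts5 if rank(p) == 2))         # N(2, 5)  = 1
print(sum(1 for p in parts5 if rank(p) % 2 == 0))     # N(2,2,5) = 5
print(len(strict5))                                   # Q(5)     = 3
print(sum(1 for p in strict5 if rank(p) % 2 == 0))    # T(2,2,5) = 2
```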
Some basic results
Let n and q be positive integers and m be any integer.
Ramanujan's congruences and Dyson's conjecture
Srinivasa Ramanujan in a paper published in 1919 proved the following congruences involving the partition function p(n):
p(5n + 4) ≡ 0 (mod 5)
p(7n + 5) ≡ 0 (mod 7)
p(11n + 6) ≡ 0 (mod 11)
In commenting on this result, Dyson noted that " . . . although we can prove that the partitions of 5n + 4 can be divided into five equally numerous subclasses, it is unsatisfactory to receive from the proofs no concrete idea of how the division is to be made. We require a proof which will not appeal to generating functions, . . . ". Dyson introduced the idea of rank of a partition to accomplish the task he set for himself. Using this new idea, he made the following conjectures:
N(0, 5, 5n + 4) = N(1, 5, 5n + 4) = N(2, 5, 5n + 4) = N(3, 5, 5n + 4) = N(4, 5, 5n + 4)
N(0, 7, 7n + 5) = N(1, 7, 7n + 5) = N(2, 7, 7n + 5) = . . . = N(6, 7, 7n + 5)
These conjectures were proved by Atkin and Swinnerton-Dyer in 1954.
The following tables show how the partitions of the integers 4 (5 × n + 4 with n = 0) and 9 (5 × n + 4 with n = 1 ) get divided into five equally numerous subclasses.
Partitions of the integer 4
Partitions of the integer 9
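A short computational check of this five-fold division for n = 4 and n = 9 (a sketch using a simple recursive partition generator, not taken from the article):

```python
# Minimal check of Dyson's rank conjecture for 5n + 4 with n = 0 and n = 1:
# the partitions of 4 and of 9 split into five equally large rank classes mod 5.
def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in (4, 9):
    counts = [0] * 5
    for p in partitions(n):
        counts[(p[0] - len(p)) % 5] += 1
    print(n, counts)        # expected: 4 [1, 1, 1, 1, 1] and 9 [6, 6, 6, 6, 6]
```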
Generating functions
The generating function of p(n) was discovered by Euler and is well known.
The generating function for N(m, n) is given below:
The generating function for Q(n) is given below:
The generating function for R(m, n) is given below:
Alternate definition
In combinatorics, the phrase rank of a partition is sometimes used to describe a different concept: the rank of a partition λ is the largest integer i such that λ has at least i parts each of which is no smaller than i. Equivalently, this is the length of the main diagonal in the Young diagram or Ferrers diagram for λ, or the side-length of the Durfee square of λ.
The table of ranks (under this alternate definition) of partitions of 5 is given below.
Ranks of the partitions of the integer 5
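A minimal sketch computing this Durfee-square rank for the partitions of 5 (the partition list is written out by hand):

```python
# Minimal sketch: the combinatorial "rank" of a partition as the side of its
# Durfee square (largest i with at least i parts >= i), illustrated for n = 5.
def durfee_rank(p):
    """p is a non-increasing sequence of positive integers."""
    i = 0
    while i < len(p) and p[i] >= i + 1:
        i += 1
    return i

for p in [(5,), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]:
    print(p, durfee_rank(p))
```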
Further reading
Asymptotic formulas for the rank partition function:
Congruences for rank function:
Generalisation of rank to BG-rank:
See also
Crank of a partition
References
Integer partitions
Arithmetic functions
Srinivasa Ramanujan | Rank of a partition | [
"Mathematics"
] | 1,302 | [
"Integer partitions",
"Arithmetic functions",
"Number theory"
] |
37,733,775 | https://en.wikipedia.org/wiki/Ocean%20Tower | Ocean Tower SPI was an unfinished, 31-story condominium in South Padre Island, Cameron County, Texas, United States, that was imploded when it was deemed unsafe to remain standing. Construction was halted in May 2008 when cracks formed in the building's supporting columns, and investigations revealed that the core of the skyscraper had sunk by more than . Though the developers initially vowed to fix the problem, studies discovered that repairs would have been too expensive, and plans for its demolition were announced in September 2009. At the time of its controlled implosion in December 2009 the building weighed , and it was the tallest reinforced concrete structure to be demolished in that way. It was nicknamed "Faulty Towers" and "The Leaning Tower of South Padre Island".
Plans
The Ocean Tower project was developed by Coastal Constructors Southwest Ventures, a subsidiary of Zachry Construction. It was designed as a 31-story luxury high-rise featuring 147 residences, a gym, swimming pools, spa, and a media room. The podium of the building was a large parking garage with the homes beginning at above sea level. The completed building would stand tall and be one of the tallest structures in the Rio Grande Valley. The building was designed to withstand extreme winds with three massively reinforced core walls. The location was to have allowed the residences to have views across the Gulf of Mexico and the Laguna Madre. Units were to retail for $2 million.
Construction
After a month of structural testing the construction of Ocean Tower began on April 5, 2006. It continued for two years with much of the main structure completed until differential settlement saw parts of the building sink by over . Pier supports in the shifting clay more than underground began buckling, stressing beams and columns, causing cracking, spalling, and breaking, eventually causing the building to lean towards the northwest corner, cracking the wall of the adjacent garage, which abuts the tower. The official explanation was that the parking garage and the tower were mistakenly built connected, forcing the weight down upon the garage instead of on the tower's core. The use of expandable clay, which compresses when weight is applied to it, compounded the issue and allowed the parking garage to remain relatively unsettled compared to the tower itself. Preliminary evaluation showed that the tower's core had sunk , while the attached parking lot had shifted less than half that distance.
Construction was halted in the summer of 2008. Soon after, the building became known as the "leaning tower of South Padre" and was viewed as a looming eyesore.
In a letter dated July 2, 2008 the developers informed buyers about the problems that they had encountered. They reassured them "Your unit will be delivered, and the building will be stronger and safer than ever", stating that completion of the construction would be delayed by "6 to 9 months". The proposed fixes would have the garage beams separated from the tower, and new columns to be placed under the beams. Once the columns had been fully braced, then garage beams would be cut away and the foundation would be repaired. By this time more than 100 of the condominiums had been sold.
On November 4, 2008, after several engineering studies had discovered that the work needed to fix the building would prevent the project from becoming economically viable, the development was officially cancelled and purchasers were released from their unit purchase agreements.
Demolition
Any materials that could be recycled or resold, including fixtures and fittings, steel, flooring, and windows, were removed from the building before demolition. The nearby Texas Park Road 100 was closed on safety grounds just before the building was set to be razed. At 9am on December 13, 2009, the building was imploded by Controlled Demolition, Inc. By the time it fell the building weighed and is reported to be the tallest and largest reinforced concrete structure ever imploded.
The implosion was watched by a large crowd, many of whom stayed in local hotels and visited restaurants in the area. Island spokesman Dan Quandt described the event as "a very good short-term economic boost for South Padre Island".
Lawsuit
The developers have filed a $125 million lawsuit against geotechnical engineering firm Raba-Kistner Engineering and Consulting of San Antonio and structural engineers Datum Engineers of Austin and Dallas.
See also
List of tallest voluntarily demolished buildings
References
Buildings and structures in Cameron County, Texas
Demolished buildings and structures in Texas
Skyscrapers in Texas
Buildings and structures demolished in 2009
Former skyscrapers
Inclined towers in the United States
Buildings and structures demolished by controlled implosion
Unfinished buildings demolished
Unfinished buildings and structures in the United States | Ocean Tower | [
"Engineering"
] | 931 | [
"Buildings and structures demolished by controlled implosion",
"Architecture"
] |
23,536,663 | https://en.wikipedia.org/wiki/CCG-4986 | CCG-4986 is a drug which is the first non-peptide compound discovered that acts as a selective inhibitor of the regulator of G protein signalling protein subtype RGS4. Regulators of G protein signalling are proteins which act to limit and shorten the response produced inside a cell following activation of a G protein-coupled receptor. Since different RGS subtypes are expressed in different tissues and are associated with particular receptors, this makes it possible for selective inhibitors of RGS proteins to be developed, which should be able to enhance the activity of a particular receptor in a defined target tissue, but not elsewhere in the body.
References
Experimental drugs
4-Chlorophenyl compounds
4-Nitrophenyl compounds | CCG-4986 | [
"Chemistry"
] | 147 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
23,537,962 | https://en.wikipedia.org/wiki/Small%20control%20property | In nonlinear control theory, a branch of applied mathematics, a non-linear system of the form ẋ = f(x, u) is said to satisfy the small control property if for every ε > 0 there exists a δ > 0 so that for all x with 0 < ‖x‖ < δ there exists a control u with ‖u‖ < ε so that the time derivative of the system's Lyapunov function is negative definite at that point.
In other words, even if the control input is arbitrarily small, a starting configuration close enough to the origin of the system can be found that is asymptotically stabilizable by such an input.
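As a simple illustration (an assumed example, not from the article), consider the scalar system ẋ = x + u with Lyapunov function V(x) = x²/2:

```latex
% Assumed illustrative example: scalar system with V(x) = x^2/2.
\dot{x} = x + u, \qquad V(x) = \tfrac{1}{2}x^{2}, \qquad \dot{V}(x) = x\,(x + u).
% For any eps > 0 choose delta = eps/3: whenever 0 < |x| < delta, the control
% u = -2x satisfies |u| = 2|x| < eps and gives \dot{V}(x) = -x^{2} < 0,
% so the small control property holds for this system.
```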
References
Nonlinear control | Small control property | [
"Mathematics"
] | 111 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
23,537,965 | https://en.wikipedia.org/wiki/BelAZ%2075600 | The BelAZ 75600 is a series of off-highway, ultra class haul trucks developed and manufactured in Belarus by OJSC "Belarusian Autoworks" specifically for transportation of loosened rocks on technological haul roads at open-pit mining sites worldwide under different climatic conditions.
The trucks have a diesel-electric transmission. Engines are Cummins QSK78 (model 75600) or MTU 20V4000 (model 75601) generating 2610 or 2800 kW respectively.
See also
BelAZ
BelAZ 75710
Link
Official Website
References
Haul trucks
75600 | BelAZ 75600 | [
"Engineering"
] | 119 | [
"Mining equipment",
"Haul trucks"
] |
23,541,193 | https://en.wikipedia.org/wiki/National%20Centre%20for%20Physics | The National Centre for Physics is a federally funded research institute and national laboratory co-located near Quaid-i-Azam University in Pakistan.
Founded in 1999 and located in Islamabad, Pakistan, the site is dedicated to the understanding and advancement of the physical sciences and mathematical logic. It closely collaborates with and operates under the quadripartite supervision of the International Center for Theoretical Physics (ICTP) in Italy, CERN in Switzerland, and the Pakistan Atomic Energy Commission (PAEC).
History
Origins
Establishing world-class physics research institutes was proposed by a number of scientists. The roots of NCP institutes go back to when Nobel laureate professor Abdus Salam, after receiving his doctorate in physics, came back to Pakistan in 1951. Joining his alma mater, Government College University, as Professor of Mathematics in 1951, Salam made an effort to establish the physics institute but was unable to do so. The same year, he became chairman of the Mathematics Department of the Punjab University, where he tried to revolutionise the department by introducing a course on Quantum Mechanics for undergraduate students, but it was soon reverted by the vice-chancellor. He soon faced the choice between intellectual death or migration to the stimulating environment of a western institution. This choice, however, left a deep impression on him and was behind his determination to create an institution to which physicists from developing countries would come as a right to interact with their peers from industrially advanced countries without permanently leaving their own countries. This resulted in the founding of the International Centre for Theoretical Physics (ICTP) by Professor Abdus Salam in Italy.
INSC and INP
In 1974, Prof. Abdus Salam visualised the need of an institution where experts from the industrialised nations and learners from the developing countries could get together for a couple of weeks once a year to exchange views on various subjects of current interest in Physics and allied sciences. His suggestion was accepted by Chairman of Pakistan Atomic Energy Commission (PAEC) Munir Ahmad Khan and it was the year 1976 when the first International Nathiagali Summer College on Physics and Contemporary Needs (INSC) was inaugurated at Nathiagali, with co-sponsorship of ICTP and PAEC, under the directorship of Prof. Riazuddin, a student of Abdus Salam. The same year, Ishfaq Ahmad established the Institute of Nuclear Physics at the University of Engineering and Technology of Lahore where Abdus Salam was invited to give first lectures on particle physics and quantum mechanics.
Since then, it has been regularly held without break.
Foundation
The National Centre for Physics came into reality when Prof. Riazuddin arranged a one-day symposium on Frontiers of Fundamental Physics on 27 January 1999 at the Institute of Physics of Quaid-e-Azam University, only seven months after the nuclear tests (Chagai-I). All the leading scientists of the country and some visitors from CERN attended this symposium and provided their support. Prof. Riazuddin, being the founding father of NCP, was its first director-general, and the centre was inaugurated by Dr. Ishfaq Ahmad, chairman of the Pakistan Atomic Energy Commission during this period, on 16 May 2000. The director general of the European Organization for Nuclear Research (CERN), Dr. Luciano Maiani, and distinguished members of his delegation, the vice-chancellor of Quaid-i-Azam University, Dr. Tariq Saddiqui, and other dignitaries witnessed the inauguration. The first academic faculty of the institute included Munir Ahmad Khan, Pervez Hoodbhoy, Fiazuddin, Masud Ahmad, and Ishfaq Ahmad, who presented the first physics papers to the institute and CERN.
In 2008, Dr. Hamid Saleem became its director-general, succeeding his predecessor and the founding father of NCP, Prof. Riazuddin, who was made lifetime director general emeritus. The vision of Prof. Riazuddin to make NCP one of the leading physics institutes of Pakistan is now being carried by Dr. Hamid Saleem.
NCP offers research in different branches of physics such as particle physics, computational physics, astrophysics, cosmology, atmospheric physics, atomic, molecular, and optical physics, chemical physics, condensed matter physics, fluid dynamics, laser physics, mathematical physics, plasma physics, quantum field theory, nano physics, and quantum information theory.
Collaboration with CERN
NCP is collaborating with CERN in the field of experimental high-energy physics. NCP and CERN are involved in the development, testing and fabrication of 432 Resistive Plate Chambers (RPC) required for the CMS muon detector at CERN. The RPC has an excellent time resolution i.e. of the order of 1–2 nanoseconds and it will be used for the bunch tagging at LHC. At the national level, this project is a joint collaboration of NCP and PAEC, whereas at international level, NCP also collaborating with Italy, China, South Korea and US.
The RPC is a gaseous detector made using two parallel plates of bakelite with high resistivity. Each RPC for CMS will be equipped with 96 electronic channels, whose readout electronics are based on 0.14 micrometre BiCMOS technology. For the complete system, the number of readout channels is around 50,000. RNCP has an experimental high energy physics laboratory which is equipped with a high-speed, advanced data acquisition system based on VME standards. This laboratory is currently used for prototyping and testing of RPCs.
National Centre for Physics organized a three-day Grid Technology Workshop in Islamabad, Pakistan in collaboration with European Organization for Nuclear Research (CERN), Geneva from 20 to 22 October 2003. The main objective of the workshop was to provide hands-on experience to Pakistani scientists, engineers and professionals on Grid technology.
Advanced scientific computing
For accessing and managing the LHC data novel techniques like the concept of data and computing grids are used. CERN has evolved a new project called the LHC Computing Grid (LCG). NCP is a partner of CERN in this project and it is the only LCG node in Pakistan.
International Centre for Theoretical Physics (ICTP)
NCP signed a memorandum of understanding during dr. K. R. Sreenivasan, Director ICTP's visit to Pakistan from 26 to 30 June 2005. In addition, the Centre carries out research in areas that are not covered by any institute of Physics. One such area being pursued by the Centre involves a number of activities in Experimental High-Energy Physics through a co-operative agreement with CERN in Geneva, Switzerland. Besides this, NCP has collaborations with several international institutes and universities in the field of theoretical physics including AS-ICTP, Trieste, Italy; Centre for Plasma Astrophysics (CPA), K-Leuven University, Belgium; Tokyo University, Tokyo, Japan; Ruhr University, Bochum (RUB), Germany and many others. Several research papers are published in reputed international journals each year from NCP through national and international collaborations.
Project and activities
The Synchrotron Radiation Source (SRS), now deactivated.
Grid Computing for LHC Data Analysis
Pelletron (accelerator), an electron accelerator previously known as ERLP (Energy Recovery Linac Prototype).
Tandem electrostatic accelerators, a negatively charged ion gains energy by attraction to the very high positive voltage at the geometric centre of the pressure vessel.
The New Light Source, a project which has evolved from the previous 4GLS project.
Monte Carlo Generators, an electron accelerator.
Compact Muon Solenoid
Global co-operation
RNCP and other independent partners have signed formal Memorandum of Understanding agreements, which are listed below:
European Commission
See also
Riazuddin (physicist)
Munir Ahmad Khan
Ishfaq Ahmad
Pervez Hoodbhoy
Pakistan Atomic Energy Commission (PAEC)
European Organization for Nuclear Research
International Centre for Theoretical Physics
Notes
References
Energy in Pakistan
Nuclear technology in Pakistan
Pakistan federal departments and agencies
Physics laboratories
Synchrotron radiation facilities
Research institutes in Pakistan
Research Centres in Pakistan
International nuclear energy organizations
Laboratories in Pakistan
Nuclear research institutes
Particle physics facilities
International research institutes
Physics research institutes
Plasma physics facilities
Neutron facilities
Constituent institutions of Pakistan Atomic Energy Commission
Nawaz Sharif administration
Mathematical institutes
1999 establishments in Pakistan
Abdus Salam
Institutes associated with CERN
Quaid-i-Azam University | National Centre for Physics | [
"Physics",
"Materials_science",
"Engineering"
] | 1,709 | [
"Nuclear research institutes",
"International nuclear energy organizations",
"Nuclear organizations",
"Plasma physics",
"Materials testing",
"Plasma physics facilities",
"Synchrotron radiation facilities"
] |
23,542,335 | https://en.wikipedia.org/wiki/Cryogenic%20gas%20plant | A cryogenic gas plant is an industrial facility that creates molecular oxygen, molecular nitrogen, argon, krypton, helium, and xenon at relatively high purity. As air is made up of nitrogen, the most common gas in the atmosphere, at 78%, with oxygen at about 21%, and argon at about 1%, with trace gases making up the rest, cryogenic gas plants separate air inside a distillation column at cryogenic temperatures (about 100 K / −173 °C) to produce high purity gases such as argon, nitrogen, oxygen, and many more with 1 ppm or less impurities. The process is based on the general theory of the Hampson-Linde cycle of air separation, which was invented by Carl von Linde in 1895.
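As a rough arithmetic sketch, the amount of feed air needed per tonne of each product can be estimated from the composition of dry air; the calculation below assumes the approximate volume fractions 78% N2, 21% O2 and 1% Ar and complete recovery, both of which are idealisations.

```python
# Rough sketch: tonnes of feed air needed per tonne of product, assuming the
# approximate composition of dry air and complete recovery (idealisations).
molar_mass_air = 28.96          # g/mol, dry air
fractions = {                   # volume (mole) fraction, molar mass g/mol
    "nitrogen": (0.78, 28.01),
    "oxygen":   (0.21, 32.00),
    "argon":    (0.01, 39.95),
}

for gas, (x, M) in fractions.items():
    mass_fraction = x * M / molar_mass_air
    print(f"{gas:9s}: {mass_fraction:5.1%} of air by mass "
          f"-> about {1 / mass_fraction:5.1f} t of air per tonne of product")
```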
Purpose
The main purpose of a cryogenic nitrogen plant is to provide a customer with high purity gaseous nitrogen (GAN), liquid nitrogen (LIN), liquid argon (LAR) and high purity liquid argon (PLAR), along with extracting trace gases like krypton, xenon and helium. High purity liquid material such as oxygen or nitrogen produced by cryogenic plants is stored in a local tank and used as a strategic reserve. This liquid can be vaporised to cover peaks in demand or for use when the plant is offline. Argon, xenon and helium are usually sold to customers in high pressure tank cars or trucks directly due to the smaller volumes.
Typical cryogenic nitrogen plants range from 200 ft³/hour to very large plants with a daily capacity of 63 tonnes of nitrogen (such as the Cantarell Field plant in Mexico).
The cryogenic air separation achieves high purity oxygen of more than 99.5%. The resulting high purity product can be stored as a liquid and/or filled into cylinders. These cylinders can even be distributed to customers in the medical sector, used for welding, or mixed with other gases and used as breathing gas for diving. The plant also produces nitrogen, which is used for ammonia production for the fertilizer industry, float glass manufacturing, petrochemical usage, purge gas, amine gas treatment, bearing seal gas, and polyester manufacturing.
The resulting argon gas can be used in semiconductor manufacturing and photovoltaic manufacturing.
Plant modules
A cryogenic plant is composed of the following elements:
Warm end (W/E) container
Compressor
Air receiver
Chiller (Heat exchanger)
Pre-filter
Air purification unit (APU)
Coldbox
Main heat exchanger
Boiler
Distillation column
Expansion brake turbine
Storage
Liquid oxygen tank
Vapouriser
Filling station
How the plant works
Warm end process
Atmospheric air is roughly filtered and pressurised by a compressor, which provides the product pressure to deliver to the customer. The amount of air sucked in depends on the customer’s oxygen demand.
The air receiver collects condensate and minimises pressure drop. The dry and compressed air leaves the air-to-refrigerant heat exchanger at about 10 °C.
To clean the process air further, there are different stages of filtration. First of all, more condensate is removed, then a coalescing filter acts as a gravity filter and finally an adsorber filled with activated carbon removes some hydrocarbons.
The last unit process in the warm end container is the thermal swing adsorber (TSA). The Air purification unit cleans the compressed process air by removing any residual water vapour, carbon dioxide and hydrocarbons. It comprises two vessels, valves and exhaust to allow the changeover of vessels. While one of the TSA beds is on stream the second one is regenerated by the waste gas flow, which is vented through a silencer into the ambient environment.
Coldbox process
The process air enters the main heat exchanger in the coldbox where it is cooled in counter flow with the waste gas stream. After leaving the main heat exchanger the process air has a temperature of about –112°C and is partly liquefied. The complete liquefaction is achieved through evaporation of cooled liquid oxygen in the boiler. After passing a purity control valve, process air enters on top of the distillation column and flows down through the packing material.
The stream of evaporated oxygen vapour in the shell of the boiler vents back into the distillation column. It rises through the column packing material and encounters the descending stream of liquid process air.
The liquid air descending down the column loses nitrogen. It becomes richer in oxygen and collects at the base of the column as pure liquid oxygen. It flows out into the boiler to the cold box liquid product valve. An on-line oxygen analyzer controls the opening of the liquid product valve to transfer pure low-pressure liquid oxygen into the storage tank.
The rising oxygen vapour becomes rich in nitrogen and argon. It leaves the column and exits the cold box at ambient temperature through the main heat exchanger as a waste gas. This waste gas provides purge gas to regenerate the TSA unit and to the cool the refrigeration turbine.
Turbines located at the base of the cold box provide refrigeration for the process. A stream of high-pressure gas from the main heat exchangers is cooled and expanded to low pressure in the turbine. This cold air returns to the waste stream of the heat exchanger to inject refrigeration. Energy removed by the turbine re-appears as heat in the turbine’s closed-cycle air-brake circuit. This heat is removed in an air-to-air cooler by waste gas from the cold box.
Storage and vaporising process
Liquid from the tank is compressed to high pressure in a cryogenic liquid pump. It is then vaporised in an ambient air evaporator to produce gaseous oxygen or nitrogen. The high-pressure gas then can pass into cylinders via the gas manifold or fed into a customer's product pipeline.
Applications
Applications for high purity oxygen
Furnace enrichment
Medical gases
Metal production
Welding
Rocket propellant oxidizer
Applications for high purity nitrogen production
Ammonia production for the fertilizer industry
Float glass manufacturing
Petrochemical
Purge gas
Blanketing/Inerting gas for tanks and reactor vessels
Amine gas treatment
Bearing seal gas
Polyester manufacturing
Applications for high purity argon
Semiconductor manufacturing
Photovoltaic manufacturing
Shielding gas in MIG or TIG welding
Applications for xenon
Space Travel
Medical Scan
Flash tube lamps and filler gas in some types of incandescent lightbulb
See also
Air separation
Cryogenics
Industrial gas
Liquefaction of gases
Liquid air
Liquid oxygen
Liquid nitrogen
References
Chemical processes
Industrial processes
Chemical plants
Gas technologies
Industrial gases
Cryogenics | Cryogenic gas plant | [
"Physics",
"Chemistry"
] | 1,357 | [
"Applied and interdisciplinary physics",
"Chemical plants",
"Cryogenics",
"Chemical processes",
"Industrial gases",
"nan",
"Chemical process engineering"
] |
26,442,836 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28voltage%29 | To help compare different orders of magnitude, the following list describes various voltage levels.
SI multiple
Notes
External links
Voltage
Voltage | Orders of magnitude (voltage) | [
"Physics",
"Mathematics"
] | 25 | [
"Physical quantities",
"Electrical systems",
"Quantity",
"Physical systems",
"Voltage",
"Wikipedia categories named after physical quantities",
"Orders of magnitude",
"Units of measurement"
] |
26,442,931 | https://en.wikipedia.org/wiki/Unimolecular%20ion%20decomposition | Unimolecular ion decomposition is the fragmentation of a gas phase ion in a reaction with a molecularity of one. Ions with sufficient internal energy may fragment in a mass spectrometer, which in some cases may degrade the mass spectrometer performance, but in other cases, such as tandem mass spectrometry, the fragmentation can reveal information about the structure of the ion.
Wahrhaftig diagram
A Wahrhaftig diagram (named after Austin L. Wahrhaftig) illustrates the relative contributions in unimolecular ion decomposition of direct fragmentation and fragmentation following rearrangement. The x-axis of the diagram represents the internal energy of the ion. The lower part of the diagram shows the logarithm of the rate constant k for unimolecular dissociation whereas the upper portion of the diagram indicates the probability of forming a particular product ion. The green trace in the lower part of the diagram indicates the rate of the rearrangement reaction given by
ABCD+ → AD+ + BC
and the blue trace indicates the direct cleavage reaction
ABCD+ → AB+ + CD
A rate constant of 10⁶ s⁻¹ is sufficiently fast for ion decomposition within the ion source of a typical mass spectrometer. Ions with rate constants less than 10⁶ s⁻¹ and greater than approximately 10⁵ s⁻¹ (lifetimes between 10⁻⁵ and 10⁻⁶ s) have a high probability of decomposing in the mass spectrometer between the ion source and the detector. These rate constants are indicated in the Wahrhaftig diagram by the log k = 5 and log k = 6 dashed lines.
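As a rough numerical illustration of these lifetime windows (the source residence time and total flight time below are assumptions, not values from the article), first-order kinetics gives the fraction of ions decomposing in the source and the fraction decomposing in flight:

```python
# Minimal sketch: first-order decay statistics for a unimolecular ion.
# Fraction decomposing between leaving the source (t_source) and reaching the
# detector (t_detector) is exp(-k*t_source) - exp(-k*t_detector).
# Both times below are illustrative assumptions.
import math

t_source   = 1e-6   # s, residence time in the ion source (assumption)
t_detector = 1e-5   # s, total flight time to the detector (assumption)

for k in (1e4, 1e5, 3e5, 1e6, 1e7):   # rate constants, 1/s
    in_source = 1 - math.exp(-k * t_source)
    in_flight = math.exp(-k * t_source) - math.exp(-k * t_detector)
    print(f"k = {k:8.0e} 1/s : decomposed in source {in_source:5.1%}, "
          f"in flight (metastable) {in_flight:5.1%}")
```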
Indicated on the rate constant plot are the reaction critical energy (also called the activation energy) for the formation of AD+, E0(AD+) and AB+, E0(AB+). These represent the minimum internal energy of ABCD+ required to form the respective product ions: the difference in the zero point energy of ABCD+ and that of the activated complex.
When the internal energy of ABCD+ is greater than Em(AD+), the ions are metastable (indicated by m*); this occurs near log k > 5. A metastable ion has sufficient internal energy to dissociate prior to detection. The energy Es(AD+) is defined as the internal energy of ABCD+ that results in an equal probability that ABCD+and AD+ leave the ion source, which occurs at near log k = 6. When the precursor ion has an internal energy equal to Es(AB+), the rates of formation of AD+ and AB+ are equal.
Thermodynamic and kinetic effects
Like all chemical reactions, the unimolecular decomposition of ions is subject to thermodynamic versus kinetic reaction control: the kinetic product forms faster, whereas the thermodynamic product is more stable. In the decomposition of ABCD+, the reaction to form AD+ is thermodynamically favored and the reaction to form AB+ is kinetically favored. This is because the AD+ reaction has favorable enthalpy and the AB+ reaction has favorable entropy.
In the reaction depicted schematically in the figure, the rearrangement reaction forms a double bond B=C and a new single bond A-D, which offsets the cleavage of the A-B and C-D bonds. The formation of AB+ requires bond cleavage without the offsetting bond formation. However, the steric effect makes it more difficult for the molecule to achieve the rearrangement transition state and form AD+. The activated complex with strict steric requirements is referred to as a "tight complex" whereas the transition state without such requirements is called a "loose complex".
See also
Metastability
RRKM theory
Transition state theory
Tandem mass spectrometry
References
Mass spectrometry
Physical chemistry | Unimolecular ion decomposition | [
"Physics",
"Chemistry"
] | 799 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Physical chemistry",
"Matter"
] |
26,444,049 | https://en.wikipedia.org/wiki/Sparrow%27s%20resolution%20limit | Sparrow's resolution limit is an estimate of the angular resolution limit of an optical instrument.
Rayleigh criterion
When a star is observed with a telescope, the light is diffracted or spread apart into an Airy disk. The resolution limit is defined as the minimum angular separation between two stars that can still be perceived as separate by an observer. The angular diameter of the Airy disk is determined by the aperture of the instrument.
Rayleigh's resolution limit is reached when the two stars are separated by the theoretical radius of the first dark interval around the Airy disk, which is larger than the disk's apparent radius, so that a distinct dark gap appears between the two disks. Most astronomers say they can still distinguish the two stars when they are closer than Rayleigh's resolution limit. Sparrow's Resolution Limit is reached when the combined light from two overlapping and equally bright Airy disks is constant along a line between the central peak brightness of the two Airy disks. However, at the Sparrow resolution limit the two Airy disks will appear to be just touching at their edges, which according to Sparrow is due to a brightness contrast response of the eye. The same reasoning applies to the resolution of two wavelengths in a spectroscope, where lines of emission or absorption will have a diffraction induced width analogous to the diameter of an Airy disk.
Sparrow's resolution limit is nearly equivalent to the theoretical diffraction limit of resolution, the wavelength of light divided by the aperture diameter, and about 20% smaller than the Rayleigh limit. For example, in a 200 mm (eight-inch) telescope, Rayleigh's resolution limit is 0.69 arc seconds and Sparrow's resolution limit is 0.54 arc seconds.
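These figures can be reproduced with a short calculation; the 550 nm wavelength and the Sparrow coefficient of roughly 0.95 are assumptions chosen to match the values quoted above rather than values stated in the text.

```python
# Minimal sketch: angular resolution limits of a circular aperture.
# Rayleigh: 1.22 * lambda / D ; Sparrow: about 0.95 * lambda / D (close to the
# plain diffraction limit lambda / D). Wavelength of 550 nm is an assumption.
import math

wavelength = 550e-9          # m, visible light (assumption)
aperture   = 0.200           # m, 200 mm (eight-inch) telescope
rad_to_arcsec = 180 / math.pi * 3600

rayleigh = 1.22 * wavelength / aperture * rad_to_arcsec
sparrow  = 0.95 * wavelength / aperture * rad_to_arcsec
print(f"Rayleigh limit: {rayleigh:.2f} arcsec")   # about 0.69 arcsec
print(f"Sparrow limit : {sparrow:.2f} arcsec")    # about 0.54 arcsec
```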
Dawes' limit
Sparrow's resolution limit was derived in 1916 from photographic experiments with simulated spectroscopic lines and is most commonly applied in spectroscopy, microscopy and photography. The Dawes resolution limit is more often used in visual double star astronomy.
Sparrow criterion
The Sparrow criterion expresses the resolution limit in term of the joint intensity curve when observing two very closely separated wavelengths of equal intensity.
They are considered resolved when the intensity at the midpoint between the peaks shows a minimum.
References
Eugene Hecht, 2002, "Optics"
Rainer Heintzmann & Gabriella Ficz, 2006, "Breaking the resolution limit in light microscopy", Briefings in Functional genomics, Vol. 5, pp 289–301.
Ariel Lipson, Stephen G. Lipson, Henry Lipson, 2010, "Optical Physics"
Optics | Sparrow's resolution limit | [
"Physics",
"Chemistry"
] | 519 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
26,446,274 | https://en.wikipedia.org/wiki/C4H7NO2 | The molecular formula C4H7NO2 may refer to:
(Z)-4-Amino-2-butenoic acid
1-Aminocyclopropane-1-carboxylic acid
Azetidine-2-carboxylic acid
Diacetyl monoxime
Acetoacetamide | C4H7NO2 | [
"Chemistry"
] | 82 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
26,447,770 | https://en.wikipedia.org/wiki/Cognitive%20polyphasia | Cognitive polyphasia is where different kinds of knowledge, possessing different rationalities live side by side in the same individual or collective. From Greek: polloi "many", phasis "appearance".
In his research on popular representations of psychoanalysis in France, Serge Moscovici observed that different and even contradictory modes of thinking about the same issue often co-exist. In contemporary societies people are "speaking" medical, psychological, technical, and political languages in their daily affairs. By extending this phenomenon to the level of thought he suggests that "the dynamic co-existence—interference or specialization—of the distinct modalities of knowledge, corresponding to definite relations between man and his environment, determines a state of cognitive polyphasia".
Extension and applications
Cognitive systems do not habitually develop towards a state of consistency. Instead, judgements are based on representational terms being dominant in one field of interests, while playing a minor role in other fields; that is, thoughts tend to be locally but not globally consistent. Contemporaries in Western and non-western societies alike face a variety of situations where particular modes of reasoning fit better than others. Some are more useful in the family and in matters involving relatives, and others are more apt in situations involving political, economic, societal, religious or scientific matters. Knowledge and talking are always situated.
Scientific explanations frequently contradict everyday and common-sense based explanations. Nevertheless, people tend to apply each of the two ways of explanation in their talk depending on the audience and the particular situation. This can be observed with health related issues where Sandra Jovchelovitch and Marie-Claude Gervais have shown how members of the Chinese community attend to Western medical doctors and simultaneously apply traditional Chinese treatments. In their study on modernization processes in the educated middle-class of the city of Patna in India, Wolfgang Wagner, Gerard Duveen, Matthias Themel and Jyoti Verma showed a similar behaviour with regard to mental health. Respondents in the study were more likely to mention traditional ideas about treatment in private and family contexts while displaying "modern" psychiatric reasoning in the public.
In terms of social representation theory such contradictions highlight the role of representational systems as serving the purpose of relating, social belonging and communication in everyday life. This contrasts with science that aims at veridical representations of the world according to standards of scientific evidence. Both systems of knowledge have their own domain of validity but they are at the same time fluid enough to cross-fertilize each other in dialogical encounters.
See also
Cognitive dissonance
References
Cognition
Cognitive psychology
Interpersonal communication | Cognitive polyphasia | [
"Biology"
] | 526 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
40,483,531 | https://en.wikipedia.org/wiki/Tubular%20pinch%20effect | The tubular pinch effect is a phenomenon in fluid mechanics, which has importance in membrane technology. This effect describes a tendency for suspended particles flowing through a pipe to reach an equilibrium distribution in which the region of highest particle concentration lies between the central axis and the wall of the pipe.
Mark C. Porter first suspected that the pinch effect was responsible for the return of separated particles into the core flow by the membrane. This effect was first demonstrated in 1956 by G. Segré and A. Silberberg. They had been working with dilute suspensions of spherical particles in pipelines. While the particles were flowing through the pipeline, they appeared to migrate away from the pipe axis and pipe wall and reach equilibrium in an eccentric radial position.
If:
then the pinch effect follows the relation:
This effect is of importance in cross-flow filtration and especially in dialysis. It is especially significant for particles with a diameter of about 5 μm flowing under laminar conditions; it slows down the process of filter cake formation, which prolongs the service life of the filter and keeps the filtration performance permanently high.
References
Munir Cheryan Handbuch Ultrafiltration B. Behr's Verlag GmbH&Co (1990)
Meyers Lexikon online 2.0
Siegfried Ripperger, Berechnungsansätze zur Crossflow-Filtration, Chemieingenieurtechnik, (1993) p. 533-540
G. Segré, A. Silberberg, Behaviour of Macroscopic Rigid Spheres in Poiseuille Flow. Part 1. Determination of Local Concentration by Statistical Analysis of Particle Passages Through Crossed Light Beams, Journal of Fluid Mechanics Digital Archive, (1962) p. 115-135
G. Segré, A. Silberberg Behaviour of Macroscopic Rigid Spheres in Poiseuille Flow. Part 2. Experimental Results and Interpretation, Journal of Fluid Mechanics Digital Archive, (1962) p. 136-157
Fluid mechanics
Piping
Membrane technology | Tubular pinch effect | [
"Chemistry",
"Engineering"
] | 403 | [
"Separation processes",
"Building engineering",
"Chemical engineering",
"Membrane technology",
"Civil engineering",
"Mechanical engineering",
"Piping",
"Fluid mechanics"
] |
46,542,792 | https://en.wikipedia.org/wiki/Stawell%20Underground%20Physics%20Laboratory | The Stawell Underground Physics Laboratory (SUPL) is a laboratory 1 km deep in the Stawell Gold Mine, located in Stawell, Shire of Northern Grampians, Victoria, Australia. Together with the planned Agua Negra Deep Experiment Site (ANDES) at the Agua Negra Pass, it is one of just two underground particle physics laboratories in the Southern Hemisphere and shall conduct research into dark matter.
The project is a collaboration between six international partners. It will be led by the University of Melbourne with the Swinburne University of Technology, the University of Adelaide, the Australian National University, the Australian Nuclear Science and Technology Organisation (ANSTO) and the Italian National Institute for Nuclear Physics.
It is expected that the project will collaborate closely with the Gran Sasso Laboratory in Italy.
Construction commenced in 2019, and though it was expected to be complete by the end of 2021 due to delays from corporate mergers it opened in August 2022.
General information
The project's Southern Hemisphere location has bearing on the possible differential detection of the putative WIMP-wind. Northern Hemisphere instruments are showing hints of a June "bump" of possible dark matter hits, which is expected given the galaxy's rotation, but it is hard to be sure that it is not a false signal due to some subtle seasonal environmental effect. A Southern Hemisphere location, on the opposite side of the Earth with its converse seasons, could help to provide valuable confirmation one way or another.
Secondly, the dark matter particles in question (arriving apparently from the direction of the constellation Cygnus) would have travelled through the Earth itself before reaching SUPL's instruments.
Finally, its Southern Hemisphere location also makes it potentially very sensitive to daily variation effects which would be a smoking-gun for self-interacting dark matter or dark matter with a significant stopping rate.
Inasmuch as neutrino experiments do not benefit in the same way from a Southern Hemisphere location, and IceCube is already extant, it is unlikely that any neutrino detectors will be housed at SUPL.
Funding
The first phase of the project received $1.75 million in funding in the 2015 Australian federal budget. With matching funding from Victoria, construction started in 2016 and was expected to be complete in 2017. However, a series of corporate mergers in 2015 and 2016 disrupted plans. The project was stalled when the new owners dismissed most of the labour force and placed the Stawell gold mine into a "care and maintenance" state in December 2016. In December 2017, yet another new owner announced an intention to reopen the mine and was supportive of the underground laboratory, raising hope that construction would restart.
In 2019, the project resumed. The 2019 Australian federal budget included $5 million for SUPL, and in July 2019 a memorandum of understanding between Stawell Gold Mines Pty Ltd, the Northern Grampians Shire Council, and the University of Melbourne was signed to build and operate the laboratory.
Construction
SUPL is planned to be located at a depth of about 1 km, providing approximately 2,900 metre water equivalent shielding against background cosmic rays. Because it is a decline (ramp) mine, cars and trucks can be driven to the laboratory site. The laboratory will consist of a bespoke cavity approximately 10 metres high and 10 metres wide excavated into the rock from an existing part of the mine.
The laboratory will be divided into clean room space for experiments and a "dirty" loading area. A side tunnel 5 m wide and 20 m long will house physical plant and personnel facilities.
SABRE
The first experiment planned for SUPL is SABRE (Sodium-iodide with Active Background REjection), based on 50 kg of thallium-doped sodium iodide. Two detectors will be built: one at LNGS and one at SUPL. Improving on the DAMA/LIBRA experiment, the SUPL detector implements additional features for background rejection: a 12 kL liquid scintillator veto and a muon veto. Consistent results between the two detectors would provide very strong evidence.
As of August 2022, the SABRE experiment is expected to be constructed underground in SUPL during the last months of 2022, with data collection beginning in 2023.
References
External links
SUPL and SABRE, University of Adelaide
The Stawell Underground Physics Laboratory
ARC Centre of Excellence for Particle Physics
Neutrino observatories
Underground laboratories
Research institutes in Australia
Physics laboratories
Physics beyond the Standard Model
Laboratories in Australia
2015 establishments in Australia
Stawell, Victoria | Stawell Underground Physics Laboratory | [
"Physics"
] | 923 | [
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
25,044,940 | https://en.wikipedia.org/wiki/Capacitive%20deionization | Capacitive deionization (CDI) is a technology to deionize water by applying an electrical potential difference over two electrodes, which are often made of porous carbon. In other words, CDI is an electro-sorption method using a combination of a sorption media and an electrical field to separate ions and charged particles. Anions, ions with a negative charge, are removed from the water and are stored in the positively polarized electrode. Likewise, cations (positive charge) are stored in the cathode, which is the negatively polarized electrode.
Today, CDI is mainly used for the desalination of brackish water, which is water with a low or moderate salt concentration (below 10 g/L). Other technologies for the deionization of water are, amongst others, distillation, reverse osmosis and electrodialysis. Compared to reverse osmosis and distillation, CDI is considered to be an energy-efficient technology for brackish water desalination. This is mainly because CDI removes the salt ions from the water, while the other technologies extract the water from the salt solution.
Historically, CDI has been referred to as electrochemical demineralization, "electrosorb process for desalination of water", or electrosorption of salt ions. It also goes by the names of capacitive desalination, or in the commercial literature as "CapDI".
History
In 1960 the concept of electrochemical demineralization of water was reported by Blair and Murphy. In that study, it was assumed that ions were removed by electrochemical reactions with specific chemical groups on the carbon particles in the electrodes. In 1968 the commercial relevance and long-term operation of CDI was demonstrated by Reid. In 1971 Johnson and Newman introduced theory for ion transport in porous carbon electrodes for CDI and ion storage according to a capacitor mechanism. From 1990 onward, CDI attracted more attention because of the development of new electrode materials, such as carbon aerogels and carbon nanotube electrodes. In 1996, Farmer et al. also introduced the term capacitive deionization and used the now commonly used abbreviation “CDI” for the first time. In 2004, Membrane Capacitive Deionization was introduced in a patent of Andelman.
Process
Adsorption and desorption cycles
The operation of a conventional CDI system cycles through two phases: an adsorption phase where water is desalinated and a desorption phase where the electrodes are regenerated. During the adsorption phase, a potential difference over two electrodes is applied and ions are adsorbed from the water. In the case of CDI with porous carbon electrodes, the ions are transported through the interparticle pores of the porous carbon electrode to the intraparticle pores, where the ions are electrosorbed in the so-called electrical double layers (EDLs).
After the electrodes are saturated with ions, the adsorbed ions are released for regeneration of the electrodes. The potential difference between electrodes is reversed or reduced to zero. In this way, ions leave the electrode pores and can be flushed out of the CDI cell resulting in an effluent stream with a high salt concentration, the so-called brine stream or concentrate. Part of the energy input required during the adsorption phase can be recovered during this desorption step.
Ion adsorption in Electrical Double Layers
Any amount of charge should always be compensated by the same amount of counter-charge. For example, in an aqueous solution the concentration of the anions equals the concentration of cations. However, in the EDLs formed in the intraparticle pores in a carbon-based electrode, an excess of one type of ion over the other is possible, but it has to be compensated by electrical charge in the carbon matrix. In a first approximation, this EDL can be described using the Gouy-Chapman-Stern model, which distinguishes three different layers:
The porous carbon matrix, which contains the electrical charge in the carbon structure.
A Stern layer is located between the carbon matrix and the diffuse layer. The Stern-layer is a dielectric layer, i.e. it separates two layers with charge, but it does not carry any charge itself.
The diffuse layer, in which the ions compensate the electrical charge of the carbon matrix. The ions are diffusively distributed in this layer. The width of the diffuse layer can often be approximated by the Debye length, the distance over which the counter-ion concentration decreases by a factor of 1/e. To illustrate this, the Debye length is about 3.1 nm at 20 °C in a 10 mM NaCl solution (see the example calculation below). This implies that more than 95% of the electrical charge in the carbon matrix is compensated within a diffuse layer of about 9 nm width.
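As an illustration of the numbers quoted above, the Debye length of a symmetric 1:1 electrolyte follows from λ_D = sqrt(ε_r ε_0 k_B T / (2 N_A e² I)). The short Python sketch below reproduces the roughly 3 nm figure for 10 mM NaCl at 20 °C; the temperature and the relative permittivity of water used here are assumed illustrative values.

```python
import math

# Physical constants (SI units)
EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23      # Boltzmann constant, J/K
E    = 1.602176634e-19   # elementary charge, C
NA   = 6.02214076e23     # Avogadro constant, 1/mol

def debye_length(c_molar, temperature=293.15, eps_r=80.1):
    """Debye length of a symmetric 1:1 electrolyte such as NaCl.

    c_molar     -- salt concentration in mol/L
    temperature -- absolute temperature in K
    eps_r       -- relative permittivity of the solvent (water is ~80 at 20 C)
    """
    ionic_strength = c_molar * 1000.0          # mol/m^3; for a 1:1 salt, I = c
    return math.sqrt(eps_r * EPS0 * KB * temperature
                     / (2.0 * NA * E ** 2 * ionic_strength))

# 10 mM NaCl at 20 C gives roughly 3 nm, as quoted in the text
print(f"{debye_length(0.010) * 1e9:.2f} nm")
```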
As the carbon matrix is charged, the charge has to be compensated by ionic charge in the diffuse layer. This can be done by either the adsorption of counterions, or the desorption of co-ions (ions with an equal charge sign as the one in the carbon matrix).
Besides the adsorption of ionic species due to the formation of EDLs in the intraparticle pores, ions can form a chemical bond with the surface area of the carbon particles as well. This is called specific adsorption, while the adsorption of ions in the EDLs is referred to as non-specific adsorption.
Advantages of capacitive deionization
Scalable and simple to operate
CDI has low investment and infrastructure cost, as the process discussed above does not require high pressures or temperatures, unlike membrane or thermal processes.
Low energy cost for treatment of brackish water
In CDI, the energy cost per volume of treated water scales approximately with the amount of removed salt, while in other technologies such as reverse osmosis, desalination energy scales roughly with volume of treated water. This makes CDI a viable solution for desalination of low salt content streams, or more specifically, brackish water.
Membrane capacitive deionization
By inserting two ion exchange membranes, a modified form of CDI is obtained, namely Membrane Capacitive Deionization. This modification improves the CDI cell in several ways:
Co-ions do not leave the electrodes during the adsorption phase, as described above (see Ion adsorption in Electrical Double Layers for explanation). Instead, due to the inclusion of the ion exchange membranes, these co-ions will be kept in the interparticle pores of the electrodes, which enhances the salt adsorption efficiency.
Since these co-ions cannot leave the electrodes and because the electroneutrality condition applies for the interparticle pores, extra counter-ions must pass through the ion-exchange membranes, which gives rise to a higher salt adsorption as well.
Operating MCDI at constant current mode can produce freshwater with a stable effluent concentration (see constant voltage vs. constant current for more information).
The required energy input of MCDI is lower than of CDI.
Constant voltage vs. constant current operation mode
A CDI cell can be operated in either the constant voltage or the constant current mode.
Constant voltage operation
During the adsorption phase of CDI using constant voltage operation, the effluent salt concentration decreases, but after a while, the effluent salt concentration increases again. This can be explained by the fact that the EDLs (in the case of a carbon-based CDI system) are uncharged at the beginning of an adsorption step, which results in a high potential difference (electrical driving force on the ions) over the two electrodes. When more ions are adsorbed in the EDLs, the EDL potential increases and the remaining potential difference between the electrodes, which drives the ion transport, decreases. Because of the decreasing ion removal rate, the effluent concentration increases again.
Constant current operation
Since the ionic charge transported into the electrodes is equal to the applied electric current, applying a constant current allows a better control on the effluent salt concentration compared to the constant voltage operation mode. However, for a stable effluent salt concentration membranes should be incorporated in the cell design (MCDI), as the electric current does not only induce counter-ion adsorption, but co-ion depletion as well (see Membrane capacitive deionization vs. Capacitive deionization for an explanation).
Cell geometries
Flow-by mode
The electrodes are placed in a stack with a thin spacer area in between, through which the water flows. This is by far the most commonly used mode of operation, and the electrodes are prepared in a similar fashion to those of electrical double layer capacitors, with a high carbon mass loading.
Flow-through mode
In this mode, the feed water flows straight through the electrodes, i.e. the water flows directly through the interparticle pores of the porous carbon electrodes. This approach has the benefit of ions directly migrating through these pores, hence mitigating transport limitations encountered in the flow-by mode.
Flow-electrode capacitive deionization
This geometrical design is comparable to the flow-by mode with the inclusion of membranes in front of both electrodes, but instead of having solid electrodes, a carbon suspension (slurry) flows between the membranes and the current collector. A potential difference is applied between both channels of flowing carbon slurries, the so-called flow electrodes, and water is desalinated. Since the carbon slurries flow, the electrodes do not saturate and therefore this cell design can be used for the desalination of water with high salt concentrations as well (e.g. sea water, with salt concentrations of approximately 30 g/L). A discharging step is not necessary; the carbon slurries are, after leaving the cell, mixed together and the carbon slurry can be separated from a concentrated salt water stream.
Capacitive deionization with wires
The freshwater stream can be made to flow continuously in a modified CDI configuration where the anode and cathode electrode pairs are not fixed in space, but made to move cyclically from one stream, in which the cell voltage is applied and salt is adsorbed, to another stream, where the cell voltage is reduced and salt is released.
Electrode materials
For a high performance of the CDI cell, high quality electrode materials are of utmost importance. In most cases, carbon is the material of choice for the porous electrodes. Regarding the structure of the carbon material, there are several considerations. As a high salt electrosorption capacity is important, the specific surface area and the pore size distribution of the carbon accessible for ions should be large. Furthermore, the material used should be stable and no chemical degradation of the electrode should occur in the voltage window applied for CDI. The ions should be able to move fast through the pore network of the carbon and the conductivity of the carbon should be high. Lastly, the costs of the electrode materials are important to take into consideration.
Nowadays, activated carbon (AC) is the commonly used material, as it is the most cost efficient option and it has a high specific surface area. It can be made from natural or synthetic sources. Other carbon materials used in CDI research are, for example, ordered mesoporous carbon, carbon aerogels, carbide-derived carbons, carbon nanotubes, graphene and carbon black. Recent work argues that micropores, especially pores < 1.1 nm are the most effective for salt adsorption in CDI. In order to mitigate the drawbacks associated with mass transfer and electric double layer overlapping, and simultaneously harness the benefits of higher surface area and higher electric fields that come with microporous structure, innovative ongoing efforts have attempted to integrate the advantages of micropores and mesopores by fabricating hierarchical porous carbons (HPCs) that possess multi levels of porosities.
However, activated carbon, at only US$4/kg for commodity carbon and US$15/kg for highly purified, specially selected supercapacitor carbon, remains much cheaper than the alternatives, which cost US$50/kg or more. Larger activated carbon electrodes are much cheaper than relatively small exotic carbon electrodes, and can remove just as much salt for a given current. The performance increase from novel carbons is insufficient to motivate their use at this point, especially since virtually all CDI applications under serious near-term consideration are stationary applications, where unit size is a relatively minor consideration.
Nowadays, electrode materials based on redox-chemistry are more and more studied, such as sodium manganese oxide (NMO) and prussian blue analogues (PBA).
Energy requirements
Since the ionic content of water is demixed during a CDI adsorption cycle, the entropy of the system decreases and an external energy input is required. The theoretical energy input of CDI can be calculated as follows:
where R is the gas constant (8.314 J mol−1 K−1), T the temperature (K), Φv,fresh the flow rate of the fresh water outflow (m3/s), Cfeed the concentration of ions in the feed water (mol/m3) and Cfresh the ion concentration in the fresh water outflow (mol/m3) of the CDI cell. α is defined as Cfeed/Cfresh and β as Cfeed/Cconc, with Cconc the concentration of the ions in the concentrated outflow.
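Assuming that the feed, fresh and concentrate streams behave as ideal dilute solutions of a fully dissociated 1:1 salt (an idealization; the factor of 2 below counts both ion species, and the exact form quoted in the literature varies), the thermodynamic minimum power of separation follows from the free energy of de-mixing together with the water and salt balances:

```latex
\Delta \dot{G}_{\min} \;=\; 2RT\,\Phi_{v,\mathrm{fresh}}\,C_{\mathrm{feed}}
\left[\,-\frac{\ln \alpha}{\alpha}\;-\;\frac{(\alpha-1)\,\ln \beta}{\alpha\,(1-\beta)}\,\right],
\qquad
\frac{\Phi_{v,\mathrm{conc}}}{\Phi_{v,\mathrm{fresh}}}
\;=\;\frac{\beta\,(\alpha-1)}{\alpha\,(1-\beta)} .
```

Dividing by Φv,fresh gives the minimum energy per cubic metre of fresh water produced; for example, desalinating a 20 mol/m3 feed to 5 mol/m3 fresh water while rejecting a 60 mol/m3 concentrate (α = 4, β = 1/3) corresponds to roughly 0.02-0.03 kWh/m3 at room temperature, consistent with the observation that practical consumption is a large multiple of this minimum.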
In practice, the energy requirements will be significantly higher (20 times or higher) than the theoretical energy input. Important energy requirements, which are not included in the theoretical energy requirements, are pumping, and losses in the CDI cell due to internal resistances. If MCDI and CDI are compared for the energy required per removed ion, MCDI has a lower energy requirement than CDI.
Comparing CDI with reverse osmosis of water with salt concentrations lower than 20 mM, lab-scale research shows that the energy consumption in kWh per m3 freshwater produced can be lower for MCDI than for reverse osmosis.
Large-scale CDI facilities
In 2007, a 10,000 tons per day full-scale CDI plant was built in China by ESTPURE to improve reclaimed water quality. This project enables the reduction of total dissolved solids from 1,000 mg/L to 250 mg/L and of turbidity from 10 NTU to 1 NTU, a unit indicating the cloudiness of a fluid. The water recovery can reach 75%. The electrical energy consumption is 1 kWh/m3, and the cost of water treatment is 0.22 US dollars/m3.
References
External links
FCDI Research Laboratory, Dr Dong Kook Kim's group at the Korea Institute of Energy Research, Republic of Korea
Water treatment | Capacitive deionization | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,135 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
25,045,664 | https://en.wikipedia.org/wiki/Sinc%20numerical%20methods | In numerical analysis and applied mathematics, sinc numerical methods are numerical techniques for finding approximate solutions of partial differential equations and integral equations based on translates of the sinc function and on the cardinal function C(f,h), which is an expansion of f defined by
C(f,h)(x) = Σ_{k=−∞}^{∞} f(kh) sinc((x − kh)/h),
where the step size h > 0 and where the sinc function is defined by
sinc(x) = sin(πx)/(πx).
Sinc approximation methods excel for problems whose solutions may have singularities, or infinite domains, or boundary layers.
The truncated sinc expansion of f is defined by the finite series
C_N(f,h)(x) = Σ_{k=−N}^{N} f(kh) sinc((x − kh)/h).
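Assuming the standard definitions above, a minimal Python sketch of the truncated cardinal expansion is shown below; the example function, step size and truncation level are illustrative choices.

```python
import numpy as np

def cardinal_approx(f, x, h, N):
    """Truncated cardinal expansion C_N(f,h)(x) = sum_{k=-N..N} f(kh) sinc((x - kh)/h)."""
    k = np.arange(-N, N + 1)
    # np.sinc(t) = sin(pi t)/(pi t), matching the definition used above
    basis = np.sinc((x[:, None] - k * h) / h)     # shape (len(x), 2N+1)
    return basis @ f(k * h)

# Example: approximate f(x) = exp(-x^2), which decays fast enough for the
# truncated series to converge quickly
f = lambda x: np.exp(-x ** 2)
x = np.linspace(-3.0, 3.0, 201)
approx = cardinal_approx(f, x, h=0.5, N=20)
print(f"max error on [-3, 3]: {np.max(np.abs(approx - f(x))):.2e}")
```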
Sinc numerical methods cover
function approximation,
approximation of derivatives,
approximate definite and indefinite integration,
approximate solution of initial and boundary value ordinary differential equation (ODE) problems,
approximation and inversion of Fourier and Laplace transforms,
approximation of Hilbert transforms,
approximation of definite and indefinite convolution,
approximate solution of partial differential equations,
approximate solution of integral equations,
construction of conformal maps.
Indeed, sinc methods are ubiquitous for approximating every operation of calculus.
In the standard setup of the sinc numerical methods, the errors (in big O notation) are known to be O(exp(−c√n)) with some c > 0, where n is the number of nodes or bases used in the methods. However, Sugihara has found that the errors in sinc numerical methods based on the double exponential transformation are O(exp(−kn/ln n)) with some k > 0, in a setup that is also meaningful both theoretically and practically, and that these rates are best possible in a certain mathematical sense.
Reading
References
Numerical analysis | Sinc numerical methods | [
"Mathematics"
] | 295 | [
"Applied mathematics",
"Computational mathematics",
"Applied mathematics stubs",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
25,046,581 | https://en.wikipedia.org/wiki/Bow%20and%20warp%20of%20semiconductor%20wafers%20and%20substrates | Bow and warp of semiconductor wafers and substrates are measures of the flatness of wafers.
Definitions
Bow is the deviation of the center point of the median surface of a free, un-clamped wafer from a reference plane determined from the median surface (the precise construction of the reference plane is specified in the standard). This definition is based on the now-obsolete ASTM F534.
Warp is the difference between the maximum and the minimum distances of the median surface of a free, un-clamped wafer from the reference plane defined above. This definition follows ASTM F657 and ASTM F1390.
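The two definitions can be illustrated with a small numerical sketch. The reference plane below is taken as a least-squares fit to the sampled median surface, which is only an illustrative choice; the ASTM procedures construct the reference plane from specific reference points on the wafer.

```python
import numpy as np

def bow_and_warp(x, y, z):
    """Illustrative bow and warp of a free, un-clamped wafer.

    x, y, z -- 1-D arrays of median-surface sample points (same length).
    Returns (bow, warp): bow is the deviation of the centre point from the
    reference plane, warp is the peak-to-valley deviation from that plane.
    """
    # Least-squares reference plane z = a*x + b*y + c (an assumption; the ASTM
    # standards define the reference plane from specific points instead).
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, _, _, _ = np.linalg.lstsq(A, z, rcond=None)
    deviation = z - A @ coeffs

    centre = np.argmin(x ** 2 + y ** 2)          # sample closest to the wafer centre
    bow = deviation[centre]
    warp = deviation.max() - deviation.min()
    return bow, warp

# Example: a gently dished 100 mm wafer sampled on a coarse grid (units: mm)
g = np.linspace(-50, 50, 21)
X, Y = np.meshgrid(g, g)
Z = 2e-3 * (X ** 2 + Y ** 2) / 2500              # ~2 um of dish at the edge
print(bow_and_warp(X.ravel(), Y.ravel(), Z.ravel()))
```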
Modifications
The above definitions were developed for capacitance wafer thickness gauges such as ADE 9500, and later adopted by optical gauges.
Even though these standards are currently obsolete (they were withdrawn without replacement), they are still widely used for the characterization of semiconductor wafers, metal and glass substrates for MEMS devices, solar cells, and many other applications.
References
Semiconductor device fabrication | Bow and warp of semiconductor wafers and substrates | [
"Materials_science"
] | 193 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
25,050,663 | https://en.wikipedia.org/wiki/Learning%20to%20rank | Learning to rank or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems. Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
Applications
In information retrieval
Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and online advertising.
A possible architecture of a machine-learned search engine is shown in the accompanying figure.
Training data consists of queries and documents matching them together with the relevance degree of each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine relevance of each result. It is not feasible to check the relevance of all documents, and so typically a technique called pooling is used — only the top few documents, retrieved by some existing ranking models are checked. This technique may introduce selection bias. Alternatively, training data may be derived automatically by analyzing clickthrough logs (i.e. search results which got clicks from users), query chains, or such search engines' features as Google's (since-replaced) SearchWiki. Clickthrough logs can be biased by the tendency of users to click on the top search results on the assumption that they are already well-ranked.
Training data is used by a learning algorithm to produce a ranking model which computes the relevance of documents for actual queries.
Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, and so a two-phase scheme is used. First, a small number of potentially relevant documents are identified using simpler retrieval models which permit fast query evaluation, such as the vector space model, Boolean model, weighted AND, or BM25. This phase is called top- document retrieval and many heuristics were proposed in the literature to accelerate it, such as using a document's static quality score and tiered indexes. In the second phase, a more accurate but computationally expensive machine-learned model is used to re-rank these documents.
In other areas
Learning to rank algorithms have been applied in areas other than information retrieval:
In machine translation for ranking a set of hypothesized translations;
In computational biology for ranking candidate 3-D structures in protein structure prediction problems;
In recommender systems for identifying a ranked list of related news articles to recommend to a user after he or she has read a current news article.
Feature vectors
For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag of words model and vector space model used in information retrieval for representation of documents.
Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples):
Query-independent or static features — those features, which depend only on the document, but not on the query. For example, PageRank or document's length. Such features can be precomputed in off-line mode during indexing. They may be used to compute document's static quality score (or static rank), which is often used to speed up search query evaluation.
Query-dependent or dynamic features — those features, which depend both on the contents of the document and the query, such as TF-IDF score or other non-machine-learned ranking functions.
Query-level features or query features, which depend only on the query. For example, the number of words in a query.
Some examples of features, which were used in the well-known LETOR dataset:
TF, TF-IDF, BM25, and language modeling scores of document's zones (title, body, anchors text, URL) for a given query;
Lengths and IDF sums of document's zones;
Document's PageRank, HITS ranks and their variants.
Selecting and designing good features is an important area in machine learning, which is called feature engineering.
Evaluation measures
There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics.
Examples of ranking quality measures:
Mean average precision (MAP);
DCG and NDCG;
Precision@n, NDCG@n, where "@n" denotes that the metrics are evaluated only on top n documents;
Mean reciprocal rank;
Kendall's tau;
Spearman's rho.
DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. Other metrics such as MAP, MRR and precision, are defined only for binary judgments.
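As an illustration, DCG and NDCG can be computed from a list of graded relevance labels as in the sketch below, which uses the common exponential-gain form with a logarithmic discount; several variants of the DCG formula exist.

```python
import numpy as np

def dcg(relevances, k=None):
    """Discounted cumulative gain of a ranked list of graded relevance labels."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))      # log2(rank + 1)
    return np.sum((2 ** rel - 1) / discounts)

def ndcg(relevances, k=None):
    """DCG normalised by the DCG of the ideal (sorted) ordering."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels of the documents in the order the ranker returned them
print(ndcg([3, 2, 3, 0, 1, 2], k=6))
```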
Recently, several new evaluation metrics have been proposed which claim to model user satisfaction with search results better than the DCG metric:
Expected reciprocal rank (ERR);
Yandex's pfound.
Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document, than after a less relevant document.
Approaches
Learning to Rank approaches are often categorized using one of three approaches: pointwise (where individual documents are ranked), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents are ordered).
Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning to rank problems in his book Learning to Rank for Information Retrieval. He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approach. In practice, listwise approaches often outperform pairwise approaches and pointwise approaches. This statement was further supported by a large scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets.
In this section, without further notice, x denotes an object to be evaluated (for example, a document or an image), f denotes a single-value scoring hypothesis, h denotes a bi-variate or multi-variate function and L denotes the loss function.
Pointwise approach
In this case, it is assumed that each query-document pair in the training data has a numerical or ordinal score. Then the learning-to-rank problem can be approximated by a regression problem: given a single query-document pair, predict its score. Formally speaking, the pointwise approach aims at learning a function f(x) predicting the real-valued or ordinal score of a document x using the loss function L(f; x, y), where y is the document's score.
A number of existing supervised machine learning algorithms can be readily used for this purpose. Ordinal regression and classification algorithms can also be used in the pointwise approach when they are used to predict the score of a single query-document pair and the score takes a small, finite number of values.
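As a minimal sketch of the pointwise idea (not tied to any particular published system), an off-the-shelf regressor can be fitted to query-document feature vectors and relevance labels, and the candidate documents of a new query are then sorted by the predicted score; the features and labels below are invented toy data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy training data: each row is a query-document feature vector
# (e.g. BM25 score, PageRank, query length) with a graded relevance label.
X_train = np.array([[2.1, 0.8, 3], [0.3, 0.1, 3], [1.7, 0.5, 2], [0.2, 0.9, 2]])
y_train = np.array([3, 0, 2, 1])              # graded relevance judgments

model = GradientBoostingRegressor().fit(X_train, y_train)

# Rank the candidate documents of a new query by predicted relevance
X_query = np.array([[1.9, 0.4, 4], [0.5, 0.2, 4], [2.4, 0.7, 4]])
order = np.argsort(-model.predict(X_query))
print(order)                                  # document indices, most to least relevant
```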
Pairwise approach
In this case, the learning-to-rank problem is approximated by a classification problem: learning a binary classifier h(x_u, x_v) that can tell which document is better in a given pair of documents. The classifier takes two documents as its input, and the goal is to minimize a loss function L(h; x_u, x_v, y_{u,v}). The loss function typically reflects the number and magnitude of inversions in the induced ranking.
In many cases, the binary classifier h(x_u, x_v) is implemented with a scoring function f(x). As an example, RankNet adopts a probability model and defines the estimated probability that document x_u has higher quality than x_v as
P_{u,v} = CDF(f(x_u) − f(x_v)),
where CDF is a cumulative distribution function, for example the standard logistic CDF, i.e. CDF(s) = 1/(1 + exp(−s)).
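A minimal sketch of this pairwise scheme, using a linear scoring function and plain gradient descent on the cross-entropy of the logistic pairwise probability above, is shown below; the published RankNet uses neural networks and further refinements, and the document pairs here are invented toy data.

```python
import numpy as np

def ranknet_loss_and_grad(w, x_i, x_j, p_target=1.0):
    """Cross-entropy loss for one document pair under a linear scorer f(x) = w.x.

    p_target is the target probability that document i should rank above j
    (1.0 when i is strictly preferred).
    """
    s = np.dot(w, x_i) - np.dot(w, x_j)           # score difference f(x_i) - f(x_j)
    p = 1.0 / (1.0 + np.exp(-s))                  # logistic CDF of the difference
    loss = -p_target * np.log(p) - (1 - p_target) * np.log(1 - p)
    grad = (p - p_target) * (x_i - x_j)           # d(loss)/dw
    return loss, grad

# Toy training pairs (x_i, x_j) where x_i should be ranked above x_j
pairs = [(np.array([2.0, 0.5]), np.array([0.5, 0.1])),
         (np.array([1.5, 0.9]), np.array([1.0, 0.2]))]
w = np.zeros(2)
for _ in range(200):
    for x_i, x_j in pairs:
        _, g = ranknet_loss_and_grad(w, x_i, x_j)
        w -= 0.1 * g                              # gradient descent step
print(w)
```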
Listwise approach
These algorithms try to directly optimize the value of one of the above evaluation measures, averaged over all queries in the training data. This is often difficult in practice because most evaluation measures are not continuous functions with respect to the ranking model's parameters, so continuous approximations or bounds on evaluation measures have to be used; the SoftRank algorithm is one example. LambdaMART is a pairwise algorithm which has been empirically shown to approximate listwise objective functions.
List of methods
A partial list of published learning-to-rank algorithms is shown below with years of first publication of each method:
{|class="wikitable sortable"
! Year || Name || Type || Notes
|-
| 1989 || OPRF || pointwise || Polynomial regression (instead of machine learning, this work refers to pattern recognition, but the idea is the same).
|-
| 1992 || SLR || pointwise || Staged logistic regression.
|-
| 1994 || NMOpt || listwise || Non-Metric Optimization.
|-
| 1999 || MART (Multiple Additive Regression Trees)|| pairwise ||
|-
| 2000 || Ranking SVM (RankSVM) || pairwise || A more recent exposition describes an application to ranking using clickthrough logs.
|-
| 2001 || Pranking|| pointwise || Ordinal regression.
|-
| 2003 || RankBoost || pairwise ||
|-
| 2005 || RankNet || pairwise ||
|-
| 2006 || IR-SVM|| pairwise || Ranking SVM with query-level normalization in the loss function.
|-
| 2006 || LambdaRank|| pairwise/listwise || RankNet in which the pairwise loss function is multiplied by the change in the IR metric caused by a swap.
|-
| 2007 || AdaRank|| listwise ||
|-
| 2007 || FRank || pairwise || Based on RankNet; uses a different loss function, the fidelity loss.
|-
| 2007 || GBRank || pairwise ||
|-
| 2007 || ListNet || listwise ||
|-
| 2007 || McRank || pointwise ||
|-
| 2007 || QBRank || pairwise ||
|-
| 2007 || RankCosine|| listwise ||
|-
| 2007 || RankGP|| listwise ||
|-
| 2007 || RankRLS || pairwise || Regularized least-squares based ranking. The work was later extended to learning to rank from general preference graphs.
|-
| 2007 || SVMmap || listwise ||
|-
| 2008 || LambdaSMART/LambdaMART|| pairwise/listwise || Winning entry in the Yahoo Learning to Rank competition in 2010, using an ensemble of LambdaMART models. Based on MART (1999); “LambdaSMART” stands for Lambda-submodel-MART, while LambdaMART is the case with no submodel.
|-
| 2008 || ListMLE|| listwise || Based on ListNet.
|-
| 2008 || PermuRank|| listwise ||
|-
| 2008 || SoftRank|| listwise ||
|-
| 2008 || Ranking Refinement || pairwise || A semi-supervised approach to learning to rank that uses Boosting.
|-
| 2008 || SSRankBoost || pairwise|| An extension of RankBoost to learn with partially labeled data (semi-supervised learning to rank).
|-
| 2008 || SortNet || pairwise|| SortNet, an adaptive ranking algorithm which orders objects using a neural network as a comparator.
|-
| 2009 || MPBoost|| pairwise || Magnitude-preserving variant of RankBoost. The idea is that the more unequal the labels of a pair of documents are, the harder the algorithm should try to rank them.
|-
| 2009 || BoltzRank || listwise || Unlike earlier methods, BoltzRank produces a ranking model that looks during query time not just at a single document, but also at pairs of documents.
|-
| 2009 || BayesRank || listwise || A method that combines the Plackett-Luce model and a neural network to minimize the expected Bayes risk, related to NDCG, from the decision-making aspect.
|-
| 2010 || NDCG Boost || listwise || A boosting approach to optimize NDCG.
|-
| 2010 || GBlend || pairwise || Extends GBRank to the learning-to-blend problem of jointly solving multiple learning-to-rank problems with some shared features.
|-
| 2010 || IntervalRank || pairwise & listwise ||
|-
| 2010 || CRR|| pointwise & pairwise || Combined Regression and Ranking. Uses stochastic gradient descent to optimize a linear combination of a pointwise quadratic loss and a pairwise hinge loss from Ranking SVM.
|-
| 2014 || LCR|| pairwise || Applies a local low-rank assumption to collaborative ranking. Received the best student paper award at WWW'14.
|-
|2015
|FaceNet
|pairwise
|Ranks face images with the triplet metric via deep convolutional network.
|-
|2016
|XGBoost
|pairwise
|Supports various ranking objectives and evaluation metrics.
|-
|2017 || ES-Rank|| listwise || Evolutionary Strategy Learning to Rank technique with 7 fitness evaluation metrics.
|-
| 2018 || DLCM || listwise || A multi-variate ranking function that encodes multiple items from an initial ranked list (local context) with a recurrent neural network and creates the result ranking accordingly.
|-
|2018
|PolyRank
|pairwise
|Learns simultaneously the ranking and the underlying generative model from pairwise comparisons.
|-
|2018 || FATE-Net/FETA-Net|| listwise || End-to-end trainable architectures, which explicitly take all items into account to model context effects.
|-
|2019
|FastAP
|listwise
|Optimizes Average Precision to learn deep embeddings.
|-
|2019
|Mulberry || listwise & hybrid || Learns ranking policies maximizing multiple metrics across the entire dataset.
|-
|2019
|DirectRanker || pairwise || Generalisation of the RankNet architecture.
|-
| 2019 || GSF || listwise || A permutation-invariant multi-variate ranking function that encodes and ranks items with groupwise scoring functions built with deep neural networks.
|-
|2020
|RaMBO
|listwise
|Optimizes rank-based metrics using blackbox backpropagation.
|-
|2020
|PRM|| pairwise || Transformer network encoding both the dependencies among items and the interactions between the user and items.
|-
| 2020 || SetRank || listwise || A permutation-invariant multi-variate ranking function that encodes and ranks items with self-attention networks.
|-
|2021
|PiRank|| listwise || Differentiable surrogates for ranking that can exactly recover the desired metrics and scale favourably to large list sizes, significantly improving internet-scale benchmarks.
|-
|2022
|SAS-Rank
|listwise
|Combining Simulated Annealing with Evolutionary Strategy for implicit and explicit learning to rank from relevance labels.
|-
|2022
|VNS-Rank
|listwise
|Variable Neighborhood Search in 2 Novel Methodologies in AI for Learning to Rank.
|-
|2022
|VNA-Rank
|listwise
|Combining Simulated Annealing with Variable Neighbourhood Search for Learning to Rank.
|-
|2023
|GVN-Rank
|listwise
|Combining Gradient Ascent with Variable Neighbourhood Search for Learning to Rank.
|}
Note: as most supervised learning-to-rank algorithms can be applied to pointwise, pairwise and listwise case, only those methods which are specifically designed with ranking in mind are shown above.
History
Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation; a specific variant of this approach (using polynomial regression) had been published by him three years earlier. Bill Cooper proposed logistic regression for the same purpose in 1992 and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al. suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques.
Several conferences, such as NeurIPS, SIGIR and ICML have had workshops devoted to the learning-to-rank problem since the mid-2000s (decade).
Practical usage by search engines
Commercial web search engines began using machine-learned ranking systems in the 2000s (decade). One of the first search engines to start using it was AltaVista (later its technology was acquired by Overture, and then Yahoo), which launched a gradient boosting-trained ranking function in April 2003.
Bing's search is said to be powered by RankNet algorithm, which was invented at Microsoft Research in 2005.
In November 2009 a Russian search engine Yandex announced that it had significantly increased its search quality due to deployment of a new proprietary MatrixNet algorithm, a variant of gradient boosting method which uses oblivious decision trees. Recently they have also sponsored a machine-learned ranking competition "Internet Mathematics 2009" based on their own search engine's production data. Yahoo has announced a similar competition in 2010.
As of 2008, Google's Peter Norvig denied that their search engine exclusively relies on machine-learned ranking. Cuil's CEO, Tom Costello, suggests that they prefer hand-built models because they can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, which is because machine-learned models "learn what people say they like, not what people actually like".
In January 2017, the technology was included in the open source search engine Apache Solr. It is also available in the open source OpenSearch and the source-available Elasticsearch. These implementations make learning to rank widely accessible for enterprise search.
Vulnerabilities
Similar to recognition applications in computer vision, recent neural network based ranking algorithms are also found to be susceptible to covert adversarial attacks, both on the candidates and the queries. With small perturbations imperceptible to human beings, ranking order could be arbitrarily altered. In addition, model-agnostic transferable adversarial examples are found to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations.
Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense.
See also
Content-based image retrieval
Multimedia information retrieval
Image retrieval
Triplet loss
References
External links
Competitions and public datasets
LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval
Yandex's Internet Mathematics 2009
Yahoo! Learning to Rank Challenge
Microsoft Learning to Rank Datasets
Information retrieval techniques
Machine learning
Ranking functions | Learning to rank | [
"Engineering"
] | 4,122 | [
"Artificial intelligence engineering",
"Machine learning"
] |
25,050,762 | https://en.wikipedia.org/wiki/Suparnostic | suPARnostic is a simplified double monoclonal antibody sandwich enzyme-linked immunosorbent assay (ELISA) that measures the amount of soluble urokinase plasminogen activator receptor (suPAR) in blood. Elevated plasma suPAR levels have been observed in various infectious, inflammatory and autoimmune diseases. suPAR concentration positively correlates with the activation level of the immune system. suPARnostic can be used as a prognostic tool to determine the severity of a disease within a patient, but is not used as a reliable diagnostic tool, as it can detect the severity of the immune response in a patient but does not reveal the specific disease from which the patient may be suffering. Recently, increased suPAR levels were shown to be associated with increased risk of systemic inflammatory response syndrome (SIRS)/sepsis, cardiovascular disease, type 2 diabetes, infectious diseases, HIV, cancer, tuberculosis, malaria, bacterial and viral CNS infections, rheumatoid arthritis, multiple sclerosis and mortality in the general population.
Performing the suPARnostic ELISA
Performing the suPARnostic ELISA requires two antibodies with high specificity for suPAR. The blood plasma sample from the patient, which contains an unknown amount of suPAR, is immobilized in the microwells of the clear microtiter plate, and a detection antibody forms a complex with the suPAR.
Between each step the plate is rinsed with a wash buffer to dispose of any proteins that do not specifically bind to any of the wells on the plate. After the final wash step, the plate is developed by adding the TMB substrate to produce a visible signal, which indicates the quantity of suPAR in the sample. The measured absorbance can, based on the values from the standard curve, be converted to the concentration (ng/mL) of suPAR in the sample. This level can then suggest whether or not the patient is experiencing challenges to their immune system.
Principles
The suPARnostic ELISA is a simplified double monoclonal antibody sandwich assay that measures the level of suPAR and suPAR(II-III) in the body. The suPARnostic ELISA utilizes monoclonal mouse and rat antibodies against human suPAR.
The advantages of using monoclonal antibodies compared to polyclonal antibodies include high homogeneity, absence of nonspecific antibodies and no batch-to-batch or lot-to-lot variability. This results in a very robust and reliable assay.
A 'sandwich' is formed of solid-phase antibody, suPAR and peroxidase-conjugated antibody. The concentration (ng/mL plasma) of suPAR in the patient sample is determined via interpolation, based on a calibration curve prepared from seven suPAR standards. Recombinant suPAR standards are calibrated against healthy human blood donor samples. Absorbance is measured using a microtiter plate reader, at 450 nm with a 650 nm reference filter. Measurement of suPAR levels from blood samples provides greater accuracy and precision than measurement from urine or cerebrospinal fluid. suPAR level is not changed by transient illness such as a cold. It also remains stable in a blood sample after it is taken, even during storage.
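The interpolation against the standard curve can be illustrated with a four-parameter logistic (4PL) fit, a common choice for ELISA calibration; the standard concentrations, absorbances and the 4PL model below are illustrative assumptions rather than the kit's actual protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: absorbance as a function of concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative calibration data: seven standard concentrations (ng/mL)
# and their measured absorbances at 450 nm.
conc = np.array([0.6, 1.2, 2.5, 5.0, 10.0, 20.0, 40.0])
absorbance = np.array([0.11, 0.19, 0.35, 0.62, 1.05, 1.55, 1.95])

params, _ = curve_fit(four_pl, conc, absorbance, p0=[0.05, 1.0, 10.0, 2.2], maxfev=10000)

def concentration_from_absorbance(y, a, b, c, d):
    """Invert the 4PL curve to read a sample concentration off the standard curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

sample_od = 0.80
print(f"suPAR ~ {concentration_from_absorbance(sample_od, *params):.1f} ng/mL")
```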
suPARnostic measurements between 0.1 and 4.0 ng/mL suggest that a patient is healthy, with no challenges to their immune system and no signs or symptoms of an opportunistic infection or inflammation; the average level among the population is 3.4 ng/mL. However, a patient's immune system can be considered 'negatively activated' at suPAR levels above 4.0 and up to 6.0 ng/mL, indicating a potential infection or high level of inflammation. In this case, a patient's health is likely to worsen and he or she should be referred for further testing. suPARnostic measurements from 6.0 ng/mL to double digit levels can indicate a serious illness that is progressing rapidly to a critical situation. Patients in the intensive care unit average a level of 10.0 ng/mL. There is no difference in suPAR levels intrinsic to various races; however, the scale varies for male and female.
There are two suPARnostic tests available. The suPARnostic Standard ELISA (Code No. A001) is for research use and large trials, one batch consisting of 41 samples in doublets. The suPARnostic Flex ELISA (Code No. A002) has been developed for clinical applications consisting of 93 samples, is modular and flexible, and gives fully quantitative results in 2 hours.
Practical Considerations
The suPARnostic kit has a refrigerated shelf life of several years and, when frozen, may be kept for longer. The kit should sit at room temperature for half an hour before use, but it may be held at room temperature for as long as three to four hours. The suPARnostic Flex ELISA (Code No. A002) is able to provide fully quantitative results in 2 hours. suPARnostic is run as a large batch test with up to 41 samples in doublets for research purposes or 93 samples for clinical use at one time.
Although suPARnostic currently does not have FDA approval, it is CE/IVD marked for distribution throughout Europe. suPAR is a prognostic test to indicate general health, and it cannot be used as a diagnostic tool to suggest a particular illness. suPAR cannot be used in the detection of brain tumors because the suPAR molecule cannot migrate through the blood brain barrier.
References
Blood tests | Suparnostic | [
"Chemistry"
] | 1,123 | [
"Blood tests",
"Chemical pathology"
] |
25,053,711 | https://en.wikipedia.org/wiki/Adexa | ADEXA is the German trade union for all pharmaceutical employees, and also for trainees and students. ADEXA negotiates the salaries and working conditions in German public pharmacies with the employers’ federations. The headquarters of the trade union is in Hamburg.
Pharmaceutical employees being organized in this trade union are entitled to all agreed conditions of pay and contract terms such as overtime pay, bonuses and extra holiday entitlement.
For their members, ADEXA offers also legal advice and protection, lobbying, media representation and information concerning occupational politics.
History
In 1949, the Tarifgemeinschaft deutscher angestellter Apotheker was founded. In 1954, it was dissolved, and a new organisation, the Bundesverband der Angestellten in Apotheken, was founded as its successor. In 2004, it was renamed ADEXA. In 2012, ADEXA joined the European association EPhEU (Employed community Pharmacists in Europe), a network representing the interests of employed community pharmacists.
References
External links
Official homepage
Introduction in English on the ADEXA homepage
German Trade Union Confederation
Trade unions established in 1954
Organisations based in Hamburg
Pharmaceutical industry | Adexa | [
"Chemistry",
"Biology"
] | 252 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
39,190,148 | https://en.wikipedia.org/wiki/Triphosphorus%20pentanitride | Triphosphorus pentanitride is an inorganic compound with the chemical formula . Containing only phosphorus and nitrogen, this material is classified as a binary nitride. While it has been investigated for various applications this has not led to any significant industrial uses. It is a white solid, although samples often appear colored owing to impurities.
Synthesis
Triphosphorus pentanitride can be produced by reactions between various phosphorus(V) compounds and nitrogen-containing reagents (such as ammonia and sodium azide):
The reaction of the elements is claimed to produce a related material. Similar methods are used to prepare boron nitride (BN) and silicon nitride (Si3N4); however, the products are generally impure and amorphous.
Crystalline samples have been produced by the reaction of ammonium chloride and hexachlorocyclotriphosphazene or phosphorus pentachloride.
has also been prepared at room temperature, by a reaction between phosphorus trichloride and sodium amide.
Reactions
is thermally less stable than either BN or Si3N4, with decomposition to the elements occurring at temperatures above 850 °C:
2 P3N5 → 6 P + 5 N2
It is resistant to weak acids and bases, and insoluble in water at room temperature; however, it hydrolyzes upon heating to form ammonium phosphate salts.
Triphosphorus pentanitride reacts with lithium nitride and calcium nitride to form the corresponding nitridophosphate salts. Heterogeneous ammonolysis of triphosphorus pentanitride gives phosphorus nitride imides. It has been suggested that these compounds may have applications as solid electrolytes and pigments.
Structure and properties
Several polymorphs are known for triphosphorus pentanitride. The alpha‑form of triphosphorus pentanitride (α‑) is encountered at atmospheric pressure and exists at pressures up to 11 GPa, at which point it converts to the gamma‑variety (γ‑) of the compound. Upon heating γ‑ to temperatures above 2000 K at pressures between 67 and 70 GPa, it transforms into δ-. The release of pressure on the δ- polymorph does not revert it back into γ‑ or α‑. Instead, at pressures below 7 GPa, δ- converts into a fourth form of triphosphorus pentanitride, α′‑.
The structure of all polymorphs of triphosphorus pentanitride was determined by single crystal X-ray diffraction. α‑ and α′‑ are formed of a network structure of tetrahedra with 2- and 3-coordinated nitrides, γ‑ is composed of both and polyhedra while δ- is composed exclusively of corner- and edge-sharing octahedra. δ- is the most incompressible triphosphorus pentanitride, having a bulk modulus of 313 GPa.
Potential applications
Triphosphorus pentanitride has no commercial applications, although it found use as a gettering material for incandescent lamps, replacing various mixtures containing red phosphorus in the late 1960s. The lighting filaments are dipped into a suspension of P3N5 prior to being sealed into the bulb. After bulb closure, but while still on the pump, the lamps are lit, causing the P3N5 to thermally decompose into its constituent elements. Much of this is removed by the pump, but enough vapor remains to react with any residual oxygen inside the bulb. Once the vapor pressure of phosphorus is low enough, either filler gas is admitted to the bulb prior to sealing off or, if a vacuum atmosphere is desired, the bulb is sealed off at that point. The high decomposition temperature of P3N5 allows sealing machines to run faster and hotter than was possible using red phosphorus.
Related halogen-containing cyclic polymers, trimeric hexabromophosphazene (melting point 192 °C) and tetrameric octabromophosphazene (melting point 202 °C), find similar lamp gettering applications for tungsten halogen lamps, where they perform the dual processes of gettering and precise halogen dosing.
Triphosphorus pentanitride has also been investigated as a semiconductor for applications in microelectronics, particularly as a gate insulator in metal-insulator-semiconductor devices.
As a fuel in pyrotechnic obscurant mixtures, it offers some benefits over the more commonly used red phosphorus, owing mainly to its higher chemical stability. Unlike red phosphorus, can be safely mixed with strong oxidizers, even potassium chlorate. While these mixtures can burn up to 200 times faster than state-of-the-art red phosphorus mixtures, they are far less sensitive to shock and friction. Additionally, is much more resistant to hydrolysis than red phosphorus, giving pyrotechnic mixtures based on it greater stability under long-term storage.
Patents have been filed for the use of triphosphorus pentanitride in fire fighting measures.
See also
Polyphosphazene
Phosphorus mononitride
References
Nitrides
Inorganic phosphorus compounds
Solids | Triphosphorus pentanitride | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,045 | [
"Inorganic compounds",
"Phases of matter",
"Condensed matter physics",
"Solids",
"Inorganic phosphorus compounds",
"Matter"
] |
39,192,470 | https://en.wikipedia.org/wiki/Soluble%20adenylyl%20cyclase | Soluble adenylyl cyclase (sAC) is a regulatory cytosolic enzyme present in almost every cell. sAC is a source of cyclic adenosine 3’,5’ monophosphate (cAMP) – a second messenger that mediates cell growth and differentiation in organisms from bacteria to higher eukaryotes. sAC differs from the transmembrane adenylyl cyclases (tmACs) – another important source of cAMP – in that sAC is regulated by bicarbonate anions and is dispersed throughout the cell cytoplasm. sAC has been found to have various functions in physiological systems different from those of the tmACs.
Genomic context and summary
sAC is encoded in a single Homo sapiens gene identified as ADCY10 or adenylate cyclase 10 (soluble). This gene contains 33 exons spanning more than 100 kb; it seems to utilize multiple promoters, and its mRNA undergoes extensive alternative splicing.
Structure
The functional mammalian sAC consists of two heterologous catalytic domains (C1 and C2), forming the 50 kDa amino terminus of the protein. The additional ~140 kDa C terminus of the enzyme includes an autoinhibitory region, a canonical P-loop, a potential heme-binding domain, and a leucine zipper-like sequence, which serve as putative regulatory domains.
A truncated form of the enzyme includes only the C1 and C2 domains and is referred to as the minimal functional sAC variant. This truncated sAC form has much higher cAMP-forming activity than the full-length type. These sAC variants are stimulated by HCO3− and respond to all known selective sAC inhibitors. Crystal structures of this sAC variant comprising only the catalytic core, in apo form and in complex with various substrate analogs, products, and regulators, reveal a generic Class III AC architecture with sAC-specific features. The structurally related domains C1 and C2 form the typical pseudo-heterodimer with one active site. The pseudo-symmetric site accommodates the sAC-specific activator HCO3−, which activates the enzyme by triggering a rearrangement of Arg176, a residue connecting both sites. The anionic sAC inhibitor 4,4′-diisothiocyanatostilbene-2,2′-disulfonic acid (DIDS) acts as a blocker of the entrance to the active site and the bicarbonate binding pocket.
Activation by bicarbonate (HCO−3) and calcium (Ca2+)
The binding and cyclization of adenosine 5’-triphosphate (ATP) at the catalytic active site of the enzyme is coordinated by two metal cations. The catalytic activity of sAC is increased by the presence of manganese [Mn2+]. The magnesium [Mg2+]-dependent activity of sAC is regulated by calcium [Ca2+], which increases the affinity of mammalian sAC for ATP. In addition, bicarbonate [HCO−3] relieves ATP-Mg2+ substrate inhibition and increases the Vmax of the enzyme.
The open conformational state of sAC is reached when ATP, with Ca2+ bound to its γ-phosphate, binds to specific residues in the catalytic center of the enzyme. Binding of the second metal – a Mg2+ ion – to the α-phosphate of ATP leads to a conformational change of the enzyme: the closed state. The change in conformation from the open to the closed state induces esterification of the α-phosphate with the ribose of adenosine and the release of the β- and γ-phosphates; this leads to cyclization. Hydrogencarbonate stimulates the enzyme's Vmax by promoting the allosteric change that leads to active site closure, recruitment of the catalytic Mg2+ ion, and readjustment of the phosphates in the bound ATP. The activator bicarbonate binds to a site pseudo-symmetric to the active site and triggers conformational changes by recruiting Arg176 from the active site (see "Structure" above). Calcium increases substrate affinity by replacing the magnesium in the ion B site, which provides an anchoring point for the β- and γ-phosphates of the ATP substrate.
Sources of bicarbonate (HCO−3)and calcium (Ca2+)
Bicarbonate derived from carbonic anhydrase (CA)-dependent hydration of CO2.
CO2 metabolism.
Bicarbonate enters through membrane-transporting proteins or the cystic fibrosis transmembrane conductance regulator.
Calcium enters through voltage-dependent Ca2+ channels or by release from the endoplasmic reticulum.
Hydrogencarbonate and calcium activate sAC in the nucleus.
sAC inside mitochondria is activated by metabolically generated CO2 through carbonic anhydrase.
Physiological effects
Brain and nervous system
Astrocytes express several sAC splice variants, which are involved in metabolic coupling between neurons and astrocytes. An increase of potassium [K+] in the extracellular space caused by neuronal activity depolarizes the cell membrane of nearby astrocytes and facilitates the entry of hydrogencarbonate through Na+/HCO−3 cotransporters. The increase in cytosolic hydrogencarbonate activates sAC; the result of this activation is the release of lactate for use as an energy source by the neurons.
Bone
Numerous sAC splice variants are present in osteoclast and osteoblasts, and mutation in the human sAC gene is associated with low spinal density. Calcification by osteoblasts is intrinsically related with bicarbonate and calcium. Bone density experiments in mouse calvaria cultured indicates that HCO−3-sensing sAC is a physiological appropriate regulator of bone formation and/or reabsorption.
Sperm
sAC activation by bicarbonate is necessary for motility and other aspects of capacitation in the spermatozoa of mammals. In human males, mutations in the ADCY10 gene that lead to the inactivation of sAC have been linked to cases of sterility. Due to this essential role in male fertility, sAC has been explored as a potential target for non-hormonal male contraception.
References
Further reading
Signal transduction
Cell signaling
G protein-coupled receptors
Protein kinases | Soluble adenylyl cyclase | [
"Chemistry",
"Biology"
] | 1,293 | [
"G protein-coupled receptors",
"Neurochemistry",
"Biochemistry",
"Signal transduction"
] |
39,194,658 | https://en.wikipedia.org/wiki/Sand%20engine | The sand engine or sand motor () is a type of beach nourishment where a large volume of sediment is added to a coast. The natural forces of wind, waves and tides then distribute the sand along the coast over many years, preventing the need for repetitive beach nourishment. The method is expected to be more cost effective and also reduces the repeated ecological disturbances caused by replenishment.
The first sand engine was constructed off South Holland in the Netherlands. A 128 ha hook-shaped peninsula was created between Ter Heijde and Kijkduin in 2011 at the request of the Hoogheemraadschap van Delfland.
Building with nature
The sand engine differs significantly from previous beach nourishment strategies. Traditionally, shoreface nourishments consist of 1-2 million m3 of sand and these projects usually only last for 3-5 years before they need repeating. For the first sand engine, an order of magnitude more sand was used and it is expected to last many times longer. By depositing large amounts of sand in one go, the process can be carried out only every 10-20 years as opposed to 3-5-year cycles of traditional nourishments. This reduces disturbances to the seabed. Even though the initial local perturbation is quite large, the Dutch example shows that ecological stress is limited to the location of the nourishment and, over time, it stimulates the emergence of a large variety of animal and plant species. The available space for ecosystems also increases over time.
Professor Marcel Stive is considered the inventor of the sand engine.
Original Sand Engine
The first sand engine of its kind was constructed at Ter Heijde in the Netherlands at a cost of 70 million euros and was named DeltaDuin in Dutch. Work began in January 2011 and, with favourable conditions, the operation was completed in October 2011. Joop Atsma, State Secretary for Infrastructure and Environment, presented the project in November 2011, aiming to show that the technique could be useful at more locations along the Dutch coast.
A volume of 21.5 million m3 of sand, dredged from 5-10 km offshore, covered an area of 128 ha, spanning 2.4 km along the coastline and extending up to 1 km offshore. The sand was deposited in the form of a hook-shaped peninsula. Wind, wave and tide action were allowed to distribute the sand further. The project was designed to have a lifespan of up to 20 years, however in 2016 it was concluded that it would last even longer than expected. Model projections indicate that approximately 200 ha of beach area will be gained.
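As an illustrative back-of-the-envelope check of these figures (a hypothetical calculation, not part of the project documentation), the quoted volume and area imply the following average thickness, and the quoted traditional nourishment rates give a rough equivalence in years:

```python
# Illustrative arithmetic based only on the figures quoted in the text above.
volume_m3 = 21.5e6        # total nourishment volume (m^3)
area_m2 = 128 * 10_000    # 128 ha converted to m^2

avg_thickness_m = volume_m3 / area_m2
print(f"Average sand thickness: {avg_thickness_m:.1f} m")  # roughly 17 m

# Traditional shoreface nourishments: 1-2 million m^3 every 3-5 years,
# so an upper-bound traditional rate is about 2e6 m^3 / 3 years (assumption).
annual_traditional_m3 = 2e6 / 3
print(f"Equivalent to roughly {volume_m3 / annual_traditional_m3:.0f} years "
      f"of traditional nourishment volume")
```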
Bringing underwater sand to the surface has enabled beachcombers to find artifacts and remains of the prehistoric inhabitants of now-submerged Doggerland.
Their original context, however, is lost.
A similar project could provide a solution for the coast between Camperduin and Petten, where the Hondsbossche seawall is located.
Bacton Gas Terminal
In 2019 a sand engine was constructed to protect the Bacton Gas Terminal and surrounding area in Norfolk, United Kingdom, shifting two million cubic meters of sand.
References
External links
Website Zandmotor
Taming the floods, Dutch-style, Guardian, May 19 2014
Coastal engineering
Erosion
Sand
Dutch inventions
2012 establishments in the Netherlands | Sand engine | [
"Engineering"
] | 665 | [
"Coastal engineering",
"Civil engineering"
] |
39,196,397 | https://en.wikipedia.org/wiki/Deprescribing | Deprescribing is a process of tapering or stopping medications to achieve improved health outcomes by reducing exposure to medications that are potentially either harmful or no longer required. Deprescribing is important to consider with changing health and care goals over time, as well as polypharmacy and adverse effects. Deprescribing can improve adherence, cost, and health outcomes but may have adverse drug withdrawal effects. More specifically, deprescribing is the planned and supervised process of intentionally stopping a medication or reducing its dose to improve the person's health or reduce the risk of adverse side effects. Deprescribing is usually done because the drug may be causing harm, may no longer be helping the patient, or may be inappropriate for the individual patient's current situation. Deprescribing can help correct polypharmacy and prescription cascade.
Deprescribing is often done with people who have multiple long-term conditions (multimorbidity), older people, and people who have a limited life expectancy. In all of these situations, certain medications may contribute to an increased risk of adverse events, and people may benefit from a reduction in the amount of medication taken. Deprescribing aims to reduce medication burden and harm while maintaining or improving quality of life. "Simply because a patient has tolerated a therapy for a long duration does not mean that it remains an appropriate treatment. Thoughtful review of a patient's medication regimen in the context of any changes in medical status and potential future benefits should occur regularly, and those agents that may no longer be necessary should be considered for a trial of medication discontinuation."
The process of deprescribing is usually planned and supervised by healthcare professionals. To some, the definition of deprescribing includes only completely stopping a medication, while to others, deprescribing also includes dose reduction, which can improve quality of life (minimize side effects) while maintaining benefits.
History
The first published use of the term "deprescribing" was in 2003, in Michael Woodward's article 'Deprescribing: Achieving Better Health Outcomes for Older People through Reducing Medications', which appeared in the Society of Hospital Pharmacists of Australia's flagship Journal of Pharmacy Practice and Research (JPPR).
Demographics
Older people are the heaviest users of medications and frequently take five or more medications (polypharmacy). Polypharmacy is associated with increased risks of adverse events, drug interactions, falls, hospitalization, cognitive deficits, and mortality. These effects are particularly seen in high-risk prescribing. Thus, optimizing medication through targeted deprescribing is a vital part of managing chronic conditions, avoiding adverse effects and improving outcomes.
Evidence base
Deprescribing is considered a potential intervention with reported safety and feasibility. For a wide range of medications, including diuretics, blood pressure medication, sedatives, antidepressants, benzodiazepines and nitrates, adverse effects of deprescribing are rare. While deprescribing has been shown to result in fewer medications, it is less certain if deprescribing is associated with significant changes in health outcomes. Although it might be possible and safe to reduce the number of medicines that people use, reversing the potential harms associated with polypharmacy may not always be achievable. Early evidence suggested that deprescribing may reduce premature death, leading to calls to undertake a double-blind study. A placebo-controlled, double-blind, randomized controlled trial was published in 2023. This study undertook deprescribing in people over 65 years living in residential aged care. It found no change in mortality and that, if implemented in all residential aged care facilities across Australia, it could save up to $16 million annually.
Deprescribing medications may improve patient function, generate a higher quality of life, and reduce bothersome signs and symptoms. Deprescribing has been shown to reduce the number of falls people experience but not to change the risk of having the first fall. Most health outcomes remain unchanged as an effect of deprescribing. The absence of a change has been viewed as a positive outcome, as the medications can often be safely withdrawn without altering health outcomes. This absence of an effect means that older people may not miss out on potentially beneficial effects of using medications due to deprescribing.
Targeted deprescribing can improve adherence to other drugs. Deprescribing can reduce the complexity of medication schedules. Complicated schedules are difficult for people to follow correctly.
The product information provided by drug companies provides much information on how to start medications and what to expect when using them. However, it provides little information on when and how to stop medications. Research into deprescribing is accumulating, with two papers showing a rapid acceleration in using the word since 2015.
In people with multiple long-term conditions and polypharmacy, deprescribing represents a complex challenge, as clinical guidelines are usually developed for single conditions. In these cases, tools and guidelines like the Beers Criteria and STOPP/START can be used safely by clinicians, but not all patients will necessarily benefit from stopping their medication. Greater clarity about how far clinicians can go beyond the guidelines, and about the responsibility they need to take, could help them prescribe and deprescribe in complex cases. Further factors that can help clinicians tailor their decisions to the individual are: access to detailed data on the people in their care (including their backgrounds and personal medical goals), discussing plans to stop a medicine as early as when it is first prescribed, and a good relationship that involves mutual trust and regular discussions on progress. Furthermore, longer appointments for prescribing and deprescribing would allow time to explain the process, explore related concerns, and support making the right decisions.
A review analysed ways to improve deprescribing in primary care. It concluded that clearly defined roles and responsibilities, good communication between multidisciplinary team members, and pharmacists integrated within teams could aid deprescribing. Routine discussions about deprescribing at the time of prescribing, with medication reviews tailored to patients' needs and preferences, could also help. Patients and informal carers should be involved in decisions, and trusted relationships should be built with professionals, allowing continuity of care. Clinicians would also benefit from training and education on deprescribing.
Risks
It is possible for the patient to develop adverse drug withdrawal events (ADWE). These symptoms may be related to the original reason why the medication was prescribed, to withdrawal symptoms or to underlying diseases that medications have masked. For some medications, ADWEs can generally be minimized or avoided by tapering the dose slowly and carefully monitoring for symptoms. Prescribers should be aware of which medications usually require tapering (such as corticosteroids and benzodiazepines) and which can be safely stopped suddenly (such as antibiotics and nonsteroidal anti-inflammatory drugs).
Monitoring
Deprescribing requires detailed follow-up and monitoring, not unlike the attention required when starting a new medication. It is recommended that prescribers frequently monitor "relevant signs, symptoms, and laboratory or diagnostic tests that were the original indications for starting the medication," as well as watch for potential withdrawal effects. The recommended schedule for monitoring during deprescribing is at two-week intervals.
Resources to support deprescribing
Implicit tools
Several tools have been published to inform prescribers of inappropriate medications for various patient groups. The most common deprescribing algorithm has been validated and tested in two RCTs, and is available for clinicians to identify medications that can be deprescribed. It prompts clinicians to consider whether (1) the prescription is inappropriate, (2) adverse effects or interactions outweigh the symptomatic effects or potential future benefits, (3) the drug is taken for symptom relief but the symptoms are stable, or (4) the drug is intended to prevent future severe events but the potential benefit is unlikely to be realized because of limited life expectancy. If the answer to any of the four prompts is yes, then the medication should be considered for deprescribing.
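A minimal sketch of this four-prompt decision logic is given below. The function and field names are illustrative assumptions, not part of any published tool, and the output is only a prompt for discussion; clinical judgement and patient preferences always take precedence.

```python
from dataclasses import dataclass

@dataclass
class MedicationReview:
    """Answers to the four deprescribing prompts for one medication."""
    inappropriate_prescription: bool       # (1) the prescription is inappropriate
    harms_outweigh_benefits: bool          # (2) adverse effects/interactions outweigh benefits
    symptoms_stable_on_symptom_drug: bool  # (3) taken for symptom relief, symptoms stable
    preventive_benefit_unrealizable: bool  # (4) preventive benefit unlikely within life expectancy

def consider_deprescribing(review: MedicationReview) -> bool:
    """Return True if any of the four prompts is answered 'yes'."""
    return any([
        review.inappropriate_prescription,
        review.harms_outweigh_benefits,
        review.symptoms_stable_on_symptom_drug,
        review.preventive_benefit_unrealizable,
    ])

# Example: a preventive drug whose benefit is unlikely to be realized.
example = MedicationReview(False, False, False, True)
print(consider_deprescribing(example))  # True -> consider deprescribing, then monitor
```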
The CEASE algorithm prompts clinicians to consider if the treated condition remains a current concern for their patient.
The ERASE algorithm prompts clinicians to consider whether the treated condition still requires treatment. The ERASE mnemonic stands for "evaluate diagnostic parameters," "resolved conditions," "ageing normally," "select targets," and "eliminate."
Explicit tools
The Beers Criteria and the STOPP/START criteria present medications that may be inappropriate for use in older adults, including drugs associated with a high risk of adverse reactions in this population or lacking evidence of benefit when safer and more effective alternatives exist. Some countries, such as Australia, have their own lists of Potentially Inappropriate Medicines. For people with dementia, the Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) can help clinicians identify when and what to consider deprescribing.
Resources
RxFiles, an academic detailing group based in Saskatchewan, Canada, has developed a tool to help long-term care providers identify potentially inappropriate medications in their residents. Tasmanian Medicare Local has created resources to help clinicians deprescribe. Theoretical Underpinnings of a Model to Reduce Polypharmacy and Its Negative Health Effects: Introducing the Team Approach to Polypharmacy Evaluation and Reduction (TAPER) is a framework to support practitioners in deprescribing.
Practice changes to encourage deprescribing
An expert working group concluded that integrated healthcare provided by multidisciplinary patient-centred teams was the most appropriate approach to promote deprescribing and improve appropriate medication use. Deprescribing rounds in tertiary care hospitals have also been evaluated and shown to improve health-related outcomes.
Barriers and enablers to deprescribing
Barriers
Although many trials have successfully resulted in a reduction in medication use, there are some barriers to deprescribing:
the prescriber's beliefs, attitudes, knowledge, skills, and behaviour
the prescriber's work environment, including work setting, health system and cultural factors
patients' fears about cessation or dislike of medications.
Enablers
the prescriber's beliefs, attitudes, knowledge, skills, and behaviour
the prescriber's work environment, including work setting, health system and cultural factors
the patient's agreement that deprescribing was appropriate,
a structured process for cessation,
the patients' need for influences or reasons to cease medication.
The prescriber and patients were shown to have the most significant influence on each other, rather than external influences. Nine out of 10 older people said they would be willing to stop one or more medications if their doctor said it was okay.
See also
Medication Appropriateness Tool for Comorbid Health Conditions During Dementia (MATCH-D)
Beers Criteria
Medication discontinuation
Overmedication
Drug interaction
References
Further reading
A special issue on deprescribing
Pharmaceuticals policy
Geriatrics
Drugs
Prescription of drugs | Deprescribing | [
"Chemistry"
] | 2,330 | [
"Pharmacology",
"Products of chemical industry",
"Drug safety",
"Chemicals in medicine",
"Drugs"
] |
39,197,134 | https://en.wikipedia.org/wiki/Sawyer%20motor | A Sawyer motor or planar motor (also called area drive) is a multi-coordinate drive that can perform several independent movements in one plane. Goods can be transported along any path to any location. In the industrial environment, the planar motor replaces cross tables in machine tools, for example. This class of motors is named for Bruce Sawyer, who invented it in 1968.
Operating principles
The planar motor is a drive system that requires no mechanical guides. It consists of a flat base element ("stator") made of tiles and carriages ("movers") arranged on it. The movers are equipped with mostly cuboid magnets whose magnetization is perpendicular to the plane and which are arranged in the X and Y directions with alternating polarity. The movement of the carriages themselves is achieved by further magnets arranged parallel to the plane, allowing the carriages to move in the X and Z directions. The number and arrangement of magnets perpendicular and parallel to the base determine the degrees of freedom and the positioning accuracy.
The operating principle of the planar motor can be traced back to Bruce Sawyer, which is why it is also known as the Sawyer motor. The U.S. engineer applied for a patent for a "Magnetic Positioning Device" in 1966, which was granted in 1968.
Application area
Planar motors are mainly used for handling products in individual machines or in machine lines. They combine the dynamics of conventional linear transport systems with magnetic levitation technology, which enables individual and decoupled product transport. In addition, individual products can be traced.
Since there is no mechanical connection between the base surface and the carriage, planar motors are characterized by minimal maintenance and cleaning requirements. The cover of the base surface can also be made of stainless steel, glass, or plastic, for example, to protect it from leakage of liquids or cleaning processes.
See also
Linear motor
Tubular linear motor
References
Electric motors
Linear motion | Sawyer motor | [
"Physics",
"Technology",
"Engineering"
] | 387 | [
"Physical phenomena",
"Engines",
"Electric motors",
"Motion (physics)",
"Electrical engineering",
"Linear motion"
] |
39,197,820 | https://en.wikipedia.org/wiki/Saxion | The saxion is the scalar superpartner of the axion, and part of a chiral superfield. The axion is a hypothetical particle proposed to resolve the strong CP problem of the Standard Model. The axion and saxion are both examples of spin-0 bosons with very small mass and zero electric charge.
Hypothetical elementary particles | Saxion | [
"Physics"
] | 71 | [
"Unsolved problems in physics",
"Particle physics",
"Particle physics stubs",
"Hypothetical elementary particles",
"Physics beyond the Standard Model"
] |
39,198,217 | https://en.wikipedia.org/wiki/Timir%20Datta | Timir Datta is an Indian-American physicist specializing in high transition temperature superconductors and a professor of physics in the department of Physics and Astronomy at the University of South Carolina, in Columbia, South Carolina.
Early life and education
Datta grew up in India along with his elder brother, Jyotirmoy Datta, a noted journalist. His father, B.N. Dutt, a scion of two land-owning families from Khulna and Jessore in south-central Bengal (British India), was an eminent sugar-refining engineer; on his mother's side, Datta is a relative of Michael Madhusudan Dutt, the famed poet. He received a master's degree in theoretical plasma physics from Boston College in 1974 under the direction of Gabor Kalman. Datta also worked at the Jet Propulsion Laboratory (JPL) in Pasadena, California, as a pre-doctoral NASA research associate of Robert Somoano. He also collaborated with Carl H. Brans at Loyola University New Orleans on a gravitational problem of frame dragging and worked with John Perdew on the behavior of charge density waves in jellium. Datta is of Bengali origin.
Work and research history
Datta was an NSF post-doctoral fellow with Marvin Silver and studied charge propagation in non-crystalline systems at the University of North Carolina at Chapel Hill. At UNC-CH he continued his theoretical interests and worked on the retarded van der Waals potential with L. H. Ford. Since 1982, he has been on the faculty of the University of South Carolina in Columbia.
He collaborated with several laboratories involved with the early discoveries of high temperature superconductivity, especially the team at NRL led by Donald U. Gubser and Stuart Wolf. This research group at USC was also the first to observe (i) the bulk Meissner effect in Tl-copper oxides and thus confirm the discovery by Allen Herman's team at the University of Arkansas of high temperature superconductivity in these compounds. He coined the term "triple digit superconductivity", and his group was the first to observe (ii) the fractional quantum Hall effect in 3-dimensional carbon.
In a paper with Raphael Tsu he derived the first quantum mechanical wave impedance formula for Schrödinger wave functions. He was also the first to show that Bragg's law of X-ray scattering from crystals is a direct consequence of Euclidean length invariance of the incident wave vector; in fact Max von Laue's three diffraction equations are not independent but related by length conservation.
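The essence of the Bragg-law observation can be sketched with a standard argument (a generic derivation consistent with the claim above, not a reproduction of Datta's paper). For elastic scattering from a crystal the scattered wave vector differs from the incident one by a reciprocal lattice vector, \(\mathbf{k}' = \mathbf{k} + \mathbf{G}\), and length conservation requires \(|\mathbf{k}'| = |\mathbf{k}|\), so

\[
|\mathbf{k} + \mathbf{G}|^{2} = |\mathbf{k}|^{2}
\quad\Longrightarrow\quad
2\,\mathbf{k}\cdot\mathbf{G} + |\mathbf{G}|^{2} = 0 .
\]

With \(|\mathbf{k}| = 2\pi/\lambda\), \(|\mathbf{G}| = 2\pi n/d\) for lattice planes of spacing \(d\), and \(\mathbf{k}\cdot\mathbf{G} = -|\mathbf{k}|\,|\mathbf{G}|\sin\theta\) for glancing angle \(\theta\), this single condition reduces to Bragg's law \(2d\sin\theta = n\lambda\).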
Datta is an active researcher, with over 100 papers listed in the SAO/NASA Astrophysics Data System (ADS) as of 2014.
Patents
Datta was issued one US patent in 1995: "Flux-trapped superconducting magnets and method of manufacture", with two co-inventors.
Anti-gravity work
Datta was involved in the university-funded development of a "Gravity Generator" in 1996 and 1997, with then-fellow university researcher Douglas Torr. According to a leaked document from the Office of Technology Transfer at the University of South Carolina and confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction and the university planned to patent and license this device. Neither information about this university research project nor any "Gravity Generator" device was ever made public.
Despite the apparently less-than-successful outcome of the "Gravity Generator" development effort with Torr, Datta became interested in the effects of electric fields on gravitation, expanding on Torr's theoretical work on the subject.
Selected publications
See also
Eugene Podkletnov
Ning Li (physicist)
References
External links
Department of Physics and Astronomy at the University of South Carolina
Timir Datta's page at the University of South Carolina
University of South Carolina faculty
Morrissey College of Arts & Sciences alumni
American people of Indian descent
20th-century American physicists
Superconductivity
Anti-gravity
Year of birth missing (living people)
Living people | Timir Datta | [
"Physics",
"Materials_science",
"Astronomy",
"Engineering"
] | 822 | [
"Astronomical hypotheses",
"Physical quantities",
"Superconductivity",
"Materials science",
"Anti-gravity",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
39,198,919 | https://en.wikipedia.org/wiki/Cancer%20systems%20biology | Cancer systems biology encompasses the application of systems biology approaches to cancer research, in order to study the disease as a complex adaptive system with emerging properties at multiple biological scales. Cancer systems biology represents the application of systems biology approaches to the analysis of how the intracellular networks of normal cells are perturbed during carcinogenesis to develop effective predictive models that can assist scientists and clinicians in the validations of new therapies and drugs. Tumours are characterized by genomic and epigenetic instability that alters the functions of many different molecules and networks in a single cell as well as altering the interactions with the local environment. Cancer systems biology approaches, therefore, are based on the use of computational and mathematical methods to decipher the complexity in tumorigenesis as well as cancer heterogeneity.
Cancer systems biology encompasses concrete applications of systems biology approaches to cancer research, notably (a) the need for better methods to distill insights from large-scale networks, (b) the importance of integrating multiple data types in constructing more realistic models, (c) challenges in translating insights about tumorigenic mechanisms into therapeutic interventions, and (d) the role of the tumor microenvironment, at the physical, cellular, and molecular levels. Cancer systems biology therefore adopts a holistic view of cancer aimed at integrating its many biological scales, including genetics, signaling networks, epigenetics, cellular behavior, mechanical properties, histology, clinical manifestations and epidemiology. Ultimately, cancer properties at one scale, e.g., histology, are explained by properties at a scale below, e.g., cell behavior.
Cancer systems biology merges traditional basic and clinical cancer research with "exact" sciences, such as applied mathematics, engineering, and physics. It incorporates a spectrum of "omics" technologies (genomics, proteomics, epigenomics, etc.) and molecular imaging to generate computational algorithms and quantitative models that shed light on mechanisms underlying the cancer process and predict response to intervention. Applications of cancer systems biology include, but are not limited to, elucidating the critical cellular and molecular networks underlying cancer risk, initiation, and progression, thereby promoting an alternative viewpoint to the traditional reductionist approach, which has typically focused on characterizing single molecular aberrations.
History
Cancer systems biology finds its roots in a number of events and realizations in biomedical research, as well as in technological advances. Historically cancer was identified, understood, and treated as a monolithic disease. It was seen as a “foreign” component that grew as a homogenous mass, and was to be best treated by excision. Besides the continued impact of surgical intervention, this simplistic view of cancer has drastically evolved. In parallel with the exploits of molecular biology, cancer research focused on the identification of critical oncogenes or tumor suppressor genes in the etiology of cancer. These breakthroughs revolutionized our understanding of molecular events driving cancer progression. Targeted therapy may be considered the current pinnacle of advances spawned by such insights.
Despite these advances, many unresolved challenges remain, including the dearth of new treatment avenues for many cancer types, or the unexplained treatment failures and inevitable relapse in cancer types where targeted treatment exists. Such mismatch between clinical results and the massive amounts of data acquired by omics technology highlights the existence of basic gaps in our knowledge of cancer fundamentals. Cancer Systems Biology is steadily improving our ability to organize information on cancer, in order to fill these gaps. Key developments include:
The generation of comprehensive molecular datasets (genome, transcriptome, epigenomics, proteome, metabolome, etc.)
The Cancer Genome Atlas data collection
Computational algorithms to extract drivers of cancer progression from existing datasets
Statistical and mechanistic modeling of signaling networks
Quantitative modeling of cancer evolutionary processes
Mathematical modeling of cancer cell population growth
Mathematical modeling of cellular responses to therapeutic intervention
Mathematical modeling of cancer metabolism
The practice of Cancer Systems Biology requires close physical integration between scientists with diverse backgrounds. Critical large-scale efforts are also underway to train a new workforce fluent in both the languages of biology and applied mathematics. At the translational level, Cancer Systems Biology should enable precision-medicine approaches to cancer treatment.
Resources
High-throughput technologies enable comprehensive genomic analyses of mutations, rearrangements, copy number variations, and methylation at the cellular and tissue levels, as well as robust analysis of RNA and microRNA expression data, protein levels and metabolite levels.
List of High-Throughput Technologies and the Data they generated, with representative databases and publications
Approaches
The computational approaches used in cancer systems biology include new mathematical and computational algorithms that reflect the dynamic interplay between experimental biology and the quantitative sciences. A cancer systems biology approach can be applied at different levels, from an individual cell to a tissue, a patient with a primary tumour and possible metastases, or to any combination of these situations. This approach can integrate the molecular characteristics of tumours at different levels (DNA, RNA, protein, epigenetic, imaging) and different intervals (seconds versus days) with multidisciplinary analysis. One of the major challenges to its success, besides the challenge posed by the heterogeneity of cancer per se, resides in acquiring high-quality data that describe clinical characteristics, pathology, treatment, and outcomes and integrating the data into robust predictive models
Applications
Modelling Cancer Growth and Development
Mathematical modeling can provide useful context for the rational design, validation and prioritization of novel cancer drug targets and their combinations. Network-based modeling and multi-scale modeling have begun to show promise in facilitating the process of effective cancer drug discovery. Using a systems network modeling approach, Schoerberl et al. identified a previously unknown, complementary and potentially superior mechanism of inhibiting the ErbB receptor signaling network. ErbB3 was found to be the most sensitive node, leading to Akt activation; Akt regulates many biological processes, such as proliferation, apoptosis and growth, which are all relevant to tumor progression. This target-driven modelling has paved the way for first-of-its-kind clinical trials. Bekkal et al. presented a nonlinear model of the dynamics of a cell population divided into proliferative and quiescent compartments. The proliferative phase represents the complete cell cycle (G1–S–G2–M) of a population committed to divide at its end. The asymptotic behavior of solutions of the nonlinear model is analysed in two cases, exhibiting tissue homeostasis or exponential tumor growth. The model is simulated and its analytic predictions are confirmed numerically.
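A minimal numerical sketch of a two-compartment (proliferative/quiescent) population model of this general kind is shown below. The specific rate functions and parameter values are illustrative assumptions chosen to reproduce the qualitative behaviours described (bounded, homeostasis-like growth versus exponential growth), not the exact equations of Bekkal et al.:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_compartment(t, y, b, mu_p, mu_q, K):
    """Proliferating (P) and quiescent (Q) cell populations.

    P cells divide at rate b, leave the cycle at a rate that grows with the
    total population N (crowding), and quiescent cells re-enter the cycle at
    a rate that falls with N. Both feedback terms are illustrative choices.
    """
    P, Q = y
    N = P + Q
    to_quiescence = N / K            # assumed crowding-dependent exit from the cycle
    to_cycle = 1.0 / (1.0 + N / K)   # assumed crowding-dependent recruitment back
    dP = (b - mu_p - to_quiescence) * P + to_cycle * Q
    dQ = to_quiescence * P - (to_cycle + mu_q) * Q
    return [dP, dQ]

# With these parameters the population settles towards a bounded steady state
# ("tissue homeostasis"); replacing the two feedback terms with constants makes
# the system linear, and it then grows exponentially (the "tumor-like" regime).
sol = solve_ivp(two_compartment, (0, 200), [1.0, 0.0], args=(1.0, 0.1, 0.05, 50.0))
print("P, Q at t = 200:", sol.y[0, -1], sol.y[1, -1])
```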
Furthermore, advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools.
Cancer Genomics
Statistical and mechanistic modelling of cancer progression and development
Clinical response models / Modelling cellular response to therapeutic interventions
Sub-typing in Cancer.
Systems Oncology - Clinical application of Cancer Systems Biology
National funding efforts
In 2004, the US National Cancer Institute launched a program effort on Integrative Cancer Systems Biology to establish Centers for Cancer Systems Biology that focus on the analysis of cancer as a complex biological system. The integration of experimental biology with mathematical modeling will result in new insights in the biology and new approaches to the management of cancer. The program brings clinical and basic cancer researchers together with researchers from mathematics, physics, engineering, information technology, imaging sciences, and computer science to work on unraveling fundamental questions in the biology of cancer.
See also
Systems biology
Bioconductor
References
Cancer
Systems biology | Cancer systems biology | [
"Biology"
] | 1,704 | [
"Systems biology"
] |
39,199,253 | https://en.wikipedia.org/wiki/Percolation%20%28cognitive%20psychology%29 | Percolation (from the Latin word percolatio, meaning filtration) is a theoretical model used to understand the way activation and diffusion of neural activity occurs within neural networks. Percolation is a model used to explain how neural activity is transmitted across the various connections within the brain. Percolation theory can be easily understood by explaining its use in epidemiology. Individuals who are infected with a disease can spread the disease through contact with others in their social network. Those who are more social and come into contact with more people will help to propagate the disease quicker than those who are less social. Factors such as occupation and sociability influence the rate of infection. Now, if one were to think of neurons as individuals and synaptic connections as the social bonds between people, then one can determine how easily messages between neurons will spread. When a neuron fires, the message is transmitted along all synaptic connections to other neurons until it can no longer continue. Synaptic connections are considered either open or closed (like a social or unsocial person) and messages will flow along any and all open connections until they can go no further. Just like occupation and sociability play a key role in the spread of disease, so too do the number of neurons, synaptic plasticity and long-term potentiation when talking about neural percolation.
Percolating cluster
A key aspect of percolation is the concept of percolating clusters, which are single large groups of neurons that are all connected by open bonds and take up the majority of the network. Any signals that originate at any point within the percolating cluster will have a greater impact and diffusion across the network than signals that originate outside of the cluster. This is similar to a teacher spreading an infection to a whole community through contact with the students and subsequently with the families than an isolated businessman that works from home.
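A minimal simulation sketch of this idea is shown below: neurons are the nodes of a square grid, each bond ("synaptic connection") is independently open with probability p, and the largest connected cluster of open bonds plays the role of the percolating cluster. The grid topology, the value of p, and the other details are illustrative assumptions, not a model taken from the studies cited here.

```python
import random
from collections import deque

def largest_open_cluster(n=50, p=0.55, seed=0):
    """Bond percolation on an n x n grid: each bond between neighbouring
    nodes is open with probability p. Returns the largest connected
    cluster of nodes joined by open bonds."""
    rng = random.Random(seed)
    open_edge = {}
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                open_edge[((x, y), (x + 1, y))] = rng.random() < p
            if y + 1 < n:
                open_edge[((x, y), (x, y + 1))] = rng.random() < p

    def open_neighbours(node):
        x, y = node
        for other in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= other[0] < n and 0 <= other[1] < n:
                edge = (node, other) if (node, other) in open_edge else (other, node)
                if open_edge[edge]:
                    yield other

    seen, best = set(), set()
    for start in ((x, y) for x in range(n) for y in range(n)):
        if start in seen:
            continue
        seen.add(start)
        cluster, queue = {start}, deque([start])
        while queue:
            for nb in open_neighbours(queue.popleft()):
                if nb not in seen:
                    seen.add(nb)
                    cluster.add(nb)
                    queue.append(nb)
        if len(cluster) > len(best):
            best = cluster
    return best

cluster = largest_open_cluster()
print(f"Largest cluster contains {len(cluster)} of {50 * 50} nodes")
# A signal started at any node inside this cluster can reach every other node
# of the cluster along open bonds, mirroring the percolating cluster described above.
```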
History and background
Percolation theory was originally proposed by Broadbent and Hammersley as a mathematical theory for determining the flow of fluids through porous material. An example of this is the question originally posed by Broadbent and Hammersley: "suppose a large porous rock is submerged under water for a long time, will the water reach the center of the stone?". Since its founding, percolation theory has been used in both applied fields and mathematical modeling, in areas such as engineering, physics, chemistry, communications, economics, mathematics, medicine and geography. From a mathematical perspective, percolation is uniquely able to exhibit both algebraic and probabilistic relationships graphically. In network and cognitive sciences, percolation theory is often used as a computational model that has the benefit of testing theories on neural activity before any physical testing is necessary. It can also be used as a model to explain experimental observations of neural activity to a certain extent.
Current research
Percolation has been developed outside of the cognitive sciences; however, its application in the field has proven it to be a useful tool for understanding neural processes. Researchers have focused their attention not only on studying how neural activity diffuses across networks, but also on how percolation and its aspect of phase transition can affect decision making and thought processes. Percolation theory has enabled researchers to better understand many psychological conditions, such as epilepsy, disorganized schizophrenia and divergent thinking. These conditions are often indicative of percolating clusters and their involvement in propagating the excess firing of neurons. Seizures occur when neurons in the brain fire simultaneously, and often these seizures can occur in one part of the brain which may then transfer to other parts. Researchers are able to facilitate a better understanding of these conditions because "the neurons involved in a seizure are analogous to the sites in a percolating cluster". Disorganized schizophrenia is more complex, as the activity is indicative of activity in a percolating cluster; however, some researchers have suggested that the percolation of information does not occur in a small cluster but on a global functional scale. Both attention and percolation play a key role in disorganized and divergent thinking; however, it is more likely that directed percolation – that is, percolation with a preferred direction – is more useful for studying divergent thinking and creativity.
Table of recent research
Below is a table of some of the studies and experiments that have involved percolation. The majority of these studies focus on the application of percolation theory to neural network processing from a computational approach.
Other applications
Percolation theory has been applied to a wide variety of fields of study, including medicine, economics, physics, as well as other areas of psychology, such as social sciences and industrial and organizational psychology. Below is a table of other areas of study that apply percolation theory as well as recent research information.
Future research
Percolation theory is widely used and impacts many different fields; however, the research in network science can still be developed further. As a computational model, percolation has its limitations in that it cannot always account for the variability of real-life neural networks. These limitations do not undermine its usefulness overall, only in some cases. To understand small-world networks better, a closer, objective look at percolation in neural networks is needed. The best way forward would be to combine percolation modelling with experimental stimulation of artificial neural networks.
References
Cognitive psychology | Percolation (cognitive psychology) | [
"Biology"
] | 1,112 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
31,096,145 | https://en.wikipedia.org/wiki/NPSOL | NPSOL is a software package that performs numerical optimization. It solves nonlinear constrained problems using the sequential quadratic programming algorithm. It was written in Fortran by Philip Gill of UCSD and Walter Murray, Michael Saunders and Margaret Wright of Stanford University. The name derives from a combination of NP for nonlinear programming and SOL, the Systems Optimization Laboratory at Stanford.
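NPSOL itself is a Fortran library, and its calling interface is not reproduced here. As a rough illustration of the class of problems it addresses, the sketch below solves a small nonlinearly constrained problem with SciPy's SLSQP routine, which is also a sequential quadratic programming method; the objective, constraint, bounds, and starting point are arbitrary examples.

```python
import numpy as np
from scipy.optimize import minimize

# A small instance of the problem class NPSOL targets: minimize a smooth
# nonlinear objective subject to a nonlinear inequality constraint and bounds.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2 - x[1] ** 2},  # x0^2 + x1^2 <= 2
]
bounds = [(0.0, None), (0.0, None)]

result = minimize(objective, x0=np.array([0.5, 0.5]), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("solution:", result.x, "objective:", result.fun)
```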
References
External links
Description of NPSOL on the Stanford Business Software, Inc. website.
Mathematical software | NPSOL | [
"Mathematics"
] | 92 | [
"Mathematical software"
] |
31,100,446 | https://en.wikipedia.org/wiki/Immortalised%20cell%20line | An immortalised cell line is a population of cells from a multicellular organism that would normally not proliferate indefinitely but, due to mutation, have evaded normal cellular senescence and instead can keep undergoing division. The cells can therefore be grown for prolonged periods in vitro. The mutations required for immortality can occur naturally or be intentionally induced for experimental purposes. Immortal cell lines are a very important tool for research into the biochemistry and cell biology of multicellular organisms. Immortalised cell lines have also found uses in biotechnology.
An immortalised cell line should not be confused with stem cells, which can also divide indefinitely, but form a normal part of the development of a multicellular organism.
Relation to natural biology and pathology
There are various immortal cell lines. Some of them are normal cell lines (e.g. derived from stem cells). Other immortalised cell lines are the in vitro equivalent of cancerous cells. Cancer occurs when a somatic cell that normally cannot divide undergoes mutations that cause deregulation of the normal cell cycle controls, leading to uncontrolled proliferation. Immortalised cell lines have undergone similar mutations, allowing a cell type that would normally not be able to divide to be proliferated in vitro. The origins of some immortal cell lines – for example, HeLa human cells – are from naturally occurring cancers. HeLa, the first immortal human cell line on record to be successfully isolated and proliferated by a laboratory, was taken from Henrietta Lacks in 1951 at Johns Hopkins Hospital in Baltimore, Maryland.
Role and uses
Immortalised cell lines are widely used as a simple model for more complex biological systems – for example, for the analysis of the biochemistry and cell biology of mammalian (including human) cells. The main advantage of using an immortal cell line for research is its immortality; the cells can be grown indefinitely in culture. This simplifies analysis of the biology of cells that may otherwise have a limited lifetime.
Immortalised cell lines can also be cloned, giving rise to a clonal population that can, in turn, be propagated indefinitely. This allows an analysis to be repeated many times on genetically identical cells, which is desirable for repeatable scientific experiments. The alternative, performing an analysis on primary cells from multiple tissue donors, does not have this advantage.
Immortalised cell lines find use in biotechnology, where they are a cost-effective way of growing cells similar to those found in a multicellular organism in vitro. The cells are used for a wide variety of purposes, from testing toxicity of compounds or drugs to production of eukaryotic proteins.
Limitations
Changes from nonimmortal origins
While immortalised cell lines often originate from a well-known tissue type, they have undergone significant mutations to become immortal. This can alter the biology of the cell and must be taken into consideration in any analysis. Further, cell lines can change genetically over multiple passages, leading to phenotypic differences among isolates and potentially different experimental results depending on when and with what strain isolate an experiment is conducted.
Contamination with other cells
Many cell lines that are widely used for biomedical research have been contaminated and overgrown by other, more aggressive cells. For example, supposed thyroid lines were actually melanoma cells, supposed prostate tissue was actually bladder cancer, and supposed normal uterine cultures were actually breast cancer.
Methods of generation
There are several methods for generating immortalised cell lines:
Isolation from a naturally occurring cancer. This is the original method for generating an immortalised cell line. A major example is human HeLa, a line derived from cervical cancer cells taken on February 8, 1951 from Henrietta Lacks, a 31-year-old African-American mother of five, who died of cancer on October 4, 1951.
Introduction of a viral gene that partially deregulates the cell cycle (e.g., the adenovirus type 5 E1 gene was used to immortalise the HEK 293 cell line; the Epstein–Barr virus can immortalise B lymphocytes by infection).
Artificial expression of key proteins required for immortality, for example telomerase which prevents degradation of chromosome ends during DNA replication in eukaryotes.
Hybridoma technology, specifically used for generating immortalised antibody-producing B cell lines, where an antibody-producing B cell is fused with a myeloma (B cell cancer) cell.
Examples
There are several examples of immortalised cell lines, each with different properties. Most immortalised cell lines are classified by the cell type they originated from or are most similar to biologically.
3T3 cells – a mouse fibroblast cell line derived from a spontaneous mutation in cultured mouse embryo tissue.
A549 cells – derived from a cancer patient lung tumor
HeLa cells – a widely used human cell line isolated from cervical cancer patient Henrietta Lacks
HEK 293 cells – derived from human fetal cells
Huh7 cells – hepatocyte-derived carcinoma cell line
Jurkat cells – a human T lymphocyte cell line isolated from a case of leukemia
OK cells – derived from female North American opossum kidney cells
Ptk2 cells – derived from male long-nosed potoroo epithelial kidney cells
Vero cells – a monkey kidney cell line that arose by spontaneous immortalisation
See also
Cellosaurus, knowledge base of cell lines
IGRhCellID, database of cell lines
List of breast cancer cell lines
References
External links
ATCC – American Type Culture Collection
Cellosaurus – a knowledge resource on cell lines
CellBank Australia – Australia's national not-for-profit cell line repository
Cell culture techniques
Cellular senescence
Cell line
Senescence | Immortalised cell line | [
"Chemistry",
"Biology"
] | 1,129 | [
"Biochemistry methods",
"Cell culture techniques",
"Senescence",
"Cellular senescence",
"Cellular processes",
"Metabolism"
] |
31,102,294 | https://en.wikipedia.org/wiki/Geometric%20Algebra%20%28book%29 | Geometric Algebra is a book written by Emil Artin and published by Interscience Publishers, New York, in 1957. It was republished in 1988 in the Wiley Classics series ().
In 1962 Algèbre Géométrique, a translation into French by Michel Lazard, was published by Gauthier-Villars, and reprinted in 1996. In 1968 a translation into Italian was published in Milan by Feltrinelli. In 1969 a translation into Russian was published in Moscow by Nauka.
Long anticipated as the sequel to Moderne Algebra (1930), which Bartel van der Waerden published as his version of notes taken in a course with Artin, Geometric Algebra is a research monograph suitable for graduate students studying mathematics. From the Preface:
Linear algebra, topology, differential and algebraic geometry are the indispensable tools of the mathematician of our time. It is frequently desirable to devise a course of geometric nature which is distinct from these great lines of thought and which can be presented to beginning graduate students or even to advanced undergraduates. The present book has grown out of lecture notes for a course of this nature given at New York University in 1955. This course centered around the foundations of affine geometry, the geometry of quadratic forms and the structure of the general linear group. I felt it necessary to enlarge the content of these notes by including projective and symplectic geometry and also the structure of the symplectic and orthogonal groups.
The book is illustrated with six geometric configurations in chapter 2, which retraces the path from geometric to field axioms previously explored by Karl von Staudt and David Hilbert.
Contents
Chapter one is titled "Preliminary Notions". The ten sections explicate notions of set theory, vector spaces, homomorphisms, duality, linear equations, group theory, field theory, ordered fields and valuations. On page vii Artin says "Chapter I should be used mainly as a reference chapter for the proofs of certain isolated theorems."
Chapter two is titled "Affine and Projective Geometry". Artin posits this challenge to generate algebra (a field k) from geometric axioms:
Given a plane geometry whose objects are the elements of two sets, the set of points and the set of lines; assume that certain axioms of a geometric nature are true. Is it possible to find a field k such that the points of our geometry can be described by coordinates from k and the lines by linear equations?
The reflexive variant of parallelism is invoked: parallel lines have either all or none of their points in common. Thus a line is parallel to itself.
Axiom 1 requires a unique line for each pair of distinct points, and a unique point of intersection of non-parallel lines. Axiom 2 depends on a line and a point; it requires a unique parallel to the line and through the point. Axiom 3 requires three non-collinear points. Axiom 4a requires a translation to move any point to any other. Axiom 4b requires a dilation at P to move Q to R when the three points are collinear.
Artin writes the line through P and Q as P + Q. To define a dilation he writes, "Let two distinct points P and Q and their images P′ and Q′ be given." To suggest the role of incidence in geometry, a dilation is specified by this property: "If l′ is the line parallel to P + Q which passes through P′, then Q′ lies on l′." Of course, if P′ ≠ Q′, then this condition implies P + Q is parallel to P′ + Q′, so that the dilation is an affine transformation.
The dilations with no fixed points are translations, and the group of translations T is shown to be an invariant subgroup of the group of dilations.
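Once coordinates over k are available, the normality of the translation group can be verified by a short computation (a coordinate-level sketch offered for illustration, not Artin's own synthetic argument). Writing a translation as \(T_b(x) = x + b\) and a dilation as \(\sigma(x) = \alpha x + c\) with \(\alpha \neq 0\),

\[
(\sigma\, T_b\, \sigma^{-1})(x) \;=\; \sigma\!\left(\sigma^{-1}(x) + b\right) \;=\; x + \alpha b \;=\; T_{\alpha b}(x),
\]

so conjugating a translation by any dilation yields another translation, exhibiting T as an invariant subgroup of the group of dilations.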
For a dilation σ and a point P, the trace is P + σP. The mappings T → T that are trace-preserving homomorphisms are the elements of k. First k is shown to be an associative ring with 1, then a skew field.
Conversely, there is an affine geometry based on any given skew field k. Axioms 4a and 4b are equivalent to Desargues' theorem. When Pappus's hexagon theorem holds in the affine geometry, k is commutative and hence a field.
Chapter three is titled "Symplectic and Orthogonal Geometry". It begins with metric structures on vector spaces before defining symplectic and orthogonal geometry and describing their common and special features. There are sections on geometry over finite fields and over ordered fields.
Chapter four is on general linear groups. First there is Jean Dieudonné's theory of determinants over "non-commutative fields" (division rings). Artin then describes the group structure of GL(n, k). More details are given about vector spaces over finite fields.
Chapter five is "The Structure of Symplectic and Orthogonal Groups". It includes sections on elliptic spaces, Clifford algebra, and spinorial norm.
Reviews
Alice T. Schafer wrote "Mathematicians will find on many pages ample evidence of the author’s ability to penetrate a subject and to present material in a particularly elegant manner." She notes the overlap between Artin's text and Baer's Linear Algebra and Projective Geometry or Dieudonné's La Géometrie des Groupes Classique.
Jean Dieudonné reviewed the book for Mathematical Reviews and placed it on a level with Hilbert's Grundlagen der Geometrie.
References
Geometric Algebra at Internet Archive
1957 non-fiction books
Mathematics textbooks
Foundations of geometry | Geometric Algebra (book) | [
"Mathematics"
] | 1,165 | [
"Foundations of geometry",
"Mathematical axioms"
] |