| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
30,839,171 | https://en.wikipedia.org/wiki/Dynamical%20mean-field%20theory | Dynamical mean-field theory (DMFT) is a method to determine the electronic structure of strongly correlated materials. In such materials, the approximation of independent electrons, which is used in density functional theory and usual band structure calculations, breaks down. Dynamical mean-field theory, a non-perturbative treatment of local interactions between electrons, bridges the gap between the nearly free electron gas limit and the atomic limit of condensed-matter physics.
DMFT consists in mapping a many-body lattice problem onto a local many-body problem, called an impurity model. While the lattice problem is in general intractable, the impurity model is usually solvable through various schemes. The mapping itself does not constitute an approximation. The only approximation made in ordinary DMFT schemes is to assume the lattice self-energy to be a momentum-independent (local) quantity. This approximation becomes exact in the limit of lattices with infinite coordination.
One of DMFT's main successes is to describe the phase transition between a metal and a Mott insulator when the strength of electronic correlations is increased. It has been successfully applied to real materials, in combination with the local density approximation of density functional theory.
Relation to mean-field theory
The DMFT treatment of lattice quantum models is similar to the mean-field theory (MFT) treatment of classical models such as the Ising model. In the Ising model, the lattice problem is mapped onto an effective single-site problem, whose magnetization must reproduce the lattice magnetization through an effective "mean field". This condition is called the self-consistency condition. It stipulates that the single-site observables should reproduce the lattice "local" observables by means of an effective field. While the N-site Ising Hamiltonian is hard to solve analytically (to date, analytical solutions exist only in the 1D and 2D cases), the single-site problem is easily solved.
Likewise, DMFT maps a lattice problem (e.g. the Hubbard model) onto a single-site problem. In DMFT, the local observable is the local Green's function. Thus, the self-consistency condition for DMFT is for the impurity Green's function to reproduce the lattice local Green's function through an effective mean field which, in DMFT, is the hybridization function of the impurity model. DMFT owes its name to the fact that the mean field is time-dependent, or dynamical. This also points to the major difference between the Ising MFT and DMFT: Ising MFT maps the N-spin problem onto a single-site, single-spin problem. DMFT maps the lattice problem onto a single-site problem, but the latter fundamentally remains an N-body problem that captures the temporal fluctuations due to electron-electron correlations.
Description of DMFT for the Hubbard model
The DMFT mapping
Single-orbital Hubbard model
The Hubbard model describes the onsite interaction between electrons of opposite spin by a single parameter, $U$. The Hubbard Hamiltonian may take the following form:

$$H_{\text{Hubbard}} = t \sum_{\langle i,j\rangle,\sigma} \left(c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma}\right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}$$

where, on suppressing the spin-1/2 indices $\sigma$, $c^{\dagger}_{i}$ and $c_{i}$ denote the creation and annihilation operators of an electron on a localized orbital on site $i$, and $n_{i} = c^{\dagger}_{i} c_{i}$.
The following assumptions have been made:
only one orbital contributes to the electronic properties (as might be the case of copper atoms in superconducting cuprates, whose $d$-bands are non-degenerate),
the orbitals are so localized that only nearest-neighbor hopping $t$ is taken into account.
The auxiliary problem: the Anderson impurity model
The Hubbard model is in general intractable under usual perturbation expansion techniques. DMFT maps this lattice model onto the so-called Anderson impurity model (AIM). This model describes the interaction of one site (the impurity) with a "bath" of electronic levels (described by the annihilation and creation operators $a_{p}$ and $a^{\dagger}_{p}$) through a hybridization function. The Anderson model corresponding to our single-site model is a single-orbital Anderson impurity model, whose Hamiltonian formulation, on suppressing some spin-1/2 indices $\sigma$, is:

$$H_{\text{AIM}} = H_{\text{bath}} + H_{\text{imp}} + H_{\text{hyb}}$$

where
$H_{\text{bath}} = \sum_{p,\sigma} \epsilon_{p}\, a^{\dagger}_{p\sigma} a_{p\sigma}$ describes the non-correlated electronic levels $\epsilon_{p}$ of the bath
$H_{\text{imp}} = -\mu\,(n_{\uparrow} + n_{\downarrow}) + U\, n_{\uparrow} n_{\downarrow}$ describes the impurity, where two electrons interact with the energetic cost $U$
$H_{\text{hyb}} = \sum_{p,\sigma} \left(V_{p}\, c^{\dagger}_{\sigma} a_{p\sigma} + V^{*}_{p}\, a^{\dagger}_{p\sigma} c_{\sigma}\right)$ describes the hybridization (or coupling) between the impurity and the bath through hybridization terms $V_{p}$
The Matsubara Green's function of this model, defined by $G_{\text{imp}}(\tau) = -\langle T\, c(\tau)\, c^{\dagger}(0)\rangle$, is entirely determined by the parameters $U$, $\mu$ and the so-called hybridization function $\Delta(i\omega_{n}) = \sum_{p} |V_{p}|^{2}/(i\omega_{n} - \epsilon_{p})$, which is the imaginary-time Fourier transform of $\Delta(\tau)$.
This hybridization function describes the dynamics of electrons hopping in and out of the bath. It should reproduce the lattice dynamics such that the impurity Green's function is the same as the local lattice Green's function. It is related to the non-interacting Green's function $\mathcal{G}_{0}(i\omega_{n})$ by the relation:

$$\mathcal{G}_{0}(i\omega_{n})^{-1} = i\omega_{n} + \mu - \Delta(i\omega_{n}) \qquad (1)$$
Solving the Anderson impurity model consists in computing observables such as the interacting Green's function $G_{\text{imp}}(i\omega_{n})$ for a given hybridization function $\Delta(i\omega_{n})$ and non-interacting Green's function $\mathcal{G}_{0}(i\omega_{n})$. It is a difficult but not intractable problem. There are a number of ways to solve the AIM, such as
Numerical renormalization group
Exact diagonalization
Iterative perturbation theory
Non-crossing approximation
Continuous-time quantum Monte Carlo algorithms
Self-consistency equations
The self-consistency condition requires the impurity Green's function $G_{\text{imp}}(i\omega_{n})$ to coincide with the local lattice Green's function $G_{ii}(i\omega_{n})$:

$$G_{\text{imp}}(i\omega_{n}) = G_{ii}(i\omega_{n}) = \sum_{\mathbf{k}} \frac{1}{i\omega_{n} + \mu - \epsilon(\mathbf{k}) - \Sigma(\mathbf{k}, i\omega_{n})}$$

where $\Sigma(\mathbf{k}, i\omega_{n})$ denotes the lattice self-energy.
DMFT approximation: locality of the lattice self-energy
The only DMFT approximation (apart from the approximation that can be made in order to solve the Anderson model) consists in neglecting the spatial fluctuations of the lattice self-energy, by equating it to the impurity self-energy:

$$\Sigma(\mathbf{k}, i\omega_{n}) \approx \Sigma_{\text{imp}}(i\omega_{n})$$
This approximation becomes exact in the limit of lattices with infinite coordination, that is when the number of neighbors of each site is infinite. Indeed, one can show that in the diagrammatic expansion of the lattice self-energy, only local diagrams survive when one goes into the infinite coordination limit.
Thus, as in classical mean-field theories, DMFT is supposed to get more accurate as the dimensionality (and thus the number of neighbors) increases. Put differently, for low dimensions, spatial fluctuations will render the DMFT approximation less reliable.
Spatial fluctuations also become relevant in the vicinity of phase transitions. Here, DMFT and classical mean-field theories result in mean-field critical exponents; the pronounced changes before the phase transition are not reflected in the DMFT self-energy.
The DMFT loop
In order to find the local lattice Green's function, one has to determine the hybridization function such that the corresponding impurity Green's function will coincide with the sought-after local lattice Green's function.
The most widespread way of solving this problem is by using a forward recursion method, namely, for a given interaction $U$, chemical potential $\mu$ and temperature $T$:
1. Start with a guess for $\Sigma_{\text{imp}}(i\omega_{n})$ (typically, $\Sigma_{\text{imp}}(i\omega_{n}) = 0$)
2. Make the DMFT approximation: $\Sigma(\mathbf{k}, i\omega_{n}) \approx \Sigma_{\text{imp}}(i\omega_{n})$
3. Compute the local Green's function $G_{ii}(i\omega_{n}) = \sum_{\mathbf{k}} \left[i\omega_{n} + \mu - \epsilon(\mathbf{k}) - \Sigma_{\text{imp}}(i\omega_{n})\right]^{-1}$
4. Compute the dynamical mean field $\Delta(i\omega_{n}) = i\omega_{n} + \mu - \Sigma_{\text{imp}}(i\omega_{n}) - G_{ii}(i\omega_{n})^{-1}$
5. Solve the AIM for a new impurity Green's function $G_{\text{imp}}(i\omega_{n})$, and extract its self-energy $\Sigma_{\text{imp}}(i\omega_{n}) = \mathcal{G}_{0}(i\omega_{n})^{-1} - G_{\text{imp}}(i\omega_{n})^{-1}$
6. Go back to step 2 until convergence, namely when $G_{\text{imp}}(i\omega_{n}) = G_{ii}(i\omega_{n})$ to within the desired accuracy.
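As an illustration, the loop fits in a few lines of code. The following Python fragment is a minimal, schematic sketch of the recursion above, not a production DMFT code: the square-lattice dispersion, grid sizes, parameter values and, in particular, the trivial stand-in impurity solver (`solve_impurity`) are all assumptions made for illustration. A real calculation would replace the stub with one of the solvers listed earlier (e.g. continuous-time quantum Monte Carlo or exact diagonalization).

```python
import numpy as np

# Schematic DMFT forward recursion (illustrative assumptions throughout).
beta, U, mu, t = 10.0, 4.0, 2.0, 1.0          # inverse temperature, interaction, ...
n_w, n_k = 256, 32                            # Matsubara frequencies, k-points per axis
iw = 1j * np.pi / beta * (2 * np.arange(n_w) + 1)   # fermionic Matsubara frequencies

k = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
kx, ky = np.meshgrid(k, k)
eps_k = -2.0 * t * (np.cos(kx) + np.cos(ky))  # square-lattice dispersion

def solve_impurity(delta):
    """Placeholder impurity solver returning (G_imp, Sigma_imp).

    The non-interacting (U = 0) solution is used purely as a stand-in, so the
    loop converges trivially; the structure of the recursion is the point."""
    g0 = 1.0 / (iw + mu - delta)              # Eq. (1): G0^{-1} = iw + mu - Delta
    return g0, np.zeros_like(g0)

sigma = np.zeros(n_w, dtype=complex)          # step 1: initial guess Sigma = 0
for iteration in range(100):
    # steps 2-3: local lattice Green's function with a local (k-independent) Sigma
    g_loc = np.mean(
        1.0 / (iw[:, None, None] + mu - eps_k[None, :, :] - sigma[:, None, None]),
        axis=(1, 2),
    )
    # step 4: dynamical mean field (hybridization function)
    delta = iw + mu - sigma - 1.0 / g_loc
    # step 5: solve the impurity model and extract a new self-energy
    g_imp, sigma_new = solve_impurity(delta)
    # step 6: check convergence, otherwise iterate
    if np.max(np.abs(sigma_new - sigma)) < 1e-8:
        break
    sigma = sigma_new
```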
Applications
The local lattice Green's function and other impurity observables can be used to calculate a number of physical quantities as a function of the correlation strength $U$, bandwidth, filling (chemical potential $\mu$), and temperature $T$:
the spectral function $A(\mathbf{k}, \omega)$ (which gives the band structure)
the kinetic energy
the double occupancy of a site
response functions (compressibility, optical conductivity, specific heat)
In particular, the drop of the double-occupancy as $U$ increases is a signature of the Mott transition.
Extensions of DMFT
DMFT has several extensions, extending the above formalism to multi-orbital, multi-site problems, long-range correlations and non-equilibrium.
Multi-orbital extension
DMFT can be extended to Hubbard models with multiple orbitals, namely with electron-electron interactions of the form $\sum_{m m'} U_{m m'}\, n_{m} n_{m'}$, where $m$ and $m'$ denote different orbitals. The combination with density functional theory (DFT+DMFT) then allows for a realistic calculation of correlated materials.
Extended DMFT
Extended DMFT yields a local impurity self-energy for non-local interactions and hence makes it possible to apply DMFT to more general models such as the t-J model.
Cluster DMFT
In order to improve on the DMFT approximation, the Hubbard model can be mapped onto a multi-site impurity (cluster) problem, which allows one to add some spatial dependence to the impurity self-energy. Clusters typically contain 4 to 8 sites at low temperature and up to 100 sites at high temperature.
The Typical Medium Dynamical Cluster Approximation (TMDCA) is a non-perturbative approach for obtaining the electronic ground state of strongly correlated many-body systems, built on the dynamical cluster approximation (DCA).
Diagrammatic extensions
Spatial dependencies of the self-energy beyond DMFT, including long-range correlations in the vicinity of a phase transition, can also be obtained through diagrammatic extensions of DMFT using a combination of analytical and numerical techniques. The starting point of the dynamical vertex approximation and of the dual fermion approach is the local two-particle vertex.
Non-equilibrium
DMFT has been employed to study non-equilibrium transport and optical excitations. Here, the reliable calculation of the AIM's Green's function out of equilibrium remains a major challenge. DMFT has also been applied to ecological models in order to describe the mean-field dynamics of a community with a thermodynamic number of species.
References and notes
See also
Strongly correlated material
External links
Strongly Correlated Materials: Insights From Dynamical Mean-Field Theory G. Kotliar and D. Vollhardt
Lecture notes on the LDA+DMFT approach to strongly correlated materials Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes DMFT at 25: Infinite Dimensions Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes DMFT – From Infinite Dimensions to Real Materials Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
Lecture notes Dynamical Mean-Field Theory of Correlated Electrons Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.)
DMFT for two-site Hubbard dimer: in Dynamical Mean-Field Theory for Materials, Eva Pavarini
https://www.cond-mat.de/events/correl21/manuscripts/pavarini.pdf
DMFT for two-site Hubbard dimer: in Solving the strong-correlation problem in materials, Eva Pavarini
https://doi.org/10.1007/s40766-021-00025-8
Correlated electrons
Materials science
Quantum mechanics
Electronic structure methods | Dynamical mean-field theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,268 | [
"Applied and interdisciplinary physics",
"Quantum chemistry",
"Theoretical physics",
"Quantum mechanics",
"Materials science",
"Computational physics",
"Electronic structure methods",
"Computational chemistry",
"Condensed matter physics",
"nan",
"Correlated electrons"
] |
21,734,002 | https://en.wikipedia.org/wiki/Eurybacteria | Eurybacteria is a taxon created by Cavalier-Smith, which includes several groups of Gram-negative bacteria. In this model, it is the ancestor of gram positive bacteria. Their endospores are characterized by producing and presenting external flagella or mobility by bacterial displacement.
Members
Specifically, it includes:
Fusobacteria. For example, Leptotrichia and Fusobacterium
Togobacteria. For example, Thermotoga.
Selenobacteria. For example, Selenomonas.
In the standard classification, Selenobacteria are usually included in the phylum Bacillota, whereas fusobacteria and togobacteria are classified as their own groups.
Relationships
The following graph shows Cavalier-Smith's version of the tree of life, indicating the status of eurybacteria.
References
Bacteria by classification
Bacteriology
Obsolete bacteria taxa | Eurybacteria | [
"Biology"
] | 176 | [
"Bacteria by classification",
"Bacteria"
] |
21,736,877 | https://en.wikipedia.org/wiki/Journal%20of%20Chemical%20Theory%20and%20Computation | Journal of Chemical Theory and Computation is a peer-reviewed scientific journal, established in 2005 by the American Chemical Society. It is indexed in Chemical Abstracts Service (CAS), Scopus, British Library, and Web of Science. The current editor-in-chief is Laura Gagliardi. Currently as of the year 2022, JCTC has 18 volumes.
Scope
Much of JCTC reports on new theories, methods, and applications of quantum chemistry, such as electronic structure, molecular mechanics, and statistical mechanics. Research on computational applications such as ab initio quantum mechanics, Monte Carlo simulations, and solvation models is discussed, among others. It is stated that "the Journal favors submissions that include advances in theory or methodology with applications to compelling problems".
History
The first issue of JCTC was published in 2005 as a bimonthly journal, with 133 articles published in the first volume. In 2008, JCTC increased its output to become a monthly publication, producing 12 issues per year. The journal came about when Jorgensen noticed that although theory and computation could be found in many journals, the field did not have a dedicated journal. In 2008, JCTC appointed Professor Ursula Rothlisberger of the Swiss Federal Institute of Technology as an associate editor. In 2009, the editorial team was further expanded with the addition of Professor Gustavo Scuseria of Rice University.
Statistics
JCTC is ranked fourth among "Physical and Theoretical Chemistry" journals by SCImago Journal Rank, with an SJR indicator of 2.481 as of 2014. For perspective, the popular multidisciplinary journal Nature has an SJR of 17.313. The journal had an average of 65.73 references per document in 2014. In 2020, the journal had a total of 41,591 citations and an impact factor of 6.006. According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.5.
See also
Physical Chemistry Chemical Physics
Journal of Chemical Physics
Computational and Theoretical Chemistry (formerly known as THEOCHEM)
Journal of Computational Chemistry
Annual Review of Physical Chemistry
International Journal of Quantum Chemistry
References
American Chemical Society academic journals
Monthly journals
English-language journals
Academic journals established in 2005
Computational chemistry
Quantum chemistry
Theoretical chemistry | Journal of Chemical Theory and Computation | [
"Physics",
"Chemistry"
] | 468 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
21,738,717 | https://en.wikipedia.org/wiki/Remote%20plasma | A remote plasma (also downstream plasma or afterglow plasma) is a plasma processing method in which the plasma and material interaction occurs at a location remote from the plasma in the plasma afterglow.
See also
Chemical vapor deposition
Corona treatment
Physical vapor deposition
Plasma activation
Plasma chemistry
Plasma cleaning
Plasma-activated bonding
Reactive ion etching
References
Plasma processing
Semiconductor device fabrication | Remote plasma | [
"Materials_science",
"Engineering"
] | 72 | [
"Semiconductor device fabrication",
"Materials science stubs",
"Materials science",
"Microtechnology"
] |
21,740,132 | https://en.wikipedia.org/wiki/Cosmic-ray%20observatory | A cosmic-ray observatory is a scientific installation built to detect high-energy-particles coming from space called cosmic rays. This typically includes photons (high-energy light), electrons, protons, and some heavier nuclei, as well as antimatter particles. About 90% of cosmic rays are protons, 9% are alpha particles, and the remaining ~1% are other particles.
It is not yet possible to build image-forming optics for cosmic rays, like a Wolter telescope for lower-energy X-rays, although some cosmic-ray observatories also look for high-energy gamma rays and X-rays. Ultra-high-energy cosmic rays (UHECR) pose further detection problems. One way of learning about cosmic rays is to use different detectors to observe aspects of a cosmic-ray air shower.
Methods of detection for gamma-rays:
Scintillation detectors
Solid state detectors
Compton scattering
Pair telescopes
Air Cherenkov detectors
For example, while a visible light photon may have an energy of a few eV, a cosmic gamma ray may exceed a TeV (1,000,000,000,000 eV). Sometimes cosmic gamma rays (photons) are not grouped with nuclei cosmic rays.
History
"In 1952, a simple and audacious experiment allowed the first observation of Cherenkov light produced by cosmic rays passing through the atmosphere, giving birth to a new field of astronomy". This work, involving minimal instrument expense (a dustbin, a war-surplus parabolic mirror, and a 5 cm diameter photomultiplier tube), and based on a suggestion by Patrick Blackett, led ultimately to the current international multibillion-dollar investment in gamma ray astronomy.
The Explorer 1 satellite, launched in 1958, subsequently measured cosmic rays. Its instrument, an Anton 314 omnidirectional Geiger–Müller tube designed by George H. Ludwig of the State University of Iowa Cosmic Ray Laboratory, could detect protons with energy over 30 MeV and electrons with energy over 3 MeV. Most of the time the instrument was saturated.

Sometimes the instrumentation would report the expected cosmic-ray count (approximately thirty counts per second), but sometimes it would show a peculiar zero counts per second. The University of Iowa (under Van Allen) noted that all of the zero-count reports came from altitudes of 2,000+ km (1,250+ miles) over South America, while passes at lower altitudes would show the expected level of cosmic rays. This is called the South Atlantic Anomaly. Later, after Explorer 3, it was concluded that the original Geiger counter had been overwhelmed ("saturated") by strong radiation coming from a belt of charged particles trapped in space by the Earth's magnetic field. This belt of charged particles is now known as the Van Allen radiation belt.
Cosmic rays were studied aboard the space station Mir in the late 20th century, such as with the SilEye experiment. This studied the relationship between flashes seen by astronauts in space and cosmic rays, the cosmic ray visual phenomena.
In December 1993, the Akeno Giant Air Shower Array in Japan (abbreviated AGASA) recorded one of the highest energy cosmic ray events ever observed.
In October 2003, the Pierre Auger Observatory in Argentina completed construction on its 100th surface detector and became the largest cosmic-ray array in the world. It detects cosmic rays through the use of two different methods: observing the Cherenkov radiation produced when particles interact with water, and observing ultraviolet light emitted in the Earth's atmosphere. In 2018, installation began of an upgrade called AugerPrime, which adds scintillation and radio detectors to the Observatory.
In 2010, an expanded version of AMANDA named IceCube was completed. IceCube measures Cherenkov light in a cubic kilometer of transparent ice. It is estimated to detect 275 million cosmic rays every day.
The Space Shuttle Endeavour transported the Alpha Magnetic Spectrometer (AMS) to the International Space Station on May 16, 2011. In just over one year of operation, the AMS collected data on 17 billion cosmic-ray events.
Observatories and experiments
There are a number of cosmic ray research initiatives. These include, but are not limited to:
Ground based
ALBORZ Observatory
ERGO
CHICOS
GAMMA
KASCADE-(Grande) – KArlsruhe Shower Core and Array DEtector (and its extension called 'Grande')
Large High Altitude Air Shower Observatory
LOPES – the LOFAR PrototypE Station is the radio extension of KASCADE.
TAIGA – Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy
HAWC High Altitude Water Cherenkov
High Energy Stereoscopic System
High Resolution Fly's Eye Cosmic Ray Detector
MAGIC (telescope)
MARIACHI
Pierre Auger Observatory
Project GRAND
Southern Wide-field Gamma-ray Observatory
Telescope Array Project
WALTA (Washington Large Area Time Coincidence Array)
IceTop
TACTIC
VERITAS
Major Atmospheric Cerenkov Experiment Telescope (MACE)
Satellite based
PAMELA
Alpha Magnetic Spectrometer
Spaceship Earth
ACE (Advanced Composition Explorer)
Voyager 1 and Voyager 2
Cassini-Huygens
HEAO 1, Einstein Observatory (HEAO2), HEAO 3
ISS-CREAM
Balloon-borne
BESS (Balloon-borne Experiment with Superconducting Spectrometer)
ATIC (Advanced Thin Ionization Calorimeter)
TRACER (cosmic ray detector)
BOOMERanG experiment
TIGER
Cosmic Ray Energetics and Mass (CREAM)
AESOP (Anti-Electron Sub-Orbital Payload)
General antiparticle spectrometer (GAPS)
Ultra high energy cosmic rays
Observatories for ultra-high-energy cosmic rays:
MARIACHI – Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization located on Long Island, USA.
GRAPES-3 (Gamma Ray Astronomy PeV EnergieS 3rd establishment) is a project for cosmic ray study with air shower detector array and large area muon detectors at Ooty in southern India.
AGASA – Akeno Giant Air Shower Array in Japan
High Resolution Fly's Eye Cosmic Ray Detector (HiRes)
Yakutsk Extensive Air Shower Array
Pierre Auger Observatory
Extreme Universe Space Observatory
Telescope Array Project
Antarctic Impulse Transient Antenna (ANITA) detects ultra-high-energy cosmic neutrinos believed to be caused by ultra-high-energy cosmic rays
The COSMICi project at Florida A&M University is developing technology for a distributed network of low-cost detectors for UHECR showers in collaboration with MARIACHI.
See also
CREDO
Extragalactic cosmic ray
Gamma-ray telescopes (Alphabetic list)
Gamma-ray astronomy and X-ray astronomy
Cosmic Ray System (CR instrument on the Voyagers)
References
External links
"Strange Instrument Built To Solve Mystery Of Cosmic Rays", April 1932, Popular Science
The Highest Energy Particle Ever Recorded The details of the event from the official site of the Fly's Eye detector.
John Walker's lively analysis of the 1991 event, published in 1994
Origin of energetic space particles pinpointed, by Mark Peplow for news@nature.com, published January 13, 2005.
List of cosmic ray detectors (archived 30 December 2012)
Cosmic rays
Cosmic-ray experiments
Observatories | Cosmic-ray observatory | [
"Physics",
"Astronomy"
] | 1,454 | [
"Physical phenomena",
"Astronomical observatories",
"Astrophysics",
"Astronomy organizations",
"Radiation",
"Cosmic rays"
] |
21,742,365 | https://en.wikipedia.org/wiki/Clausius%E2%80%93Duhem%20inequality | The Clausius–Duhem inequality is a way of expressing the second law of thermodynamics that is used in continuum mechanics. This inequality is particularly useful in determining whether the constitutive relation of a material is thermodynamically allowable.
This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved. It was named after the German physicist Rudolf Clausius and French physicist Pierre Duhem.
Clausius–Duhem inequality in terms of the specific entropy
The Clausius–Duhem inequality can be expressed in integral form as

$$\frac{d}{dt}\left(\int_{\Omega} \rho\,\eta\,\text{dV}\right) \ge \int_{\partial\Omega} \rho\,\eta\,\left(u_{n} - \mathbf{v}\cdot\mathbf{n}\right)\,\text{dA} - \int_{\partial\Omega} \frac{\mathbf{q}\cdot\mathbf{n}}{T}\,\text{dA} + \int_{\Omega} \frac{\rho\,s}{T}\,\text{dV}.$$

In this equation $t$ is the time, $\Omega$ represents a body and the integration is over the volume of the body, $\partial\Omega$ represents the surface of the body, $\rho$ is the mass density of the body, $\eta$ is the specific entropy (entropy per unit mass), $u_{n}$ is the normal velocity of $\partial\Omega$, $\mathbf{v}$ is the velocity of particles inside $\Omega$, $\mathbf{n}$ is the unit normal to the surface, $\mathbf{q}$ is the heat flux vector, $s$ is an energy source per unit mass, and $T$ is the absolute temperature. All the variables are functions of a material point at $\mathbf{x}$ at time $t$.

In differential form the Clausius–Duhem inequality can be written as

$$\rho\,\dot{\eta} \ge -\boldsymbol{\nabla}\cdot\left(\frac{\mathbf{q}}{T}\right) + \frac{\rho\,s}{T}$$

where $\dot{\eta}$ is the time derivative of $\eta$ and $\boldsymbol{\nabla}\cdot(\mathbf{q}/T)$ is the divergence of the vector $\mathbf{q}/T$.
Clausius–Duhem inequality in terms of specific internal energy
The inequality can be expressed in terms of the internal energy as

$$\rho\,\left(\dot{e} - T\,\dot{\eta}\right) - \boldsymbol{\sigma}:\boldsymbol{\nabla}\mathbf{v} \le -\frac{\mathbf{q}\cdot\boldsymbol{\nabla}T}{T}$$

where $\dot{e}$ is the time derivative of the specific internal energy $e$ (the internal energy per unit mass), $\boldsymbol{\sigma}$ is the Cauchy stress, and $\boldsymbol{\nabla}\mathbf{v}$ is the gradient of the velocity. This inequality incorporates the balance of energy and the balance of linear and angular momentum into the expression for the Clausius–Duhem inequality.
Dissipation
The quantity

$$\mathcal{D} := \rho\,T\,\dot{\eta} - \rho\,\dot{e} + \boldsymbol{\sigma}:\boldsymbol{\nabla}\mathbf{v} - \frac{\mathbf{q}\cdot\boldsymbol{\nabla}T}{T} \ge 0$$

is called the dissipation, which is defined as the rate of internal entropy production per unit volume times the absolute temperature. Hence the Clausius–Duhem inequality is also called the dissipation inequality. In a real material, the dissipation is always greater than zero.
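As a simple consistency check (assuming Fourier's law of heat conduction, $\mathbf{q} = -\kappa\,\boldsymbol{\nabla}T$ with conductivity $\kappa \ge 0$, which is an added assumption and not part of the inequality itself), the thermal contribution to the dissipation is

$$-\frac{\mathbf{q}\cdot\boldsymbol{\nabla}T}{T} = \frac{\kappa\,|\boldsymbol{\nabla}T|^{2}}{T} \ge 0,$$

so heat flowing down a temperature gradient is automatically consistent with the inequality.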
See also
Entropy
Second law of thermodynamics
References
External links
Memories of Clifford Truesdell by Bernard D. Coleman, Journal of Elasticity, 2003.
Thoughts on Thermomechanics by Walter Noll, 2008.
Continuum mechanics | Clausius–Duhem inequality | [
"Physics"
] | 454 | [
"Classical mechanics",
"Continuum mechanics"
] |
41,596,497 | https://en.wikipedia.org/wiki/Magnetic%203D%20bioprinting | Magnetic 3D bioprinting is a process that utilizes biocompatible magnetic nanoparticles to print cells into 3D structures or 3D cell cultures. In this process, cells are tagged with magnetic nanoparticles, thus making them magnetic. Once magnetic, these cells can be rapidly printed into specific 3D patterns using external magnetic forces that mimic tissue structure and function.
General principle
Magnetic 3D bioprinting is an alternative to other 3D printing methods such as extrusion, photolithography, and stereolithography. Benefits of the technique include its rapid process (15 minutes to 1 hour, compared with the often days-long processes of other methods), the capacity for endogenous synthesis of extracellular matrix (ECM) without the need for an artificial protein substrate, fine spatial control, and the capacity for 3D cell culture models to be printed, from simple spheroids and rings to more complex organotypic models such as the lung, aortic valve, and white fat.
Process
Using magnetic nanoparticles
The cells first need to be incubated in the presence of magnetic nanoparticles to make them susceptible to manipulation through magnetic fields. The system is a nanoparticle assembly consisting of gold, magnetic iron oxide, and poly-L-lysine which assists in adhesion to the cell membrane via electrostatic interactions. In this system, cells are printed into 3D patterns (rings or dots) using fields generated by permanent magnets. The cells within the printed construct interact with surrounding cells and the ECM to migrate, proliferate, and ultimately shrink the structure, typically within 24 hours.
When used as a toxicity assay, this shrinkage varies with drug concentration and is a label-free metric of cell function that can be captured and measured with brightfield imaging. The size of the pattern can be captured using an iPod-based system, which is programmed using an app (Experimental Assistant) to image whole plates of up to 96 structures at intervals as short as one second.
Using diamagnetism
Cells can be assembled without using magnetic nanoparticles by employing diamagnetism. Some materials are more strongly attracted, or susceptible, to magnets than others. Materials with greater magnetic susceptibility will experience stronger attraction to a magnet and move towards it. The more weakly attracted material with lower susceptibility is displaced to lower magnetic field regions that lie away from the magnet. By designing magnetic fields through careful arrangement of magnets, it is possible to use the differences in the magnetic susceptibilities of two materials to concentrate only one within a volume.
An example of usage of this technique is when bio-ink was formulated by suspending human breast cancer cells in a cell culture medium that contained the paramagnetic salt, diethylenetriaminepentaacetic acid gadolinium (III) dihydrogen salt hydrate (Gd-DTPA). Like most cells, these breast cancer cells are much more weakly attracted by magnets than Gd-DTPA, which is an FDA-approved MRI contrast agent for use in humans. Therefore, when a magnetic field was applied, the salt hydrate moved towards the magnets, displacing the cells to a predetermined area of minimum magnetic field strength, which seeded the formation of a 3D cell cluster.
Applications
Magnetic 3D bioprinting can be used to screen for cardiovascular toxicity, which accounts for 30% of cardiac drug withdrawals. Vascular smooth muscle cells are magnetically printed into 3D rings to mimic blood vessels that can contract and dilate. This system could potentially replace experiments using ex vivo tissue, which are costly and yield little data per experiment. Furthermore, magnetic 3D bioprinting can use human cells to approximate a human in vivo response better than with an animal model. This has been demonstrated by the bioassay which combines the benefits of 3D bioprinting in building tissue-like structures for study with the speed of magnetic printing.
See also
Bio-printing
Organovo
References
Further reading
Nanotechnology | Magnetic 3D bioprinting | [
"Materials_science",
"Engineering"
] | 823 | [
"Nanotechnology",
"Materials science"
] |
41,597,450 | https://en.wikipedia.org/wiki/Poisson%20point%20process | In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science.
This point process is used as a mathematical model for seemingly random processes in numerous disciplines including astronomy, biology, ecology, geology, seismology, physics, economics, image processing, and telecommunications.
The Poisson point process is often defined on the real number line, where it can be considered a stochastic process. It is used, for example, in queueing theory to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or occurrence of earthquakes. In the plane, the point process, also known as a spatial Poisson process, can represent the locations of scattered objects such as transmitters in a wireless network, particles colliding into a detector or trees in a forest. The process is often used in mathematical models and in the related fields of spatial point processes, stochastic geometry, spatial statistics and continuum percolation theory.
The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space. The resulting point process is called a homogeneous or stationary Poisson point process. In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depends on the location of the underlying space of the Poisson point process. The word point is often omitted, but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process. Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of the generalized renewal process.
Overview of definitions
Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality owing to its many applications and characterizations. The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model; in higher dimensions such as the plane where it plays a role in stochastic geometry and spatial statistics; or on more general mathematical spaces. Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and points processes in general vary according to the context.
Despite all this, the Poisson point process has two key properties—the Poisson property and the independence property— that play an essential role in all settings where the Poisson point process is used. The two properties are not logically independent; indeed, the Poisson distribution of point counts implies the independence property, while in the converse direction the assumptions that: (i) the point process is simple, (ii) has no fixed atoms, and (iii) is a.s. boundedly finite are required.
Poisson distribution of point counts
A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable $N$ (called a Poisson random variable) such that the probability that $N$ equals $n$ is given by:

$$\Pr\{N = n\} = \frac{\Lambda^{n}}{n!}\, e^{-\Lambda}$$

where $n!$ denotes $n$ factorial and the parameter $\Lambda$ determines the shape of the distribution. (In fact, $\Lambda$ equals the expected value of $N$.)
By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable.
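For a quick numerical illustration (with an assumed mean of $\Lambda = 2$ points in the region), the probabilities of observing zero or one point are

$$\Pr\{N = 0\} = e^{-2} \approx 0.135, \qquad \Pr\{N = 1\} = 2\,e^{-2} \approx 0.271.$$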
Complete independence
Consider a collection of disjoint and bounded subregions of the underlying space. By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others.
This property is known under several names such as complete randomness, complete independence, or independent scattering and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general, which motivates the Poisson process being sometimes called a purely or completely random process.
Homogeneous Poisson point process
If a Poisson point process has a parameter of the form $\Lambda = \nu\lambda$, where $\nu$ is Lebesgue measure (that is, it assigns length, area, or volume to sets) and $\lambda$ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region, where rate is usually used when the underlying space has one dimension. The parameter $\lambda$ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, and it is also called the mean density or mean rate; see Terminology.
Interpreted as a counting process
The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process, which can be denoted as $\{N(t), t \ge 0\}$. A counting process represents the total number of occurrences or events that have happened up to and including time $t$. A counting process is a homogeneous Poisson counting process with rate $\lambda > 0$ if it has the following three properties:
1. $N(0) = 0$;
2. has independent increments; and
3. the number of events (or points) in any interval of length $t$ is a Poisson random variable with parameter (or mean) $\lambda t$.
The last property implies:

$$\operatorname{E}[N(t)] = \lambda t.$$

In other words, the probability of the random variable $N(t)$ being equal to $n$ is given by:

$$\Pr\{N(t) = n\} = \frac{(\lambda t)^{n}}{n!}\, e^{-\lambda t}.$$
The Poisson counting process can also be defined by stating that the time differences between events of the counting process are independent exponential random variables with mean $1/\lambda$. The time differences between the events or arrivals are known as interarrival or interoccurrence times.
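This interarrival characterization translates directly into a simulation recipe. The short Python sketch below (the function name, seed, and parameter values are illustrative choices, not part of any standard API) accumulates independent exponential interarrival times with mean $1/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def poisson_arrival_times(rate, horizon):
    """Arrival times of a homogeneous Poisson process on [0, horizon],
    built by summing i.i.d. exponential interarrival times with mean 1/rate."""
    times = []
    t = rng.exponential(1.0 / rate)
    while t <= horizon:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

arrivals = poisson_arrival_times(rate=2.0, horizon=10.0)
print(len(arrivals))   # a Poisson random variable with mean rate * horizon = 20
```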
Interpreted as a point process on the real line
Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval $(a, b]$. For the homogeneous Poisson point process on the real line with parameter $\lambda > 0$, the probability of this random number of points, written here as $N(a, b]$, being equal to some counting number $n$ is given by:

$$\Pr\{N(a, b] = n\} = \frac{[\lambda (b - a)]^{n}}{n!}\, e^{-\lambda (b - a)}.$$

For some positive integer $k$, the homogeneous Poisson point process has the finite-dimensional distribution given by:

$$\Pr\{N(a_{i}, b_{i}] = n_{i},\, i = 1, \dots, k\} = \prod_{i=1}^{k} \frac{[\lambda (b_{i} - a_{i})]^{n_{i}}}{n_{i}!}\, e^{-\lambda (b_{i} - a_{i})},$$

where the real numbers $a_{i} < b_{i} \le a_{i+1}$.

In other words, $N(a, b]$ is a Poisson random variable with mean $\lambda (b - a)$, where $a \le b$. Furthermore, the numbers of points in any two disjoint intervals, say, $(a_{1}, b_{1}]$ and $(a_{2}, b_{2}]$, are independent of each other, and this extends to any finite number of disjoint intervals. In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different to the word event in the probability theory sense. It follows that $\lambda$ is the expected number of arrivals that occur per unit of time.
Key properties
The previous definition has two important features shared by Poisson point processes in general:
the number of arrivals in each finite interval has a Poisson distribution;
the number of arrivals in disjoint intervals are independent random variables.
Furthermore, it has a third feature related to just the homogeneous Poisson point process:
the Poisson distribution of the number of arrivals in each interval $(a, a + t]$ only depends on the interval's length $t$.

In other words, for any finite $t > 0$, the random variable $N(a + t) - N(a)$ is independent of $a$, so it is also called a stationary Poisson process.
Law of large numbers
The quantity $\lambda (b - a)$ can be interpreted as the expected or average number of points occurring in the interval $(a, b]$, namely:

$$\operatorname{E}[N(a, b]] = \lambda (b - a),$$

where $\operatorname{E}$ denotes the expectation operator. In other words, the parameter $\lambda$ of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers. More specifically, with probability one:

$$\lim_{t \to \infty} \frac{N(t)}{t} = \lambda,$$

where $\lim$ denotes the limit of a function, and $\lambda$ is the expected number of arrivals that occur per unit of time.
Memoryless property
The distance between two consecutive points of a point process on the real line will be an exponential random variable with parameter $\lambda$ (or equivalently, mean $1/\lambda$). This implies that the points have the memoryless property: the existence of one point in a finite interval does not affect the probability (distribution) of other points existing, but this property has no natural equivalence when the Poisson process is defined on a space with higher dimensions.
Orderliness and simplicity
A point process with stationary increments is sometimes said to be orderly or regular if:

$$\Pr\{N(t, t + \delta] > 1\} = o(\delta),$$

where little-o notation is being used. A point process is called a simple point process when the probability of any two of its points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple, which is the case for the homogeneous Poisson point process.
Martingale characterization
On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process if and only if

$$N(t) - \lambda t$$

is a martingale.
Relationship to other processes
On the real line, the Poisson process is a type of continuous-time Markov process known as a birth process, a special case of the birth–death process (with just births and zero deaths). More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case.
Restricted to the half-line
If the homogeneous Poisson process is considered just on the half-line $[0, \infty)$, which can be the case when $t$ represents time, then the resulting process is not truly invariant under translation. In that case the Poisson process is no longer stationary, according to some definitions of stationarity.
Applications
There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring. It has a fundamental role in queueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena. For example, customers arriving and being served or phone calls arriving at a phone exchange can both be studied with techniques from queueing theory.
Generalizations
The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points. This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane.
Spatial Poisson point process
A spatial Poisson process is a Poisson point process defined in the plane $\mathbb{R}^{2}$. For its mathematical definition, one first considers a bounded, open or closed (or more precisely, Borel measurable) region $B$ of the plane. The number of points of a point process existing in this region is a random variable, denoted by $N(B)$. If the points belong to a homogeneous Poisson process with parameter $\lambda > 0$, then the probability of $n$ points existing in $B$ is given by:

$$\Pr\{N(B) = n\} = \frac{(\lambda |B|)^{n}}{n!}\, e^{-\lambda |B|}$$

where $|B|$ denotes the area of $B$.

For some finite integer $k \ge 1$, we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) sets $B_{1}, \dots, B_{k}$. The number of points of the point process existing in $B_{i}$ can be written as $N(B_{i})$. Then the homogeneous Poisson point process with parameter $\lambda > 0$ has the finite-dimensional distribution:

$$\Pr\{N(B_{i}) = n_{i},\, i = 1, \dots, k\} = \prod_{i=1}^{k} \frac{(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}\, e^{-\lambda |B_{i}|}.$$
Applications
The spatial Poisson point process features prominently in spatial statistics, stochastic geometry, and continuum percolation theory. This point process is applied in various physical sciences such as a model developed for alpha particles being detected. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks. For example, models for cellular or mobile phone networks have been developed where it is assumed the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process.
Defined in higher dimensions
The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high-dimensional) volume. For some bounded region $B$ of Euclidean space $\mathbb{R}^{d}$, if the points form a homogeneous Poisson process with parameter $\lambda > 0$, then the probability of $n$ points existing in $B$ is given by:

$$\Pr\{N(B) = n\} = \frac{(\lambda |B|)^{n}}{n!}\, e^{-\lambda |B|}$$

where $|B|$ now denotes the $d$-dimensional volume of $B$. Furthermore, for a collection of disjoint, bounded Borel sets $B_{1}, \dots, B_{k}$, let $N(B_{i})$ denote the number of points of the process existing in $B_{i}$. Then the corresponding homogeneous Poisson point process with parameter $\lambda > 0$ has the finite-dimensional distribution:

$$\Pr\{N(B_{i}) = n_{i},\, i = 1, \dots, k\} = \prod_{i=1}^{k} \frac{(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}\, e^{-\lambda |B_{i}|}.$$
Homogeneous Poisson point processes do not depend on the position of the underlying space through their parameter $\lambda$, which implies the process is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process. Similarly to the one-dimensional case, if the homogeneous point process is restricted to some bounded subset of $\mathbb{R}^{d}$, then, depending on some definitions of stationarity, the process is no longer stationary.
Points are uniformly distributed
If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval $(a, b]$ where $a \le b$, then its location will be a uniform random variable defined on that interval. Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in the Cartesian coordinate system, but not in, for example, polar coordinates.
Inhomogeneous Poisson point process
The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined. For Euclidean space $\mathbb{R}^{d}$, this is achieved by introducing a locally integrable positive function $\lambda(x)$, such that for every bounded region $B$ the ($d$-dimensional) volume integral of $\lambda(x)$ over the region $B$ is finite. In other words, if this integral, denoted by $\Lambda(B)$, is:

$$\Lambda(B) = \int_{B} \lambda(x)\, \mathrm{d}x < \infty,$$

where $\mathrm{d}x$ is a ($d$-dimensional) volume element, then for every collection of disjoint bounded Borel measurable sets $B_{1}, \dots, B_{k}$, an inhomogeneous Poisson process with (intensity) function $\lambda(x)$ has the finite-dimensional distribution:

$$\Pr\{N(B_{i}) = n_{i},\, i = 1, \dots, k\} = \prod_{i=1}^{k} \frac{\Lambda(B_{i})^{n_{i}}}{n_{i}!}\, e^{-\Lambda(B_{i})}.$$

Furthermore, $\Lambda(B)$ has the interpretation of being the expected number of points of the Poisson process located in the bounded region $B$, namely

$$\Lambda(B) = \operatorname{E}[N(B)].$$
Defined on the real line
On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbers $a$ and $b$, where $a \le b$, denote by $N(a, b]$ the number of points of an inhomogeneous Poisson process with intensity function $\lambda(t)$ occurring in the interval $(a, b]$. The probability of $n$ points existing in the above interval $(a, b]$ is given by:

$$\Pr\{N(a, b] = n\} = \frac{[\Lambda(a, b)]^{n}}{n!}\, e^{-\Lambda(a, b)},$$

where the mean or intensity measure is:

$$\Lambda(a, b) = \int_{a}^{b} \lambda(t)\, \mathrm{d}t,$$

which means that the random variable $N(a, b]$ is a Poisson random variable with mean $\operatorname{E}[N(a, b]] = \Lambda(a, b)$.
A feature of the one-dimensional setting is that an inhomogeneous Poisson process can be transformed into a homogeneous one by a monotone transformation or mapping, which is achieved with the inverse of $\Lambda$.
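A minimal Python sketch of this transformation-based simulation is shown below; the intensity $\lambda(t) = 1 + \sin^{2} t$ and the numerical inversion of $\Lambda$ by interpolation are illustrative assumptions, not a standard recipe:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative intensity lambda(t) = 1 + sin(t)^2 on [0, T]; its integral
# Lambda(t) = 1.5 t - 0.25 sin(2 t) happens to be known in closed form.
def big_lambda(t):
    return 1.5 * t - 0.25 * np.sin(2.0 * t)

T = 10.0
total = big_lambda(T)

# Simulate a unit-rate homogeneous Poisson process on [0, Lambda(T)] ...
n = rng.poisson(total)
s = np.sort(rng.uniform(0.0, total, size=n))

# ... then map its points back through the (numerically inverted) Lambda.
grid = np.linspace(0.0, T, 10_001)
times = np.interp(s, big_lambda(grid), grid)   # Lambda^{-1} via interpolation
```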
Counting process interpretation
The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as $\{N(t), t \ge 0\}$, represents the total number of occurrences or events that have happened up to and including time $t$. A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties:
1. $N(0) = 0$;
2. has independent increments;
3. $\Pr\{N(t + h) - N(t) = 1\} = \lambda(t)\, h + o(h)$; and
4. $\Pr\{N(t + h) - N(t) \ge 2\} = o(h)$,
where $o(h)$ is asymptotic or little-o notation for $o(h)/h \to 0$ as $h \to 0$.
In the case of point processes with refractoriness (e.g., neural spike trains), a stronger version of property 4 applies, requiring the probability of two or more points in a small interval to vanish even faster than $o(h)$.
The above properties imply that $N(t)$ is a Poisson random variable with the parameter (or mean)

$$\Lambda(t) = \int_{0}^{t} \lambda(s)\, \mathrm{d}s,$$

which implies

$$\Pr\{N(t) = n\} = \frac{\Lambda(t)^{n}}{n!}\, e^{-\Lambda(t)}.$$
Spatial Poisson process
An inhomogeneous Poisson process defined in the plane $\mathbb{R}^{2}$ is called a spatial Poisson process. It is defined with intensity function $\lambda(x, y)$ and its intensity measure is obtained by performing a surface integral of its intensity function over some region. For example, its intensity function (as a function of Cartesian coordinates $x$ and $y$) can be

$$\lambda(x, y) = e^{-(x^{2} + y^{2})},$$

so the corresponding intensity measure is given by the surface integral

$$\Lambda(B) = \int_{B} e^{-(x^{2} + y^{2})}\, \mathrm{d}x\, \mathrm{d}y,$$

where $B$ is some bounded region in the plane $\mathbb{R}^{2}$.
In higher dimensions
In the plane, $\Lambda(B)$ corresponds to a surface integral, while in $\mathbb{R}^{d}$ the integral becomes a ($d$-dimensional) volume integral.
Applications
When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and in queueing theory. Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include:
Goals being scored in a soccer game.
Defects in a circuit board.
In the plane, the Poisson point process is important in the related disciplines of stochastic geometry and spatial statistics. The intensity measure of this point process is dependent on the location of the underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density. This process has been used in various disciplines, and its uses include the study of salmon and sea lice in the oceans, forestry, and search problems.
Interpretation of the intensity function
The Poisson intensity function $\lambda(x)$ has an interpretation, considered intuitive, with the volume element $\mathrm{d}x$ in the infinitesimal sense: $\lambda(x)\, \mathrm{d}x$ is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume $\mathrm{d}x$ located at $x$.

For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width $\delta$ is approximately $\lambda \delta$. In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived.
Simple point process
If a Poisson point process has an intensity measure that is locally finite and diffuse (or non-atomic), then it is a simple point process. For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space.
Simulation
Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both these two steps depend on the specific Poisson point process that is being simulated.
Step 1: Number of points
The number of points in the window, denoted here by $N$, needs to be simulated, which is done by using a (pseudo-)random number generating function capable of simulating Poisson random variables.
Homogeneous case
For the homogeneous case with the constant $\lambda$, the mean of the Poisson random variable $N$ is set to $\lambda |W|$, where $|W|$ is the length, area or ($d$-dimensional) volume of the window $W$.
Inhomogeneous case
For the inhomogeneous case, $\lambda |W|$ is replaced with the ($d$-dimensional) volume integral

$$\Lambda(W) = \int_{W} \lambda(x)\, \mathrm{d}x.$$
Step 2: Positioning of points
The second stage requires randomly placing the $N$ points in the window $W$.
Homogeneous case
For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval $W$. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window $W$. If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in $W$, and a suitable change of coordinates (from Cartesian) is needed.
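Both steps for the homogeneous case fit in a few lines of Python; the rectangular window, rate, and seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def homogeneous_poisson(rate, width, height):
    """Homogeneous Poisson point process on the rectangle [0, width] x [0, height]:
    a Poisson number of points, each placed with independent uniform coordinates."""
    n = rng.poisson(rate * width * height)     # step 1: number of points
    x = rng.uniform(0.0, width, size=n)        # step 2: uniform, independent
    y = rng.uniform(0.0, height, size=n)       #         Cartesian coordinates
    return np.column_stack((x, y))

points = homogeneous_poisson(rate=5.0, width=2.0, height=1.0)   # ~10 points on average
```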
Inhomogeneous case
For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function $\lambda(x)$. If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates $r$ and $\theta$), implying it is rotationally invariant or independent of $\theta$ but dependent on $r$, by a change of variable in $r$ if the intensity function is sufficiently simple.
For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio:

$$\frac{\lambda(x)}{\lambda_{\max}},$$

where $x$ is the point under consideration for acceptance or rejection and $\lambda_{\max}$ is an upper bound on the intensity function over the window.

That is, a location is uniformly randomly selected for consideration, then a uniformly drawn random number in $(0, 1)$ is compared to this ratio, the location being accepted if the number is smaller than the ratio, and this is repeated until the previously chosen number of points has been placed.
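The following Python sketch implements a closely related and widely used variant of this idea, often called thinning: a homogeneous process is simulated at a bounding rate $\lambda_{\max}$ and each point is then kept with probability $\lambda(x)/\lambda_{\max}$. The intensity function, window, and bound below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def intensity(x, y):
    """Illustrative intensity function, bounded above by 50 on the window."""
    return 50.0 * np.exp(-(x**2 + y**2))

W, H = 2.0, 2.0          # rectangular window [0, W] x [0, H]
lam_max = 50.0           # upper bound on the intensity over the window

# Simulate a homogeneous process at the bounding rate ...
n = rng.poisson(lam_max * W * H)
x = rng.uniform(0.0, W, size=n)
y = rng.uniform(0.0, H, size=n)

# ... then accept each point with probability intensity / lam_max.
keep = rng.uniform(size=n) < intensity(x, y) / lam_max
points = np.column_stack((x[keep], y[keep]))
```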
General Poisson point process
In measure theory, the Poisson point process can be further generalized to what is sometimes known as the general Poisson point process or general Poisson process by using a Radon measure $\Lambda$, which is a locally finite measure. In general, this Radon measure can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space. In this situation, the number of points at a location $x$ is a Poisson random variable with mean $\Lambda(\{x\})$. But sometimes the converse is assumed, so the Radon measure $\Lambda$ is diffuse or non-atomic.
A point process $N$ is a general Poisson point process with intensity $\Lambda$ if it has the two following properties:

the number of points in a bounded Borel set $B$ is a Poisson random variable with mean $\Lambda(B)$. In other words, denote the total number of points located in $B$ by $N(B)$; then the probability of the random variable $N(B)$ being equal to $n$ is given by:

$$\Pr\{N(B) = n\} = \frac{\Lambda(B)^{n}}{n!}\, e^{-\Lambda(B)}$$

the number of points in $k$ disjoint Borel sets forms $k$ independent random variables.
The Radon measure $\Lambda$ maintains its previous interpretation of being the expected number of points of the process located in the bounded region $B$, namely

$$\Lambda(B) = \operatorname{E}[N(B)].$$
Furthermore, if $\Lambda$ is absolutely continuous such that it has a density (which is the Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets $B$ it can be written as:

$$\Lambda(B) = \int_{B} \lambda(x)\, \mathrm{d}x,$$

where the density $\lambda(x)$ is known, among other terms, as the intensity function.
History
Poisson distribution
Despite its name, the Poisson point process was neither discovered nor studied by its namesake. It is cited as an example of Stigler's law of eponymy. The name arises from the process's inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution. It describes the probability of the sum of $n$ Bernoulli trials with probability $p$, often likened to the number of heads (or tails) after $n$ biased coin flips, with the probability of a head (or tail) occurring being $p$. For some positive constant $\Lambda > 0$, as $n$ increases towards infinity and $p$ decreases towards zero such that the product $np = \Lambda$ is fixed, the Poisson distribution more closely approximates that of the binomial.
Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of $p$ (to zero) and $n$ (to infinity). It only appears once in all of Poisson's work, and the result was not well known during his time. Over the following years others used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe.
At the end of the 19th century, Ladislaus Bortkiewicz studied the distribution, citing Poisson, using real data on the number of deaths from horse kicks in the Prussian army.
Discovery
There are a number of claims for early uses or discoveries of the Poisson point process. For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability of a star being within a certain region of another star under the erroneous assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution. This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860.
At the beginning of the 20th century the Poisson process (in one dimension) would arise independently in different situations.
In Sweden 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process.
In Denmark A.K. Erlang derived the Poisson distribution in 1909 when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was unaware of Poisson's earlier work and assumed that the number of phone calls arriving in each interval of time was independent of the others. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.
In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, although the solution had been derived earlier, resulting in the independent discovery of the Poisson process.
Early applications
The years after 1909 led to a number of studies and applications of the Poisson point process; however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences. The early results were published in different languages and in different settings, with no standard terminology and notation used. For example, in 1922 Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process to study how plants are distributed in plant communities. A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin, among others. In the field of teletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes.
History of terms
The Swede Conny Palm in his 1943 dissertation studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time. His work contains the first known recorded use of the term point processes, as Punktprozesse in German.
It is believed that William Feller was the first in print to refer to it as the Poisson process, in a 1940 paper. Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation, in which Feller was acknowledged as an influence, it has been claimed that Feller coined the term before 1940. It has been remarked that both Feller and Lundberg used the term as though it were well known, implying it was already in spoken use by then. Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book he finished in 1936, but did in subsequent editions, which has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at Stockholm University.
Terminology
The terminology of point process theory in general has been criticized for being too varied. In addition to the word point often being omitted, the homogeneous Poisson (point) process is also called a stationary Poisson (point) process, as well as uniform Poisson (point) process. The inhomogeneous Poisson point process, as well as being called nonhomogeneous, is also referred to as the non-stationary Poisson process.
The term point process has been criticized, as the term process can suggest evolution over time or space, so random point field is also used, resulting in the terms Poisson random point field or Poisson point field. A point process is considered, and sometimes called, a random counting measure; hence the Poisson point process is also referred to as a Poisson random measure, a term used in the study of Lévy processes, but some choose to use the two terms for Poisson point processes defined on two different underlying spaces.
The underlying mathematical space of the Poisson point process is called a carrier space, or state space, though the latter term has a different meaning in the context of stochastic processes. In the context of point processes, the term "state space" can mean the space on which the point process is defined such as the real line, which corresponds to the index set or parameter set in stochastic process terminology.
The measure $\Lambda$ is called the intensity measure, mean measure, or parameter measure, as there are no standard terms. If $\Lambda$ has a derivative or density, denoted by $\lambda(x)$, it is called the intensity function of the Poisson point process. For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant $\lambda>0$, which can be referred to as the rate, usually when the underlying space is the real line, or the intensity. It is also called the mean rate or the mean density. For $\lambda=1$, the corresponding process is sometimes referred to as the standard Poisson (point) process.
The extent of the Poisson point process is sometimes called the exposure.
Notation
The notation of the Poisson point process depends on its setting and the field it is being applied in. For example, on the real line, the Poisson process, both homogeneous or inhomogeneous, is sometimes interpreted as a counting process, and the notation $\{N(t),\,t\geq 0\}$ is used to represent the Poisson process.
Another reason for varying notation is due to the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation $x\in N$, implying that $x$ is a random point belonging to or being an element of the Poisson point process $N$. Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process $N$ being found or located in some (Borel measurable) region $B$ as $N(B)$, which is a random variable. These different interpretations result in notation being used from mathematical fields such as measure theory and set theory.
For general point processes, sometimes a subscript on the point symbol, for example $x_i$, is included so one writes (with set notation) $x_i\in N$ instead of $x\in N$, and $x$ can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points. Sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process, so, for example, the point $x$ or $x_i$ belongs to or is a point of the point process $X$, and can be written with set notation as $x\in X$ or $x_i\in X$.
Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process $N$ defined on the Euclidean state space $\mathbf{R}^d$ and a (measurable) function $f$ on $\mathbf{R}^d$, the expression
$$\int_{\mathbf{R}^d} f(x)\,N(\mathrm{d}x) = \sum_{x\in N} f(x)$$
demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)). More specifically, the integral notation on the left-hand side interprets the point process as a random counting measure while the sum on the right-hand side suggests a random set interpretation.
Functionals and moment measures
In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable can be used to uniquely identify or characterize random variables and prove results like the central limit theorem. In the theory of point processes there exist analogous mathematical tools which usually exist in the forms of measures and functionals instead of moments and functions respectively.
Laplace functionals
For a Poisson point process $N$ with intensity measure $\Lambda$ on some space $X$, the Laplace functional is given by:
$$L_N(f) = \mathbb{E}\left[e^{-\sum_{x\in N} f(x)}\right] = \exp\left[-\int_X \big(1-e^{-f(x)}\big)\,\Lambda(\mathrm{d}x)\right].$$
One version of Campbell's theorem involves the Laplace functional of the Poisson point process.
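The closed form above is easy to check numerically. The following is a minimal Monte Carlo sketch in Python with NumPy (the language, the test function $f(x)=x$ on $[0,1]$, and all parameter values are illustrative assumptions rather than anything prescribed by the article):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_trials = 3.0, 200_000       # intensity on [0, 1]; number of Monte Carlo runs

# Monte Carlo estimate of the Laplace functional E[exp(-sum_{x in N} f(x))], f(x) = x
samples = []
for _ in range(n_trials):
    pts = rng.uniform(0.0, 1.0, rng.poisson(lam))  # one realization of N on [0, 1]
    samples.append(np.exp(-pts.sum()))             # exp(-sum of f over the points)
print(np.mean(samples))

# Closed form: exp(-lam * int_0^1 (1 - e^{-x}) dx) = exp(-lam / e) ~ 0.332
print(np.exp(-lam * np.exp(-1.0)))
```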
Probability generating functionals
The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function $v$ on $\mathbf{R}^d$ such that $0\leq v(x)\leq 1$. For a point process $N$ the probability generating functional is defined as:
$$G(v) = \mathbb{E}\left[\prod_{x\in N} v(x)\right],$$
where the product is performed for all the points in $N$. If the intensity measure $\Lambda$ of $N$ is locally finite, then $G$ is well-defined for any measurable function $v$ on $\mathbf{R}^d$. For a Poisson point process with intensity measure $\Lambda$ the generating functional is given by:
$$G(v) = \exp\left[-\int_{\mathbf{R}^d}\big(1-v(x)\big)\,\Lambda(\mathrm{d}x)\right],$$
which in the homogeneous case reduces to
$$G(v) = \exp\left[-\lambda\int_{\mathbf{R}^d}\big(1-v(x)\big)\,\mathrm{d}x\right].$$
Moment measure
For a general Poisson point process with intensity measure $\Lambda$ the first moment measure is its intensity measure:
$$\mathbb{E}[N(B)] = \Lambda(B),$$
which for a homogeneous Poisson point process with constant intensity $\lambda$ means:
$$\mathbb{E}[N(B)] = \lambda|B|,$$
where $|B|$ is the length, area or volume (or more generally, the Lebesgue measure) of $B$.
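Concretely, a homogeneous Poisson point process on a window is simulated by drawing a Poisson number of points and scattering them uniformly. The sketch below (Python with NumPy; all parameter values are illustrative assumptions) checks the first moment identity $\mathbb{E}[N(B)]=\lambda|B|$, and also the second factorial moment $\mathbb{E}[N(B)(N(B)-1)]=(\lambda|B|)^2$ discussed further below:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, length = 2.0, 5.0        # intensity lambda and the length |B| of the window B

# For a homogeneous process, N(B) ~ Poisson(lam * |B|); given N(B), the points
# are independent and uniformly distributed over B (not needed for the counts).
counts = rng.poisson(lam * length, size=100_000)

print(counts.mean(), lam * length)                         # E[N(B)] = lam*|B| = 10
print((counts * (counts - 1)).mean(), (lam * length)**2)   # 2nd factorial moment = 100
```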
The Mecke equation
The Mecke equation characterizes the Poisson point process. Let $\mathbb{N}_\sigma$ be the space of all $\sigma$-finite measures on some general space $\mathcal{Q}$. A point process $\eta$ with intensity $\lambda$ on $\mathcal{Q}$ is a Poisson point process if and only if for all measurable functions $f:\mathcal{Q}\times\mathbb{N}_\sigma\to\mathbb{R}_{+}$ the following holds:
$$\mathbb{E}\left[\int f(x,\eta)\,\eta(\mathrm{d}x)\right] = \int \mathbb{E}\left[f(x,\eta+\delta_x)\right]\lambda(\mathrm{d}x).$$
For further details, see the references.
Factorial moment measure
For a general Poisson point process with intensity measure $\Lambda$ the $n$-th factorial moment measure is given by the expression:
$$M^{(n)}(B_1\times\cdots\times B_n) = \prod_{i=1}^{n}\Lambda(B_i),$$
where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by
$$\Lambda(B) = \mathbb{E}[N(B)].$$
For a homogeneous Poisson point process the $n$-th factorial moment measure is simply:
$$M^{(n)}(B_1\times\cdots\times B_n) = \lambda^{n}\prod_{i=1}^{n}|B_i|,$$
where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is:
$$\mu^{(n)}(x_1,\dots,x_n) = \lambda^{n}.$$
Avoidance function
The avoidance function or void probability $v$ of a point process $N$ is defined in relation to some set $B$, which is a subset of the underlying space $\mathbf{R}^d$, as the probability of no points of $N$ existing in $B$. More precisely, for a test set $B$, the avoidance function is given by:
$$v(B) = P(N(B)=0).$$
For a general Poisson point process with intensity measure $\Lambda$, its avoidance function is given by:
$$v(B) = e^{-\Lambda(B)}.$$
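A short empirical check of the exponential void probability, in the same illustrative Python/NumPy style as the sketches above (parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, length, n_trials = 1.5, 2.0, 200_000    # so Lambda(B) = lam * |B| = 3

counts = rng.poisson(lam * length, size=n_trials)  # N(B) for a homogeneous process
print((counts == 0).mean())    # empirical void probability P(N(B) = 0)
print(np.exp(-lam * length))   # avoidance function e^{-Lambda(B)} ~ 0.0498
```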
Rényi's theorem
Simple point processes are completely characterized by their void probabilities. In other words, complete information of a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point processes. The case for the Poisson process is sometimes known as Rényi's theorem, which is named after Alfréd Rényi who discovered the result for the case of a homogeneous point process in one dimension.
In one form, Rényi's theorem says that, for a diffuse (or non-atomic) Radon measure $\Lambda$ on $\mathbf{R}^d$ and a set $A$ that is a finite union of rectangles (rather than an arbitrary Borel set), if $N$ is a countable subset of $\mathbf{R}^d$ such that:
$$P(N(A)=0) = e^{-\Lambda(A)},$$
then $N$ is a Poisson point process with intensity measure $\Lambda$.
Point process operations
Mathematical operations can be performed on point processes to get new point processes and develop new mathematical models for the locations of certain objects. One example of an operation is known as thinning which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process).
Thinning
For the Poisson process, the independent $p(x)$-thinning operation results in another Poisson point process. More specifically, a $p(x)$-thinning operation applied to a Poisson point process with intensity measure $\Lambda$, in which each point is removed independently with probability $p(x)$, gives a point process of removed points that is also a Poisson point process with intensity measure $\Lambda_p$, which for a bounded Borel set $B$ is given by:
$$\Lambda_p(B) = \int_B p(x)\,\Lambda(\mathrm{d}x).$$
This thinning result of the Poisson point process is sometimes known as Prekopa's theorem. Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure
$$\Lambda_{1-p}(B) = \int_B \big(1-p(x)\big)\,\Lambda(\mathrm{d}x).$$
The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other. In other words, if a region is known to contain kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting the Poisson point process.
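The splitting property lends itself to a quick simulation check. In the Python/NumPy sketch below (the constant removal probability, window, and intensity are illustrative assumptions), a homogeneous process is thinned; the removed and kept counts have the predicted Poisson means and are essentially uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, length, p = 4.0, 10.0, 0.3   # intensity, window length |B|, removal probability

n_removed, n_kept = [], []
for _ in range(50_000):
    n = rng.poisson(lam * length)        # N(B) for the original Poisson process
    removed = (rng.random(n) < p).sum()  # each point removed independently with prob p
    n_removed.append(removed)
    n_kept.append(n - removed)

print(np.mean(n_removed), p * lam * length)     # ~ Lambda_p(B) = 12
print(np.mean(n_kept), (1 - p) * lam * length)  # ~ Lambda_{1-p}(B) = 28
print(np.corrcoef(n_removed, n_kept)[0, 1])     # ~ 0: the two processes are independent
```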
Superposition
If there is a countable collection of point processes $N_1,N_2,\dots$, then their superposition, or, in set theory language, their union, which is
$$N = \bigcup_{i=1}^{\infty} N_i,$$
also forms a point process. In other words, any points located in any of the point processes $N_1,N_2,\dots$ will also be located in the superposition of these point processes $N$.
Superposition theorem
The superposition theorem of the Poisson point process says that the superposition of independent Poisson point processes $N_1,N_2,\dots$ with mean measures $\Lambda_1,\Lambda_2,\dots$ will also be a Poisson point process with mean measure
$$\Lambda = \sum_{i=1}^{\infty}\Lambda_i.$$
In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a point $x$ is sampled from a countable union of Poisson processes, then the probability that the point $x$ belongs to the $i$-th Poisson process $N_i$ is given by:
$$P(x\in N_i) = \frac{\Lambda_i}{\sum_{j}\Lambda_j}.$$
For two homogeneous Poisson processes with intensities $\lambda_1,\lambda_2$, the two previous expressions reduce to
$$\lambda = \lambda_1 + \lambda_2$$
and
$$P(x\in N_1) = \frac{\lambda_1}{\lambda_1+\lambda_2}.$$
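Both reductions can be verified by simulation; in this Python/NumPy sketch (intensities and window are illustrative assumptions), the merged counts behave as a single Poisson process with rate $\lambda_1+\lambda_2$, and the fraction of points contributed by the first process matches $\lambda_1/(\lambda_1+\lambda_2)$:

```python
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2, length, n_trials = 1.0, 3.0, 5.0, 100_000

n1 = rng.poisson(lam1 * length, n_trials)  # counts of the first process in B
n2 = rng.poisson(lam2 * length, n_trials)  # counts of the independent second process
n = n1 + n2                                # counts of the superposition in B

print(n.mean(), (lam1 + lam2) * length)    # mean measures add: 20
print(n.var(), (lam1 + lam2) * length)     # Poisson: variance equals mean
print(n1.sum() / n.sum(), lam1 / (lam1 + lam2))  # fraction of points from process 1
```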
Clustering
The operation clustering is performed when each point of some point process is replaced by another (possibly different) point process. If the original process is a Poisson point process, then the resulting process is called a Poisson cluster point process.
Random displacement
A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement or translation. The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem, which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process.
Displacement theorem
One version of the displacement theorem involves a Poisson point process $N$ on $\mathbf{R}^d$ with intensity function $\lambda(x)$. It is then assumed the points of $N$ are randomly displaced somewhere else in $\mathbf{R}^d$ so that each point's displacement is independent and that the displacement of a point formerly at $x$ is a random vector with a probability density $\rho(x,\cdot)$. Then the new point process $N'$ is also a Poisson point process with intensity function
$$\lambda'(y) = \int_{\mathbf{R}^d}\lambda(x)\,\rho(x,y)\,\mathrm{d}x.$$
If the Poisson process is homogeneous with $\lambda(x)=\lambda>0$ and if $\rho(x,y)$ is a function of $y-x$, then
$$\lambda'(y) = \lambda.$$
In other words, after each random and independent displacement of points, the original Poisson point process still exists.
The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space $\mathbf{R}^d$ to another Euclidean space $\mathbf{R}^{d'}$, where $d'$ is not necessarily equal to $d$.
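The displacement theorem can also be illustrated numerically. In the Python/NumPy sketch below (standard-normal displacements and all values are illustrative assumptions), points of a homogeneous process on a long interval are independently displaced; counts in an interior window, away from edge effects, remain Poisson with the original rate:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, length = 2.0, 50.0     # homogeneous process on [0, 50]; displacements ~ N(0, 1)

counts = []
for _ in range(20_000):
    pts = rng.uniform(0.0, length, rng.poisson(lam * length))
    displaced = pts + rng.normal(0.0, 1.0, pts.size)  # independent displacements
    counts.append(((displaced > 10.0) & (displaced < 40.0)).sum())

counts = np.array(counts)
print(counts.mean() / 30.0, lam)      # empirical rate in the window [10, 40] ~ lam
print(counts.var() / counts.mean())   # ~ 1, as expected for a Poisson count
```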
Mapping
Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space.
Mapping theorem
If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as the mapping theorem. The theorem involves some Poisson point process with mean measure $\Lambda$ on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure $\Lambda'$.
More specifically, one can consider a (Borel measurable) function $f$ that maps a point process $N$ with intensity measure $\Lambda$ from one space $S$ to another space $T$ in such a manner so that the new point process $N'$ has the intensity measure:
$$\Lambda'(B) = \Lambda\big(f^{-1}(B)\big)$$
with no atoms, where $B$ is a Borel set and $f^{-1}$ denotes the inverse of the function $f$. If $N$ is a Poisson point process, then the new process $N'$ is also a Poisson point process with the intensity measure $\Lambda'$.
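As a concrete check of the mapping theorem (again an illustrative Python/NumPy sketch, with the map $f(x)=x^2$ chosen purely for convenience), counts of the mapped process follow $\Lambda(f^{-1}(B))$:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n_trials = 10.0, 50_000

counts = []
for _ in range(n_trials):
    x = rng.uniform(0.0, 1.0, rng.poisson(lam))  # homogeneous process on [0, 1]
    y = x ** 2                                   # mapped points under f(x) = x^2
    counts.append((y <= 0.25).sum())             # N'([0, 0.25])

counts = np.array(counts)
# Mapping theorem: E[N'([0, 0.25])] = Lambda(f^{-1}([0, 0.25])) = lam * |[0, 0.5]| = 5
print(counts.mean(), lam * 0.5)
print(counts.var(), lam * 0.5)   # the mapped process is still Poisson: variance = mean
```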
Approximations with Poisson point processes
The tractability of the Poisson process means that sometimes it is convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process. There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics.
Clumping heuristic
One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic. The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, which are considered rare or unlikely, of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used. When the events are not independent, but tend to occur in clusters or clumps, then if these clumps are suitably defined such that they are approximately independent of each other, then the number of clumps occurring will be close to a Poisson random variable and the locations of the clumps will be close to a Poisson process.
Stein's method
Stein's method is a mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which give a way to quantify how stochastically different two random mathematical objects are. Upper bounds on probability metrics such as total variation and Wasserstein distance have been derived.
Researchers have applied Stein's method to Poisson point processes in a number of ways, such as using Palm calculus. Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition. Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure.
Convergence to a Poisson point process
In general, when an operation is applied to a general point process the resulting process is usually not a Poisson point process. For example, if a point process, other than a Poisson, has its points randomly and independently displaced, then the process would not necessarily be a Poisson point process. However, under certain mathematical conditions for both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process.
Similar convergence results have been developed for thinning and superposition operations that show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin, and help explain why the Poisson process can often be used as a mathematical model of various random phenomena.
Generalizations of Poisson point processes
The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena.
Poisson-type random measures
The Poisson-type random measures (PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of the mixed binomial process and share the distributional self-similarity property of the Poisson random measure. They are the only members of the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. The Poisson random measure is independent on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances, respectively.
Poisson point processes on more general spaces
For mathematical models the Poisson point process is often defined in Euclidean space, but has been generalized to more abstract spaces and plays a fundamental role in the study of random measures, which requires an understanding of mathematical fields such as probability theory, measure theory and topology.
In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics. Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures. In this context, the Poisson and other point processes have been studied on a locally compact second countable Hausdorff space.
Cox point process
A Cox point process, Cox process or doubly stochastic Poisson process is a generalization of the Poisson point process obtained by letting its intensity measure itself be random and independent of the underlying Poisson process. The process is named after David Cox, who introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille. The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process. More generally, the intensity measure is a realization of a non-negative locally finite random measure. Cox point processes exhibit a clustering of points, which can be shown mathematically to be greater than that of Poisson point processes. The generality and tractability of Cox processes has resulted in them being used as models in fields such as spatial statistics and wireless networks.
Marked Poisson point process
For a given point process, each random point of a point process can have a random mathematical object, known as a mark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes. The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process. It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space. If the underlying point process is a Poisson point process, then the resulting point process is a marked Poisson point process.
Marking theorem
If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the marking theorem states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes.
Compound Poisson point process
The compound Poisson point process or compound Poisson process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables. In other words, for each point of the original Poisson process, there is an independent and identically distributed non-negative random variable, and then the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space.
If there is a marked Poisson point process formed from a Poisson point process $N$ (defined on, for example, $\mathbf{R}^d$) and a collection of independent and identically distributed non-negative marks such that for each point $x_i$ of the Poisson process $N$ there is a non-negative random variable $M_i$, the resulting compound Poisson process is then:
$$C(B) = \sum_{i=1}^{N(B)} M_i,$$
where $B\subset\mathbf{R}^d$ is a Borel measurable set.
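A minimal simulation of this construction (Python/NumPy; the exponential marks and all parameter values are illustrative assumptions) confirms the compound-sum mean $\mathbb{E}[C(B)]=\lambda|B|\,\mathbb{E}[M]$:

```python
import numpy as np

rng = np.random.default_rng(7)
lam, length, mean_mark = 2.0, 3.0, 0.5   # rate, window length |B|, mean mark E[M]

totals = []
for _ in range(100_000):
    n = rng.poisson(lam * length)                       # N(B): points of the process in B
    totals.append(rng.exponential(mean_mark, n).sum())  # C(B): sum of the iid marks

print(np.mean(totals), lam * length * mean_mark)        # E[C(B)] = lam*|B|*E[M] = 3
```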
If general random variables $M_i$ take values in, for example, $d$-dimensional Euclidean space $\mathbf{R}^d$, the resulting compound Poisson process is an example of a Lévy process provided that it is formed from a homogeneous Poisson point process defined on the non-negative real numbers $[0,\infty)$.
Failure process with the exponential smoothing of intensity functions
The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences. The model outperforms nine other stochastic processes on 8 real-world failure datasets when the models are used to fit the datasets, where the model performance is measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
See also
Boolean model (probability theory)
Continuum percolation theory
Compound Poisson process
Cox process
Point process
Stochastic geometry
Stochastic geometry models of wireless networks
Markovian arrival processes
Notes
References
Specific
General
Books
Articles
Point processes
Markov processes
Spatial processes
Lévy processes | Poisson point process | [
"Mathematics"
] | 10,249 | [
"Point processes",
"Point (geometry)",
"Poisson point processes"
] |
41,600,720 | https://en.wikipedia.org/wiki/Diffuse%20series | The diffuse series is a series of spectral lines in the atomic emission spectrum caused when electrons jump between the lowest p orbital and d orbitals of an atom. The total orbital angular momentum changes between 1 and 2. The spectral lines include some in the visible light, and may extend into ultraviolet or near infrared. The lines get closer and closer together as the frequency increases, never exceeding the series limit. The diffuse series was important in the development of the understanding of electron shells and subshells in atoms. The diffuse series has given the letter d to the d atomic orbital or subshell.
The diffuse series has values given by
$$\tilde{\nu} = \frac{R}{[n_0+p]^2} - \frac{R}{[n+d]^2},\qquad n\geq n_0,$$
where $R$ is the Rydberg constant, $n_0$ is the principal quantum number of the lowest P state, and $p$ and $d$ are the quantum defects of the P and D terms.
The series is caused by transitions from the lowest P state to higher energy D orbitals.
One terminology to identify the lines is 1P-mD. But note that 1P just means the lowest P state in the valence shell of an atom and that the modern designation would start at 2P, and is larger for higher atomic numbered atoms.
The terms can have different designations, mD for single line systems, mδ for doublets and md for triplets.
Since the electron in the D subshell state is not the lowest energy level for the alkali atom (the S is), the diffuse series will not show up as absorption in a cool gas; however, it shows up as emission lines.
The Rydberg correction is largest for the S term as the electron penetrates the inner core of electrons more.
The limit for the series corresponds to electron emission, where the electron has so much energy it escapes the atom.
In alkali metals the P terms are split into $^2P_{3/2}$ and $^2P_{1/2}$. This causes the spectral lines to be doublets, with a constant spacing between the two parts of the double line.
This splitting is called fine structure. The splitting is larger for atoms with higher atomic number. The splitting decreases towards the series limit. Another splitting occurs on the redder line of the doublet. This is because of splitting of the D level into $^2D_{5/2}$ and $^2D_{3/2}$. The splitting in the D level is smaller than that in the P level, and it reduces as the series limit is approached.
History
The diffuse series used to be called the first subordinate series, with the sharp series being the second subordinate, both being subordinate to (less intense than) the principal series.
Laws for alkali metals
The diffuse series limit is the same as the sharp series limit. In the late 1800s these two were termed supplementary series.
Spectral lines of the diffuse series are split into three lines in what is called fine structure. These lines cause the overall line to look diffuse. The reason this happens is that both the P and D levels are split into two closely spaced energies. P is split into $^2P_{1/2}$ and $^2P_{3/2}$. D is split into $^2D_{3/2}$ and $^2D_{5/2}$. Only three of the possible four transitions can take place because the angular momentum change cannot have a magnitude greater than one.
In 1896 Arthur Schuster stated his law: "If we subtract the frequency of the fundamental vibration from the convergence frequency of the principal series, we obtain the convergence frequency of the supplementary series". But in the next issue of the journal he realised that Rydberg had published the idea a few months earlier.
Rydberg Schuster Law: Using wave numbers, the difference between the diffuse and sharp series limits and principal series limit is the same as the first transition in the principal series.
This difference is the lowest P level.
Runge's Law: Using wave numbers the difference between the diffuse series limit and fundamental series limit is the same as the first transition in the diffuse series.
This difference is the lowest D level energy.
Lithium
Lithium has a diffuse series with diffuse lines averaged around 6103.53, 4603.0, 4132.3, 3915.0 and 3794.7 Å.
Sodium
The sodium diffuse series has wave numbers given by:
$$\tilde{\nu}_d = R\left(\frac{1}{[3+p]^2} - \frac{1}{[n+d]^2}\right),\qquad n\geq 3.$$
The sharp series has wave numbers given by:
$$\tilde{\nu}_s = R\left(\frac{1}{[3+p]^2} - \frac{1}{[n+s]^2}\right),\qquad n\geq 4.$$
Here $p$, $d$ and $s$ are the quantum defects of the P, D and S terms. When $n$ tends to infinity the diffuse and sharp series end up with the same limit.
Potassium
Alkaline earths
A diffuse series of triplet lines is designated by series letter d and formula 1p-md. The diffuse series of singlet lines has series letter D and formula 1P-mD.
Helium
Helium is in the same category as alkaline earths with respect to spectroscopy, as it has two electrons in the S subshell, just as the alkaline earths do.
Helium has a diffuse series of doublet lines with wavelengths 5876, 4472 and 4026 Å. Helium when ionised is termed HeII and has a spectrum very similar to hydrogen but shifted to shorter wavelengths. This has a diffuse series as well with wavelengths at 6678, 4922 and 4388 Å.
Magnesium
Magnesium has a diffuse series of triplets and a sharp series of singlets.
Calcium
Calcium has a diffuse series of triplets and a sharp series of singlets.
Strontium
With strontium vapour, the most prominent lines are from the diffuse series.
Barium
Barium has a diffuse series running from infrared to ultraviolet with wavelengths at 25515.7, 23255.3, 22313.4; 5818.91, 5800.30, 5777.70; 4493.66, 4489.00; 4087.31, 4084.87; 3898.58, 3894.34; 3789.72, 3788.18; 3721.17, and 3720.85 Å
History
At Cambridge University George Liveing and James Dewar set out to systematically measure spectra of elements from groups I, II and III in visible light and longer wave ultraviolet that would transmit through air. They noticed that lines for sodium were alternating sharp and diffuse. They were the first to use the term "diffuse" for the lines. They classified alkali metal spectral lines into sharp and diffuse categories. In 1890 the lines that also appeared in the absorption spectrum were termed the principal series. Rydberg continued the use of sharp and diffuse for the other lines, whereas Kayser and Runge preferred to use the term first subordinate series for the diffuse series.
Arno Bergmann found a fourth series in infrared in 1907, and this became known as Bergmann Series or fundamental series.
Heinrich Kayser, Carl Runge and Johannes Rydberg found mathematical relations between the wave numbers of emission lines of the alkali metals.
Friedrich Hund introduced the s, p, d, f notation for subshells in atoms. Others followed this use in the 1930s and the terminology has remained to this day.
References
Spectroscopy
Atomic physics
Emission spectroscopy | Diffuse series | [
"Physics",
"Chemistry"
] | 1,320 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Emission spectroscopy",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
41,602,395 | https://en.wikipedia.org/wiki/ORCA%20%28quantum%20chemistry%20program%29 | ORCA is a general-purpose quantum chemistry package featuring a variety of methods including semi-empirical, density functional theory, many-body perturbation, coupled cluster, and multireference methods. ORCA provides an easy-to-learn input structure and thus high accessibility of quantum chemical approaches and workflows. The ORCA program package is mainly developed by Frank Neese, the department of molecular theory and spectroscopy at the Max-Planck-Institut für Kohlenforschung (MPI KoFo), and the FACCTs GmbH which also manages commercial licensing to industry. ORCA is generally freely available for academic use.
History
The development of ORCA started in 1997, while Frank Neese was a postdoc at Stanford University. Development continued as Neese moved through positions at the University of Bonn, the Max-Planck-Institute for Chemical Energy Conversion, and finally the Max-Planck-Institut für Kohlenforschung. Over that time, the ORCA development team grew steadily, coming to involve the whole department of molecular theory and spectroscopy at the MPI KoFo as well as various external academic developers contributing to ORCA.
In 2016, Frank Neese co-founded the FACCTs GmbH as a spin-off of the Max-Planck-Society to commercially license the ORCA program package to industry. In contrast to many other commercialized quantum chemistry programs, ORCA remains freely available for academic use.
Since its first release, the number of active users and developers has grown steadily, peaking at 67,000 registered users and 3,300 citations to ORCA in 2023.
Selected Features
Hartree-Fock Theory
Efficient DFT and TDDFT implementation featuring RIJCOSX
MPn perturbation theory
Coupled-Cluster and local Coupled Cluster (DLPNO-CCSD(T))
Semiempirical methods
Multiscale methods including QM/MM
Release History
Beginning with version 4.0, only major and feature releases are shown.
1.0.0: 1997 (no public release)
2.0.0: Sep. 1999
2.4.0: 2004
2.6.0: 2006
2.7.0: 2007
2.9.0: 2008
3.0.0: 2011
3.0.2: Jun. 2014
3.0.3: Dec. 2014
4.0.0: Mar. 2017
4.1.0: Dec. 2018
4.2.0: Aug. 2019
5.0.0: Jul. 2021
6.0.0: Jul. 2024
Graphic interfaces
Avogadro
Chemcraft
UCSF ChimeraX
Molden
Ascalaph Designer
Gabedit
See also
List of quantum chemistry and solid-state physics software
References
External links
Official commercial website (FACCTs)
ORCA 6 manual
ORCA 6 tutorials
Computational chemistry software
Quantum chemistry | ORCA (quantum chemistry program) | [
"Physics",
"Chemistry"
] | 582 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Computational chemistry software",
"Chemistry software",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
"Computational chemistry stubs",
" molecular",
"Atomic",
"Physical chemistry stubs",... |
5,370,929 | https://en.wikipedia.org/wiki/Gluhareff%20Pressure%20Jet | The Gluhareff Pressure Jet (or tip jet) is a type of jet engine that, like a valveless pulse jet, has no moving parts. It was invented by Eugene Michael Gluhareff, a Russian-American engineer who envisioned it as a power plant for personal helicopters and compact aircraft such as microlights.
Mechanism
Having no moving parts, the engine works by having a coiled pipe in the combustion chamber that superheats the fuel (propane) before it is injected into the air-fuel inlet. In the combustion chamber, the fuel/air mixture ignites and burns, creating thrust as it leaves through the exhaust pipe. Induction and compression of the fuel/air mixture are done both by the pressure of the propane as it is injected and by the sound waves created by combustion acting on the intake stacks.
The engine has three intake stages, which are sized according to the sound created by the combustion process when running. This has exactly the same effect as the turbine and compressor in a turbojet, creating a vacuum that sucks in air. The intakes, along with the exhaust, are sonically tuned so that the locations of the pressure antinodes of the Mach disks in the propane stream match the locations of the intake apertures. Thus atmospheric pressure augments air intake as much as possible. Early prototypes produced very small amounts of thrust; Gluhareff developed the design from early experiments on producing thrust by using the pressurized fuel's kinetic energy to draw in air and compress it prior to combustion.
A 1949 reference to a very similar concept exists. Although described as a ram jet, this version heats the fuel within a closed space to create the pressure for injection and compression of the entrained air in a similar manner to the Gluhareff design and is in all fundamental respects a pressure jet of the same type.
Advantages
No moving parts, meaning very little wear.
Simple throttling, via a valve in the fuel line.
Clean burning and very low emissions, especially as it is designed to use propane which burns very cleanly.
The engine can be built at home from plans or a kit; kits are already commercially available.
Simple design means that engines can be incorporated into either a helicopter's rotor blades or a fixed-wing aircraft's wings or tailfins.
Disadvantages
Engines need to be sonically tuned for maximum efficiency.
Noise is very similar to that of a pulse jet engine, which could cause discomfort for passengers and people on the ground.
Very high engine temperatures are a problem (the engine can glow bright orange, which creates an obvious materials problem).
Difficult to mount due to operating temperatures, intake valve assembly, and fuel supply.
See also
Valveless pulsejet
Pulsejet
Pulse detonation engine
Jet engine
Rocket engine
References
External links
A company selling plans for this engine, which can be built at home.
Gluhareff Helicopter company website.
Jet engines
Engines | Gluhareff Pressure Jet | [
"Physics",
"Technology"
] | 601 | [
"Physical systems",
"Jet engines",
"Machines",
"Engines"
] |
5,372,362 | https://en.wikipedia.org/wiki/Desvenlafaxine | Desvenlafaxine, sold under the brand name Pristiq among others, is a medication used to treat depression. It is an antidepressant of the serotonin-norepinephrine reuptake inhibitor (SNRI) class and is taken by mouth. It is recommended that the need for further treatment be occasionally reassessed. It may be less effective than its parent compound venlafaxine, although some studies have found comparable efficacy.
Common side effects include dizziness, trouble sleeping, increased sweating, constipation, sleepiness, anxiety, and sexual problems. Serious side effects may include suicide in those under the age of 25, serotonin syndrome, bleeding, mania, and high blood pressure. There is a high risk of withdrawal syndrome which may occur if the dose is decreased or the medication is completely stopped. It is unclear if use during pregnancy or breastfeeding is safe.
Desvenlafaxine was approved for medical use in the United States in 2008. In Europe its application for use was denied in 2009. In 2022, it was the 208th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Medical uses
Desvenlafaxine is primarily used as a treatment for major depressive disorder. Use has only been studied up to 8 weeks. It may be less effective than venlafaxine, although some studies have found comparable efficacy with a lower rate of nausea.
Doses of 50 to 400 mg/day appear effective for major depressive disorder, although no additional benefit was demonstrated at doses greater than 50 mg/day, and adverse events and discontinuations were more frequent at higher doses.
Desvenlafaxine improves the HAM-D17 score and measures of well-being such as the Sheehan Disability Scale (SDS) and 5-item World Health Organization Well-Being Index (WHO-5).
Adverse effects
Frequency of adverse effects:
Very common adverse effects include:
Nausea
Headache
Dizziness
Dry mouth
Hyperhidrosis
Diarrhea
Insomnia
Constipation
Fatigue
Common adverse effects include:
Tremor
Blurred vision
Mydriasis
Decreased appetite
Sexual dysfunction
Insomnia
Anxiety
Elevated cholesterol and triglycerides
Proteinuria
Vertigo
Feeling jittery
Asthenia
Nervousness
Hot flush
Irritability
Abnormal dreams
Urinary hesitation
Yawning
Rash
Uncommon adverse effects include:
Hypersensitivity
Syncope
Depersonalization
Hypomania
Withdrawal syndrome
Urinary retention
Epistaxis (nose bleed)
Alopecia (hair loss)
Orthostatic hypotension
Peripheral coldness
Rare adverse effects include:
Hyponatraemia (low blood sodium)
Seizures
Extrapyramidal side effects
Hallucinations
Angioedema
Photosensitivity reaction
Stevens–Johnson syndrome
Common adverse effects whose intensity is unknown include:
Abnormal bleeding (gastrointestinal bleeds)
Narrow-angle glaucoma
Mania
Interstitial lung disease
Eosinophilic pneumonia
Hypertension
Suicidal behavior and thoughts
Serotonin syndrome
Pharmacology
Desvenlafaxine is a synthetic form of the isolated major active metabolite of venlafaxine, and is categorized as a serotonin-norepinephrine reuptake inhibitor (SNRI). When most normal metabolizers take venlafaxine, approximately 70% of the dose is metabolized into desvenlafaxine, so the effects of the two drugs are expected to be very similar. It works by blocking the "reuptake" transporters for key neurotransmitters affecting mood, thereby leaving more active neurotransmitters in the synapse. The neurotransmitters affected are serotonin (5-hydroxytryptamine) and norepinephrine (noradrenaline). It is approximately 10 times more potent at inhibiting serotonin uptake than norepinephrine uptake.
Approval status
United States
Wyeth announced on 23 January 2007 that it received an approvable letter from the Food and Drug Administration for desvenlafaxine. Final approval to sell the drug was contingent on a number of things, including:
A satisfactory FDA inspection of Wyeth's Guayama, Puerto Rico facility, where the drug is to be manufactured;
Several postmarketing surveillance commitments, and follow-up studies on low-dose use, relapse, and use in children;
Clarity by Wyeth around the company's product education plan for physicians and patients;
Approval of desvenlafaxine's proprietary name, Pristiq.
The FDA approved the drug for antidepressant use in February 2008, and was to be available in US pharmacies in May 2008.
In March 2017, the generic form of the drug was made available in the US.
Canada
On February 4, 2009, Health Canada approved use of desvenlafaxine for treatment of depression.
European Union
In 2009, an application to market desvenlafaxine for major depressive disorder in the European Union was declined. In 2012, Pfizer received authorization in Spain to market desvenlafaxine for the disorder. In August 2022, following a 14-year approval process, desvenlafaxine was brought to the market for the disorder in Germany.
Australia
Desvenlafaxine is classified as a schedule 4 (prescription only) drug in Australia. It was listed on the Pharmaceutical Benefits Scheme (PBS) in 2008 for the treatment of major depressive disorders.
See also
List of antidepressants
References
Serotonin–norepinephrine reuptake inhibitors
Cyclohexanols
Dimethylamino compounds
Phenethylamines
4-Hydroxyphenyl compounds
Drugs developed by Wyeth
Drugs developed by Pfizer
Tertiary alcohols
Human drug metabolites
Wikipedia medicine articles ready to translate
Antidepressants | Desvenlafaxine | [
"Chemistry"
] | 1,224 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
5,372,508 | https://en.wikipedia.org/wiki/Mr.%20Mystic | Mr. Mystic is a comics series featuring a magician crime-fighter, created by Will Eisner and initially drawn by Bob Powell. The strip ran as a four-page backup feature in a Sunday-newspaper comic-book insert, known colloquially as "The Spirit Section". It first appeared in 1940, distributed by the Register and Tribune Syndicate.
Mr. Mystic, whose alter ego was an American diplomat named Ken, gained his superpowers in Tibet, where he was chosen by the mysterious Council of Seven Lamas to fight against evil. The council gives him a tattoo of an arcane symbol on his forehead, and he dresses in stage magician clothing: a business suit, with a cape and turban. Among his abilities were the power to transform into animals and to grow to giant or minute sizes. After receiving these gifts, he returns to the United States to fight crime.
Publication history
Along with the series Lady Luck, the Mr. Mystic strip followed the seven-page lead feature The Spirit in a 16-page, tabloid-sized, newsprint comic book sold as part of eventually 20 Sunday newspapers with a combined circulation of as many as five million copies. "The Spirit Section" premiered on June 2, 1940 and continued through 1952.
In 1941, Mr. Mystic had a sharp-tongued fiancée, FBI agent Penny Douglas. Later on, he took on a comedy sidekick named Chowderhead.
Fred Guardineer filled in for Powell on the strip for three weeks in October 1943, but Powell resumed the strip and continued until its end on May 14, 1944.
Unlike the newspaper series The Spirit or Lady Luck, Mr. Mystic was not later reprinted in standard comic books by publisher Quality Comics, and considered the least successful, it was the first of the three series to end.
During the 1970s and 80s, several Mr. Mystic stories were reprinted in the black-and-white magazine The Spirit, during the Kitchen Sink Press portion of the magazine's run. In 1990, Eclipse Comics published a one-shot comic book reprinting the first five Mr. Mystic stories.
Mr. Mystic also appears as a regular character in Will Eisner's John Law: Dead Man Walking (2004, IDW), a collection of stories that features new adventures by writer/artist Gary Chaloner. The book features other Eisner creations including Lady Luck, John Law and Nubbin, the Shoe Shine Boy.
Quotes
References
External links
"Will Eisner's The Spirit"
Original page
Will Eisner's John Law Website (featuring Mr. Mystic)
1940 comics debuts
1944 comics endings
Comics about magic
Comics by Will Eisner
Crime comics
Eclipse Comics titles
Fiction about size change
Kitchen Sink Press titles
Therianthropy | Mr. Mystic | [
"Physics",
"Mathematics"
] | 554 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
5,376,872 | https://en.wikipedia.org/wiki/Narratio%20Prima | De libris revolutionum Copernici narratio prima, usually referred to as Narratio Prima, is an abstract of Nicolaus Copernicus' heliocentric theory, written by Georg Joachim Rheticus in 1540. It is an introduction to Copernicus's major work, De revolutionibus orbium coelestium, published in 1543, largely due to Rheticus's instigation. Narratio Prima is the first printed publication of Copernicus's theory.
History
Copernicus, born in 1473 and already well over 60 years old, had never published any astronomical work, as his only publication had been his translation of poems of Theophylact Simocatta, printed in 1509 by Johann Haller. At the same time, he had distributed his ideas among friends, with manuscripts called Commentariolus. In the 1530s, he was urged to publish by many, yet still hesitated when in 1539, Rheticus arrived in Frauenburg (Frombork) to become Copernicus' first and only pupil. Philipp Melanchthon had arranged for Rheticus to visit several astronomers and study with them.
In September 1539 Rheticus went to Danzig (Gdańsk) to visit the mayor who gave Rheticus some financial assistance to publish the Narratio Prima. This Narratio Prima, published by Franz Rhode in Danzig in 1540, is still considered to be the best introduction to Copernicus' De revolutionibus orbium coelestium. As the full title states, the Narratio was published as an open letter to Johannes Schöner of Nuremberg (Nürnberg). It was bundled together with the Encomium Prussiae which praised the spirit of humanism in Prussia.
During his two-year stay in Prussia, Rheticus published works of his own, and in cooperation with Copernicus, in 1542 a treatise on trigonometry which was a preview to the second book of De revolutionibus. Under strong pressure from Rheticus, and having seen the favorable first general reception of the Narratio Prima, Copernicus finally agreed to give the book to his close friend, bishop Tiedemann Giese, to be delivered to Nuremberg for printing by Johannes Petreius under Rheticus's supervision.
Later editions of Narratio Prima were printed in Basel, in 1541 by Robert Winter, and in 1566 by Henricus Petrus in connection with the second edition of De revolutionibus. In 1597 when Johannes Kepler's first book Mysterium Cosmographicum was prepared for publication in Tübingen, his advisor Michael Maestlin decided to include Rheticus' Narratio Prima following Kepler's text, as a supplementary explanation of heliocentric theory.
References
Bibliography
Rheticus: Narratio prima de libris revolutionum Copernici, Danzig 1540
Richard S. Westfall, Indiana University. Rheticus, George Joachim. "Catalog of the Scientific Community of the 16th and 17th Centuries," The Galileo Project.
Dennis Danielson (2006). The First Copernican: Georg Joachim Rheticus and the Rise of the Copernican Revolution. Walker & Company, New York.
Karl Heinz Burmeister: Georg Joachim Rhetikus 1514–1574. Bd. I–III. Guido Pressler Verlag, Wiesbaden 1967.
Stefan Deschauer: Die Arithmetik-Vorlesung des Georg Joachim Rheticus, Wittenberg 1536: eine kommentierte Edition der Handschrift X-278 (8) der Estnischen Akademischen Bibliothek; Augsburg: Rauner, 2003;
R. Hooykaas: G. J. Rheticus’ Treatise on holy scripture and the motion of the earth / with transl., annotations, commentary and additional chapters on Ramus-Rheticus and the development of the problem before 1650; Amsterdam: North-Holland, 1984
External links
Scienceworld article on Rheticus
Narratio Prima (1540) – scanned edition at Linda Hall Library
in English
Astronomy books
1540 books | Narratio Prima | [
"Astronomy"
] | 875 | [
"Astronomy books",
"Works about astronomy"
] |
5,376,959 | https://en.wikipedia.org/wiki/Monin%E2%80%93Obukhov%20length | The Obukhov length is used to describe the effects of buoyancy on turbulent flows, particularly in the lower tenth of the atmospheric boundary layer. It was first defined by Alexander Obukhov in 1946. It is also known as the Monin–Obukhov length because of its important role in the similarity theory developed by Monin and Obukhov. A simple definition of the Monin–Obukhov length is the height at which turbulence is generated more by buoyancy than by wind shear.
The Obukhov length is defined by
$$L = -\frac{u_*^3\,\overline{\theta_v}}{k\,g\,(\overline{w'\theta_v'})_s},$$
where $u_*$ is the frictional velocity, $\overline{\theta_v}$ is the mean virtual potential temperature, $(\overline{w'\theta_v'})_s$ is the surface virtual potential temperature flux, $k$ is the von Kármán constant, and $g$ is the gravitational acceleration. If not known, the virtual potential temperature flux can be approximated with:
$$\overline{w'\theta_v'} \approx \overline{w'\theta'} + 0.61\,\overline{\theta}\,\overline{w'r'},$$
where $\theta$ is potential temperature, and $r$ is the mixing ratio.
By this definition, $L$ is usually negative in the daytime since $\overline{w'\theta_v'}$ is typically positive during the daytime over land, positive at night when $\overline{w'\theta_v'}$ is typically negative, and becomes infinite at dawn and dusk when $\overline{w'\theta_v'}$ passes through zero.
A physical interpretation of $L$ is given by the Monin–Obukhov similarity theory. During the day, $-L$ is the height at which the buoyant production of turbulence kinetic energy (TKE) is equal to that produced by the shearing action of the wind (shear production of TKE).
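As a simple worked example, the definition translates directly into code; the following minimal Python sketch uses typical illustrative surface-layer values (the numbers are assumptions for demonstration, not taken from the source):

```python
def obukhov_length(u_star, theta_v, w_theta_v_flux, k=0.4, g=9.81):
    """Obukhov length L = -u*^3 * theta_v / (k * g * (w'theta_v')_s)."""
    return -u_star**3 * theta_v / (k * g * w_theta_v_flux)

# Convective daytime case: upward (positive) surface flux gives L < 0
print(obukhov_length(u_star=0.3, theta_v=300.0, w_theta_v_flux=0.15))   # ~ -13.8 m
# Stable night-time case: downward (negative) surface flux gives L > 0
print(obukhov_length(u_star=0.2, theta_v=285.0, w_theta_v_flux=-0.02))  # ~ +29 m
```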
References
Atmospheric dispersion modeling
Boundary layer meteorology
Fluid dynamics
Buoyancy
Meteorology in the Soviet Union
Microscale meteorology | Monin–Obukhov length | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 303 | [
"Chemical engineering",
"Atmospheric dispersion modeling",
"Environmental engineering",
"Piping",
"Environmental modelling",
"Fluid dynamics"
] |
35,980,148 | https://en.wikipedia.org/wiki/History%20of%20spectroscopy | Modern spectroscopy in the Western world started in the 17th century. New designs in optics, specifically prisms, enabled systematic observations of the solar spectrum. Isaac Newton first applied the word spectrum to describe the rainbow of colors that combine to form white light. During the early 1800s, Joseph von Fraunhofer conducted experiments with dispersive spectrometers that enabled spectroscopy to become a more precise and quantitative scientific technique. Since then, spectroscopy has played and continues to play a significant role in chemistry, physics and astronomy. Fraunhofer observed and measured dark lines in the Sun's spectrum, which now bear his name although several of them were observed earlier by Wollaston.
Origins and experimental development
The Romans were already familiar with the ability of a prism to generate a rainbow of colors. Newton is traditionally regarded as the founder of spectroscopy, but he was not the first scientist who studied and reported on the solar spectrum. The works of Athanasius Kircher (1646), Jan Marek Marci (1648), Robert Boyle (1664), and Francesco Maria Grimaldi (1665), predate Newton's optics experiments (1666–1672). Newton published his experiments and theoretical explanations of dispersion of light in his Opticks. His experiments demonstrated that white light could be split up into component colors by means of a prism and that these components could be recombined to generate white light. He demonstrated that the prism is not imparting or creating the colors but rather separating constituent parts of the white light. Newton's corpuscular theory of light was gradually succeeded by the wave theory. It was not until the 19th century that the quantitative measurement of dispersed light was recognized and standardized. As with many subsequent spectroscopy experiments, Newton's sources of white light included flames and stars, including the Sun. Subsequent studies of the nature of light include those of Hooke, Huygens, Young. Subsequent experiments with prisms provided the first indications that spectra were associated uniquely with chemical constituents. Scientists observed the emission of distinct patterns of colour when salts were added to alcohol flames.
Early 19th century (1800–1829)
In 1802, William Hyde Wollaston built a spectrometer, improving on Newton's model, that included a lens to focus the Sun's spectrum on a screen. Upon use, Wollaston realized that the colors were not spread uniformly, but instead had missing patches of colors, which appeared as dark bands in the sun's spectrum. At the time, Wollaston believed these lines to be natural boundaries between the colors, but this hypothesis was later ruled out in 1815 by Fraunhofer's work.
Joseph von Fraunhofer made a significant experimental leap forward by replacing a prism with a diffraction grating as the source of wavelength dispersion. Fraunhofer built off the theories of light interference developed by Thomas Young, François Arago and Augustin-Jean Fresnel. He conducted his own experiments to demonstrate the effect of passing light through a single rectangular slit, two slits, and so forth, eventually developing a means of closely spacing thousands of slits to form a diffraction grating. The interference achieved by a diffraction grating both improves the spectral resolution over a prism and allows for the dispersed wavelengths to be quantified. Fraunhofer's establishment of a quantified wavelength scale paved the way for matching spectra observed in multiple laboratories, from multiple sources (flames and the sun) and with different instruments. Fraunhofer made and published systematic observations of the solar spectrum, and the dark bands he observed and specified the wavelengths of are still known as Fraunhofer lines.
Throughout the early 1800s, a number of scientists pushed the techniques and understanding of spectroscopy forward. In the 1820s, both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy.
Mid-19th century (1830–1869)
In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks, thereby introducing an alternative mechanism to flame spectroscopy. In 1849, J. B. L. Foucault experimentally demonstrated that absorption and emission lines appearing at the same wavelength are both due to the same material, with the difference between the two originating from the temperature of the light source. In 1853, the Swedish physicist Anders Jonas Ångström presented observations and theories about gas spectra in his work Optiska Undersökningar (Optical investigations) to the Royal Swedish Academy of Sciences. Ångström postulated that an incandescent gas emits luminous rays of the same wavelength as those it can absorb. Ångström was unaware of Foucault's experimental results. At the same time George Stokes and William Thomson (Kelvin) were discussing similar postulates. Ångström also measured the emission spectrum from hydrogen, later labeled the Balmer lines. In 1854 and 1855, David Alter published observations on the spectra of metals and gases, including an independent observation of the Balmer lines of hydrogen.
The systematic attribution of spectra to chemical elements began in the 1860s with the work of German physicists Robert Bunsen and Gustav Kirchhoff, who found that Fraunhofer lines correspond to emission spectral lines observed in laboratory light sources. This laid the way for spectrochemical analysis in laboratory and astrophysical science. Bunsen and Kirchhoff applied the optical techniques of Fraunhofer, Bunsen's improved flame source, and a highly systematic experimental procedure to a detailed examination of the spectra of chemical compounds. They established the linkage between chemical elements and their unique spectral patterns. In the process, they established the technique of analytical spectroscopy. In 1860, they published their findings on the spectra of eight elements and identified these elements' presence in several natural compounds. They demonstrated that spectroscopy could be used for trace chemical analysis, and several of the chemical elements they discovered were previously unknown. Kirchhoff and Bunsen also definitively established the link between absorption and emission lines, including attributing solar absorption lines to particular elements based on their corresponding spectra. Kirchhoff went on to contribute fundamental research on the nature of spectral absorption and emission, including what is now known as Kirchhoff's law of thermal radiation. Kirchhoff's applications of this law to spectroscopy are captured in three laws of spectroscopy:
An incandescent solid, liquid or gas under high pressure emits a continuous spectrum.
A hot gas under low pressure emits a "bright-line" or emission-line spectrum.
A continuous spectrum source viewed through a cool, low-density gas produces an absorption-line spectrum.
In the 1860s, the husband-and-wife team of William and Margaret Huggins used spectroscopy to determine that the stars were composed of the same elements as found on Earth. They also used the non-relativistic Doppler shift (redshift) equation on the spectrum of the star Sirius in 1868 to determine its radial velocity. They were the first to take a spectrum of a planetary nebula when the Cat's Eye Nebula (NGC 6543) was analyzed. Using spectral techniques, they were able to distinguish nebulae from stars.
August Beer observed a relationship between light absorption and concentration and created the color comparator, which was later replaced by a more accurate device, the spectrophotometer.
Late 19th century (1870–1899)
In the late 19th century, new developments such as the discovery of photography, Rowland's invention of the concave diffraction grating, and Schumann's work on the vacuum ultraviolet (fluorite for prisms and lenses, low-gelatin photographic plates, and the absorption of UV in air below 185 nm) made the advance to shorter wavelengths very rapid.
In 1871, Stoney suggested using a wavenumber scale for spectra, and Hartley followed up, finding constant wavenumber differences in the triplets of zinc.
Liveing and Dewar observed that alkali spectra appeared to form a series, and Alfred Cornu found similar structure in the spectra of thallium and aluminum, setting the stage for Balmer to discover a relation connecting wavelengths in the visible hydrogen spectrum. In 1890, Kayser and Runge organized the series reported by Liveing and Dewar using names like 'principal', 'diffuse', and 'sharp'. Rydberg gave a formula for the wavenumbers of all spectral series of the alkalis and hydrogen.
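In modern notation, the Rydberg formula expresses the wavenumbers of these series as

$$\tilde{\nu} = R\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right),$$

where $\tilde{\nu}$ is the wavenumber of the line, $R$ is the Rydberg constant, and $n_1 < n_2$ are integers indexing the series and the line within it.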
In 1895, the German physicist Wilhelm Conrad Röntgen discovered and extensively studied X-rays, which were later used in X-ray spectroscopy. One year later, in 1896, French physicist Antoine Henri Becquerel discovered radioactivity, and Dutch physicist Pieter Zeeman observed spectral lines being split by a magnetic field.
In 1897, the theoretical physicist Joseph Larmor explained the splitting of spectral lines in a magnetic field by the oscillation of electrons.
Larmor also created the first solar-system model of the atom in 1897. He postulated the proton, calling it a "positive electron," and said that the destruction of this type of atom making up matter "is an occurrence of infinitely small probability."
Early 20th century (1900–1950)
The first decade of the 20th century brought the basics of quantum theory (Planck, Einstein) and the interpretation of the spectral series of hydrogen by Lyman in the VUV and by Paschen in the infrared. Ritz formulated the combination principle.
In 1912, a year before Niels Bohr, John William Nicholson created an atomic model that was both nuclear and quantum, and showed that electron oscillations in his atom matched solar and nebular spectral lines. Bohr had been working on his own atom during this period, but Bohr's model had only a single ground state and no spectra until he incorporated the Nicholson model and referenced the Nicholson papers in his model of the atom.
In 1913, Bohr formulated his quantum mechanical model of the atom. This stimulated empirical term analysis. Bohr published a theory of hydrogen-like atoms that could explain the observed wavelengths of spectral lines as due to electrons transitioning between different energy states. In 1937, "E. Lehrer created the first fully-automated spectrometer" to help more accurately measure spectral lines. With the development of more advanced instruments, such as photo-detectors, scientists were then able to more accurately measure the wavelength-specific absorption of substances.
Development of quantum mechanics
Between 1920 and 1930, fundamental concepts of quantum mechanics were developed by Pauli, Heisenberg, Schrödinger, and Dirac. Understanding of spin and the exclusion principle made it possible to understand how the electron shells of atoms are filled with increasing atomic number.
Multiply ionized atoms
This branch of spectroscopy deals with radiation related to atoms that are stripped of several electrons (multiply ionized atoms (MIA), multiply charged ions, highly charged ions). These are observed in very hot plasmas (laboratory or astrophysical) or in accelerator experiments (beam-foil, electron beam ion trap (EBIT)). The lowest excited electron shells of such ions decay into stable ground states, producing photons in the VUV, EUV, and soft X-ray spectral regions (so-called resonance transitions).
Structure studies
Further progress in the study of atomic structure was in tight connection with the advance to shorter wavelengths in the EUV region. Millikan, Sawyer, and Bowen used electric discharges in vacuum to observe emission spectral lines down to 13 nm, which they ascribed to stripped atoms. In 1927, Osgood and Hoag reported on grazing-incidence concave grating spectrographs and photographed lines down to 4.4 nm (Kα of carbon). Dauvillier used a fatty-acid crystal with large lattice spacing to extend soft X-ray spectra up to 12.1 nm, and the gap was closed. In the same period, Manne Siegbahn constructed a very sophisticated grazing-incidence spectrograph that enabled Ericson and Edlén to obtain spectra of the vacuum spark with high quality and to reliably identify lines of multiply ionized atoms up to O VI, with five stripped electrons. Grotrian developed his graphic presentation of the energy structure of the atoms. Russell and Saunders proposed their coupling scheme for the spin-orbit interaction and their generally recognized notation for spectral terms.
Accuracy
Theoretical quantum-mechanical calculations became accurate enough to describe the energy structure of some simple electronic configurations. The results of theoretical developments were summarized by Condon and Shortley in 1935.
Edlén thoroughly analyzed spectra of MIA for many chemical elements and derived regularities in energy structures of MIA for many isoelectronic sequences (ions with the same number of electrons, but different nuclear charges). Spectra of rather high ionization stages (e.g. Cu XIX) were observed.
The most exciting event was in 1942, when Edlén proved the identification of some solar coronal lines on the basis of his precise analyses of spectra of MIA. This implied that the solar corona has a temperature of a million degrees, and strongly advanced understanding of solar and stellar physics.
After World War II, experiments on balloons and rockets were started to observe the VUV radiation of the Sun (see X-ray astronomy). Research intensified after 1960, including spectrometers on satellites.
In the same period, laboratory spectroscopy of MIA became relevant as a diagnostic tool for the hot plasmas of thermonuclear devices (see Nuclear fusion), which began with the building of the stellarator by Spitzer in 1951 and continued with tokamaks, z-pinches, and laser-produced plasmas. Progress in ion accelerators stimulated beam-foil spectroscopy as a means to measure lifetimes of excited states of MIA. Much varied data on highly excited energy levels, autoionization, and inner-core ionization states was obtained.
Electron beam ion trap
Simultaneously, theoretical and computational approaches provided the data necessary for the identification of new spectra and the interpretation of observed line intensities. New laboratory and theoretical data became very useful for spectral observations in space. The result was a surge of work on MIA in the USA, England, France, Italy, Israel, Sweden, Russia, and other countries.
A new chapter in the spectroscopy of MIA opened in 1986 with the development of the EBIT (Levine and Marrs, LLNL), made possible by a favorable combination of modern high technologies such as cryogenics, ultra-high vacuum, superconducting magnets, powerful electron beams, and semiconductor detectors. EBIT sources were very quickly created in many countries (see the NIST summary for many details, as well as reviews).
EBIT enables a wide field of spectroscopic research, including the achievement of the highest ionization stages (U92+), wavelength measurements, hyperfine structure of energy levels, quantum electrodynamic studies, measurements of ionization cross-sections (CS), electron-impact excitation CS, X-ray polarization, relative line intensities, dielectronic recombination CS, magnetic octupole decay, lifetimes of forbidden transitions, charge-exchange recombination, and more.
Infrared and Raman spectroscopy
Many early scientists who studied the IR spectra of compounds had to develop and build their own instruments to record their measurements, making it very difficult to get accurate results. During World War II, the U.S. government contracted different companies to develop a method for the polymerization of butadiene to create rubber, but this could only be done through analysis of C4 hydrocarbon isomers. These contracted companies started developing optical instruments and eventually created the first infrared spectrometers. With the development of these commercial spectrometers, infrared spectroscopy became a more popular method to determine the "fingerprint" of any molecule. Raman spectroscopy was first observed in 1928 by Sir Chandrasekhara Venkata Raman in liquid substances and also by "Grigory Landsberg and Leonid Mandelstam in crystals". Raman spectroscopy is based on the observation of the Raman effect, which is defined as "The intensity of the scattered light is dependent on the amount of the polarization potential change". A Raman spectrum records light intensity vs. light frequency (wavenumber), and the wavenumber shift is characteristic of each individual compound.
Laser spectroscopy
Laser spectroscopy is a spectroscopic technique that uses lasers to determine the emitted frequencies of matter. The laser was invented because spectroscopists took the concept of its predecessor, the maser, and applied it to the visible and infrared ranges of light. The maser was invented by Charles Townes and other spectroscopists to stimulate matter in order to determine the radiative frequencies that specific atoms and molecules emitted. While working on the maser, Townes realized that more accurate detections were possible as the frequency of the emitted microwave increased. This led, a few years later, to the idea of using the visible and eventually the infrared ranges of light for spectroscopy, which became a reality with the help of Arthur Schawlow. Since then, lasers have gone on to significantly advance experimental spectroscopy. Laser light allowed for much higher precision experiments, specifically in the study of collisional effects of light, as well as in accurately detecting specific wavelengths and frequencies of light, allowing for the invention of devices such as laser atomic clocks. Lasers also made time-based spectroscopy more accurate by using the speeds or decay times of photons at specific wavelengths and frequencies to keep time. Laser spectroscopic techniques have been used for many different applications. One example is using laser spectroscopy to detect compounds in materials. One specific method, laser-induced fluorescence spectroscopy, uses spectroscopic methods to detect what materials are in a solid, liquid, or gas in situ. This allows for direct testing of materials, instead of having to take the material to a lab to figure out what the solid, liquid, or gas is made of.
See also
List of spectroscopists
Mass spectrometry
History of quantum mechanics
References
External links
MIT Spectroscopy Lab's History of Spectroscopy
Spectroscopy Magazine's "A Timeline of Atomic Spectroscopy"
Spectroscopy
Quantum mechanics
History of chemistry
History of physics
Plasma diagnostics
Ionizing radiation | History of spectroscopy | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 3,679 | [
"Ionizing radiation",
"Physical phenomena",
"Molecular physics",
"Spectrum (physical sciences)",
"Plasma physics",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Measuring instruments",
"Plasma diagnostics",
"Radiation",
"Spectroscopy"
] |
35,981,107 | https://en.wikipedia.org/wiki/Molecular-scale%20temperature | The molecular-scale temperature is the defining property of the U.S. Standard Atmosphere, 1962. It is defined by the relationship:

$$T_M(z) = \frac{M_0}{M(z)}\,T(z)$$

where:
Tm(z) is molecular-scale temperature at altitude z;
M0 is molecular weight of air at sea level;
M(z) is molecular weight of air at altitude z;
T(z) is absolute temperature at altitude z.
This definition is cited from a 1967 USAF technical report.
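A minimal numerical sketch of the defining relationship (M0 ≈ 28.9644 g/mol is the sea-level molecular weight of air used in the U.S. Standard Atmosphere; the altitude values below are illustrative only, not tabulated data):

M0 = 28.9644  # sea-level molecular weight of air, g/mol

def molecular_scale_temperature(T, M):
    # T_M(z) = T(z) * M0 / M(z)
    return T * M0 / M

# hypothetical high-altitude values, chosen only to illustrate the formula
print(molecular_scale_temperature(T=200.0, M=28.0))  # ~206.9 K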
References
Atmosphere
Temperature | Molecular-scale temperature | [
"Physics",
"Chemistry"
] | 93 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Temperature",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
35,987,396 | https://en.wikipedia.org/wiki/Gelfand%E2%80%93Kirillov%20dimension | In algebra, the Gelfand–Kirillov dimension (or GK dimension) of a right module M over a k-algebra A is:

$$\operatorname{GKdim} = \sup_{V, M_0} \limsup_{n \to \infty} \log_n \dim_k (M_0 V^n)$$

where the supremum is taken over all finite-dimensional subspaces $V \subseteq A$ and $M_0 \subseteq M$.
An algebra is said to have polynomial growth if its Gelfand–Kirillov dimension is finite.
Basic facts
The Gelfand–Kirillov dimension of a finitely generated commutative algebra A over a field is the Krull dimension of A (or equivalently the transcendence degree of the field of fractions of A over the base field.)
In particular, the GK dimension of the polynomial ring $k[x_1, \ldots, x_n]$ is n (a short verification is sketched after this list).
(Warfield) For any real number r ≥ 2, there exists a finitely generated algebra whose GK dimension is r.
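As a quick check of the definition in the polynomial case (a standard computation): take $A = M = k[x_1, \ldots, x_n]$, $V = k \oplus kx_1 \oplus \cdots \oplus kx_n$, and $M_0 = k$. Then $M_0 V^m$ is the space of polynomials of degree at most m, so

$$\dim_k (M_0 V^m) = \binom{m+n}{n} \sim \frac{m^n}{n!}, \qquad \log_m \dim_k (M_0 V^m) \to n,$$

recovering the GK dimension n.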
In the theory of D-modules
Given a right module M over the Weyl algebra $A_n$, the Gelfand–Kirillov dimension of M over the Weyl algebra coincides with the dimension of M, which is by definition the degree of the Hilbert polynomial of M. This makes it possible to prove additivity in short exact sequences for the Gelfand–Kirillov dimension, and finally to prove Bernstein's inequality, which states that the dimension of a nonzero module M must be at least n. This leads to the definition of holonomic D-modules as those with the minimal dimension n, and these modules play a great role in the geometric Langlands program.
Notes
References
Coutinho: A primer of algebraic D-modules. Cambridge, 1995
Further reading
Abstract algebra
Dimension | Gelfand–Kirillov dimension | [
"Physics",
"Mathematics"
] | 322 | [
"Geometric measurement",
"Algebra stubs",
"Physical quantities",
"Theory of relativity",
"Abstract algebra",
"Dimension",
"Algebra"
] |
44,545,880 | https://en.wikipedia.org/wiki/Acousto-electric%20effect | Acousto-electric effect is a nonlinear phenomenon of generation of electric current in a piezo-electric semiconductor by a propagating acoustic wave. The generated electric current is proportional to the intensity of the acoustic wave and to the value of its electron-induced attenuation. The effect was theoretically predicted in 1953 by Parmenter. Its first experimental observation was reported in 1957 by Weinreich and White.
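This proportionality is often summarized by the Weinreich relation,

$$j_{ae} = -\,\frac{\mu\,\alpha\,I}{v_s},$$

where $j_{ae}$ is the acousto-electric current density, $\mu$ the carrier mobility, $\alpha$ the electron-induced attenuation coefficient, $I$ the acoustic intensity, and $v_s$ the sound velocity.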
Valley acoustoelectric effect
There are two varieties of the original acousto-electric effect called the valley acoustoelectric effect and valley acoustoelectric Hall effect theoretically predicted in 2019 by Kalameitsev, Kovalev, and Savenko. These effects also represent nonlinear phenomena of generation of electric current in two-dimensional materials, such as transition metal dichalcogenide monolayers or graphene, located on a piezoelectric substrate by a propagating acoustic wave. The generated electric currents are proportional to the intensity of the acoustic wave and their directions are perpendicular to the acoustic wave vector.
See also
Physical acoustics
Semiconductors
Piezoelectricity
Elastic waves
References
Acoustics
Waves
Semiconductors | Acousto-electric effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 235 | [
"Physical phenomena",
"Matter",
"Physical quantities",
"Semiconductors",
"Classical mechanics",
"Acoustics",
"Waves",
"Materials",
"Motion (physics)",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Electrical resistance and conductance"
] |
44,549,263 | https://en.wikipedia.org/wiki/Popescu%27s%20theorem | In commutative algebra and algebraic geometry, Popescu's theorem, introduced by Dorin Popescu, states:
Let A be a Noetherian ring and B a Noetherian algebra over it. Then, the structure map A → B is a regular homomorphism if and only if B is a direct limit of smooth A-algebras.
For example, if A is a local G-ring (e.g., a local excellent ring) and B its completion, then the map A → B is regular by definition and the theorem applies.
Another proof of Popescu's theorem was given by Tetsushi Ogoma, while an exposition of the result was provided by Richard Swan.
The usual proof of the Artin approximation theorem relies crucially on Popescu's theorem. Popescu's result was proved by an alternate method, and somewhat strengthened, by Mark Spivakovsky.
See also
Ring with the approximation property
References
External links
Theorems in algebraic geometry | Popescu's theorem | [
"Mathematics"
] | 200 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
44,552,564 | https://en.wikipedia.org/wiki/Kernel%20function%20for%20solving%20integral%20equation%20of%20surface%20radiation%20exchanges | In physics and engineering, the radiative heat transfer from one surface to another is equal to the difference of incoming and outgoing radiation from the first surface. In general, the heat transfer between surfaces is governed by temperature, surface emissivity properties, and the geometry of the surfaces. The relation for heat transfer can be written as an integral equation with boundary conditions based upon surface conditions. Kernel functions can be useful in approximating and solving this integral equation.
Governing equation
The radiative heat exchange depends on the local surface temperature of the enclosure and the properties of the surfaces, but does not depend upon the medium, because the medium is assumed to neither absorb, emit, nor scatter radiation.
The governing equation of heat transfer between two surfaces Ai and Aj is
where
λ is the wavelength of the radiation rays,
I is the radiation intensity,
ε is the emissivity,
ρ is the reflectivity,
θ is the angle between the normal of the surface and the radiation exchange direction, and
φ is the azimuthal angle
If the surface of the enclosure is approximated as a gray, diffuse surface, the above equation can, after the analytical procedure, be written as
where $e_b$ is the blackbody emissive power, given as a function of the temperature of the black body by

$$e_b = \sigma T^4$$

where $\sigma$ is the Stefan–Boltzmann constant.
Kernel function
Kernel functions provide a way to manipulate data as though it were projected into a higher-dimensional space by operating on it in its original space, so that the data become more easily separable there. Kernel functions are also used in the integral equation for surface radiation exchanges, where the kernel relates to both the geometry of the enclosure and its surface properties.
In the above equation, K(r, r′) is the kernel function of the integral, which for 3-D problems takes the following form:

$$K(\mathbf{r}, \mathbf{r}') = \frac{\cos\theta_r \, \cos\theta_{r'}}{\pi \, |\mathbf{r} - \mathbf{r}'|^2} \, F(\mathbf{r}, \mathbf{r}')$$

where F assumes a value of one when the surface element I sees the surface element J, and is zero if the ray is blocked; θr is the angle at point r, and θr′ at point r′. The parameter F depends on the geometric configuration of the body, so the kernel function is highly irregular for a geometrically complex enclosure.
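A minimal numerical sketch of this 3-D kernel between two differential surface elements (the function and variable names are illustrative; the visibility flag F must be supplied by the caller):

import numpy as np

def radiation_kernel(r1, n1, r2, n2, visible=True):
    # K(r, r') = cos(theta_r) * cos(theta_r') * F / (pi * |r - r'|^2)
    d = np.asarray(r2, float) - np.asarray(r1, float)
    dist2 = d @ d
    cos_r = (d @ n1) / (np.linalg.norm(n1) * np.sqrt(dist2))    # angle at r
    cos_rp = (-d @ n2) / (np.linalg.norm(n2) * np.sqrt(dist2))  # angle at r'
    F = 1.0 if visible and cos_r > 0 and cos_rp > 0 else 0.0    # blocked or facing away
    return cos_r * cos_rp * F / (np.pi * dist2)

# two parallel patches facing each other, 1 m apart: K = 1/pi
print(radiation_kernel([0, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, -1]))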
Kernel equation for 2-D and axisymmetric geometry
For 2-D and axisymmetric configurations, the kernel function can be analytically integrated along the z or θ direction. The integration of the kernel function is
Here n denotes the unit normal of element I at azimuth angle ϕ′ = 0, and n′ refers to the unit normal of element J at an arbitrary azimuth angle ϕ′. The mathematical expressions for n and n′ are as follows:
Substituting these terms into the equation, the kernel function is rearranged in terms of the azimuth angle ϕ′:
where
Relation
holds for this particular case.
The final expression for the kernel function is
where
References
Robert Siegel, Thermal Radiation Heat Transfer, Fourth Edition
Ben Q. Li, "Discontinuous finite element in fluid dynamics and heat transfer"
J. R. Mahan Radiation Heat Transfer: A Statistical Approach, Volume 1
Richard M. Goody Yuk Ling Yung Atmospheric Radiation
K. G. Terry Hollands "The Simplified-Fredholm Integral Equation Solver and Its Use in Thermal Radiation"
Michael F. Modest Radiative Heat Transfer
External links
http://crsouza.blogspot.in/2010/03/kernel-functions-for-machine-learning.html
http://mathworld.wolfram.com/IntegralKernel.html
http://www.thermalfluidscentral.org/e-books/book-viewer.php?b=37&s=11
Heat transfer
Integral equations | Kernel function for solving integral equation of surface radiation exchanges | [
"Physics",
"Chemistry",
"Mathematics"
] | 745 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Integral equations",
"Mathematical objects",
"Equations",
"Thermodynamics"
] |
44,554,214 | https://en.wikipedia.org/wiki/Univalent%20foundations | Univalent foundations are an approach to the foundations of mathematics in which mathematical structures are built out of objects called types. Types in univalent foundations do not correspond exactly to anything in set-theoretic foundations, but they may be thought of as spaces, with equal types corresponding to homotopy equivalent spaces and with equal elements of a type corresponding to points of a space connected by a path. Univalent foundations are inspired both by the old Platonic ideas of Hermann Grassmann and Georg Cantor and by "categorical" mathematics in the style of Alexander Grothendieck. Univalent foundations depart from (although are also compatible with) the use of classical predicate logic as the underlying formal deduction system, replacing it, at the moment, with a version of Martin-Löf type theory. The development of univalent foundations is closely related to the development of homotopy type theory.
Univalent foundations are compatible with structuralism, if an appropriate (i.e., categorical) notion of mathematical structure is adopted.
History
The main ideas of univalent foundations were formulated by Vladimir Voevodsky during the years 2006 to 2009. The sole reference for the philosophical connections between univalent foundations and earlier ideas is Voevodsky's 2014 Bernays lectures. The name "univalence" is due to Voevodsky. A more detailed discussion of the history of some of the ideas that contribute to the current state of univalent foundations can be found at the page on homotopy type theory (HoTT).
A fundamental characteristic of univalent foundations is that they—when combined with the Martin-Löf type theory (MLTT)—provide a practical system for formalization of modern mathematics. A considerable amount of mathematics has been formalized using this system and modern proof assistants such as Coq and Agda. The first such library called "Foundations" was created by Vladimir Voevodsky in 2010. Now Foundations is a part of a larger development with several authors called UniMath. Foundations also inspired other libraries of formalized mathematics, such as the HoTT Coq library and HoTT Agda library, that developed univalent ideas in new directions.
An important milestone for univalent foundations was the Bourbaki Seminar talk by Thierry Coquand in June 2014.
Main concepts
Univalent foundations originated from certain attempts to create foundations of mathematics based on higher category theory. The closest earlier ideas to univalent foundations were the ideas that Michael Makkai denotes 'first-order logic with dependent sorts' (FOLDS). The main distinction between univalent foundations and the foundations envisioned by Makkai is the recognition that "higher dimensional analogs of sets" correspond to infinity groupoids and that categories should be considered as higher-dimensional analogs of partially ordered sets.
Originally, univalent foundations were devised by Vladimir Voevodsky with the goal of enabling those who work in classical pure mathematics to use computers to verify their theorems and constructions. The fact that univalent foundations are inherently constructive was discovered in the process of writing the Foundations library (now part of UniMath). At present, in univalent foundations, classical mathematics is considered to be a "retract" of constructive mathematics, i.e., classical mathematics is both a subset of constructive mathematics consisting of those theorems and constructions that use the law of the excluded middle as their assumption and a "quotient" of constructive mathematics by the relation of being equivalent modulo the axiom of the excluded middle.
In the formalization system for univalent foundations that is based on Martin-Löf type theory and its descendants such as Calculus of Inductive Constructions, the higher dimensional analogs of sets are represented by types. The collection of types is stratified by the concept of h-level (or homotopy level).
Types of h-level 0 are those equal to the one point type. They are also called contractible types.
Types of h-level 1 are those in which any two elements are equal. Such types are called "propositions" in univalent foundations. The definition of propositions in terms of the h-level agrees with the definition suggested earlier by Awodey and Bauer. So, while all propositions are types, not all types are propositions. Being a proposition is a property of a type that requires proof. For example, the first fundamental construction in univalent foundations is called iscontr. It is a function from types to types. If X is a type then iscontr X is a type that has an object if and only if X is contractible. It is a theorem (which is called, in the UniMath library, isapropiscontr) that for any X the type iscontr X has h-level 1 and therefore being a contractible type is a property. This distinction between properties that are witnessed by objects of types of h-level 1 and structures that are witnessed by objects of types of higher h-levels is very important in the univalent foundations.
Types of h-level 2 are called sets. It is a theorem that the type of natural numbers has h-level 2 (isasetnat in UniMath). It is claimed by the creators of univalent foundations that the univalent formalization of sets in Martin-Löf type theory is the best currently-available environment for formal reasoning about all aspects of set-theoretical mathematics, both constructive and classical.
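In type-theoretic notation, the h-level hierarchy can be written recursively (a standard formulation, with iscontr as the base case):

$$\mathrm{iscontr}(X) :\equiv \sum_{c:X} \prod_{x:X} (c = x)$$
$$\mathrm{isofhlevel}(0, X) :\equiv \mathrm{iscontr}(X), \qquad \mathrm{isofhlevel}(n+1, X) :\equiv \prod_{x,\,y:X} \mathrm{isofhlevel}(n, x = y)$$

So a proposition is a type whose identity types are all contractible, and a set is a type whose identity types are all propositions.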
Categories are defined (see the RezkCompletion library in UniMath) as types of h-level 3 with an additional structure that is very similar to the structure on types of h-level 2 that defines partially ordered sets. The theory of categories in univalent foundations is somewhat different and richer than the theory of categories in the set-theoretic world with the key new distinction being that between pre-categories and categories.
An account of the main ideas of univalent foundations and their connection to constructive mathematics can be found in a tutorial by Thierry Coquand. A presentation of the main ideas from the perspective of classical mathematics can be found in the 2014 review by Alvaro Pelayo and Michael Warren, as well as in the introduction by Daniel Grayson. See also: Vladimir Voevodsky (2014).
Current developments
An account of Voevodsky's construction of a univalent model of the Martin-Löf type theory with values in Kan simplicial sets can be found in a paper by Chris Kapulkin, Peter LeFanu Lumsdaine and Vladimir Voevodsky. Univalent models with values in the categories of inverse diagrams of simplicial sets were constructed by Michael Shulman. These models have shown that the univalence axiom is independent from the excluded middle axiom for propositions.
Voevodsky's model is considered to be non-constructive since it uses the axiom of choice in an ineliminable way.
The problem of finding a constructive interpretation of the rules of the Martin-Löf type theory that in addition satisfies the univalence axiom and canonicity for natural numbers remains open. A partial solution is outlined in a paper by Marc Bezem, Thierry Coquand and Simon Huber with the key remaining issue being the computational property of the eliminator for the identity types. The ideas of this paper are now being developed in several directions including the development of the cubical type theory.
New directions
Most of the work on formalization of mathematics in the framework of univalent foundations is being done using various sub-systems and extensions of the Calculus of Inductive Constructions (CIC).
There are three standard problems whose solution, despite many attempts, could not be constructed using CIC:
To define the types of semi-simplicial types, H-types or (∞,1)-category structures on types.
To extend CIC with a universe management system that would allow implementation of the resizing rules.
To develop a constructive variant of the Univalence Axiom
These unsolved problems indicate that while CIC is a good system for the initial phase of the development of the univalent foundations, moving towards the use of computer proof assistants in the work on its more sophisticated aspects will require the development of a new generation of formal deduction and computation systems.
See also
Homotopy type theory
Notes
References
External links
Libraries of formalized mathematics
Introduction to Univalent Foundations of Mathematics with Agda
Foundations of mathematics | Univalent foundations | [
"Mathematics"
] | 1,759 | [
"Foundations of mathematics"
] |
44,555,393 | https://en.wikipedia.org/wiki/Hexadehydro%20Diels%E2%80%93Alder%20reaction | In organic chemistry, the hexadehydro-Diels–Alder (HDDA) reaction is an organic chemical reaction between a diyne (2 alkyne functional groups arranged in a conjugated system) and an alkyne to form a reactive benzyne species, via a [4+2] cycloaddition reaction. This benzyne intermediate then reacts with a suitable trapping agent to form a substituted aromatic product. This reaction is a derivative of the established Diels–Alder reaction and proceeds via a similar [4+2] cycloaddition mechanism. The HDDA reaction is particularly effective for forming heavily functionalized aromatic systems and multiple ring systems in one synthetic step.
Reaction mechanism
Depending on the substrate chosen, the HDDA reaction can be initiated thermally or by the addition of a suitable catalyst, often a transition metal. The prevailing mechanism for the thermally-initiated HDDA reaction is a [4+2] cycloaddition between a conjugated diyne (1,3-dialkyne) and an alkyne (often referred to as a diynophile in analogy to the Diels–Alder dienophile) to form an ortho-benzyne species. The metal-catalyzed HDDA is thought to proceed through a similar pathway, forming a metal-stabilized benzyne, which is then trapped.
The simplest model of an HDDA reaction is the cycloaddition of butadiyne and acetylene to form ortho-benzyne (o-benzyne, shown below). This reactive intermediate (denoted by brackets) subsequently reacts with a generalized trapping reagent that consists of a nucleophilic (Nu-) and electrophilic (El-) site, giving the benzenoid product shown.
The o-benzyne intermediate can be visualized in the two resonance forms illustrated above. The most commonly depicted form is the alkyne (1), but the cumulene (1’) form can be helpful in visualizing ring formation by [4+2] cycloaddition.
Thermodynamics and kinetics
The HDDA reaction is often thermodynamically favorable (exothermic), but can have a significant kinetic barrier to reaction (high activation energy). Calculations have suggested that the formation of unsubstituted o-benzyne (from butadiyne and acetylene, above) has an activation energy of 36 kcal mol−1, but is thermodynamically favorable, estimated to be exothermic by -51 kcal mol−1. As a result of higher activation energy, some HDDA reactions require heating to elevated temperatures (>100 °C) in order to initiate.
Furthermore, the benzyne trapping step is also thermodynamically favorable, calculated to be an additional -73 kcal mol−1 for trapping of an ester-substituted o-benzyne with tert-butanol.
The HDDA [4+2] cycloaddition can occur via either a concerted pathway or a stepwise, diradical pathway. These two pathways can differ in activation energy depending on the substrate and reaction system. Computational studies have suggested that while both pathways are comparable in activation energy for unactivated (unsubstituted) diynophiles, the stepwise pathway has a lower activation energy barrier, and so is the dominant pathway, for activated diynophiles.
Regiochemistry
The regiochemistry of non-symmetrical HDDA-derived benzyne trapping can be explained by a combination of electronic and ring distortion effects. Computationally, the more obtuse angle (a) corresponds to the more electron deficient (δ+) benzyne carbon, leading to attack of the nucleophilic component at this site. Consequently, the electrophilic component adds at the more electron rich (δ-) site (b).
Terminology
The HDDA reaction is a derivative of, and mechanistically related to, the classical Diels–Alder reaction. As described by Hoye and coworkers, the HDDA reaction can be viewed conceptually as a member of a series of pericyclic reactions with increasing unsaturation (by incremental removal of hydrogen pairs). The “hexadehydro” descriptor is derived from this interpretation, as the simplest HDDA reaction product (o-benzyne, 4 hydrogens) has 6 fewer hydrogen atoms than the simplest Diels–Alder reaction product (cyclohexene, 10 hydrogens).
Formally, the hexadehydro Diels–Alder reaction describes only the formation of the benzyne, but this species is an unstable intermediate that reacts readily with a variety of trapping partners, including reaction solvents. Thus, in practice the HDDA reaction describes a two-step cascade reaction of benzyne formation and trapping to yield the final product.
Historical development
The first examples of the HDDA reaction were reported independently in 1997 by the groups of Ueda and Johnson. Johnson and co-workers observed the cyclization of 1,3,8-nonatriyne under flash vacuum thermolysis (600 °C, 10−2 torr) to form two products, indane and the dehydrogenation product indene, in 95% combined yield. Deuterium labeling studies suggested that the product was formed by a [4+2] cycloaddition to a benzyne intermediate, followed by in-situ reduction to form the observed products. Ueda and co-workers observed that acyclic tetraynes cyclized at room temperature to form 5H-fluorenol derivatives. The formation of a benzyne intermediate was determined by trapping studies, using benzene or anthracene to trap the benzyne as a Diels–Alder adduct. Ueda and co-workers further elaborated on this method in subsequent reports, trapping the benzyne using a variety of nucleophiles (oxygen, nitrogen, and sulfur-based), as well as synthesizing larger, fused-ring aromatic systems.
While known for over a decade, the HDDA reaction did not come into wider synthetic use until 2012, when Hoye and co-workers conducted a thorough investigation into the scope and utility of this cycloaddition. That paper referred to this diyne–diynophile reaction as the "hexadehydro Diels–Alder (HDDA) reaction", and this terminology has since come into more widespread use. Since 2012, the HDDA reaction has been an area of renewed interest and has attracted further study by a number of research groups.
Reaction scope
One of the main advantages of the HDDA reaction over other methods of accessing benzynes is the simplicity of the reaction system. HDDA reaction of triynes or tetraynes forms benzynes without the direct formation of by-products. In comparison, the formation of benzyne through removal of ortho-substituents on arenes results in stoichiometric amounts of byproducts from those substituents. For example, formation of benzyne from 1 mole of 2-trimethylsilylphenyl trifluoromethanesulfonate (triflate) produces 1 mole of trimethylsilyl fluoride and 1 mole of triflate ion. Byproducts can compete with other reagents for benzyne trapping, cause side-reactions, and may require additional purification.
Additionally, the HDDA reaction can be useful for substrates with sensitive functionality that might not be tolerated by other benzyne formation conditions (e.g. strong base). The thermally-initiated HDDA reaction has been shown to tolerate esters, ketones, protected amides, ethers, protected amines, aryl halides, alkyl halides, alkenes, and cyclopropanes.
Green chemistry
The HDDA reaction can fulfill several principles of green chemistry.
Atom economy – All of the atoms in the HDDA substrate remain in the product after the reaction and atoms of the trapping reagent are incorporated into the product.
Reduced waste – Formation of the benzyne species produces no stoichiometric byproducts. Products are often formed in high yield with few side-products.
Catalysis – HDDA reaction occurs thermally or with a sub-stoichiometric amount of catalyst.
Synthetic applications
Intramolecular trapping
The HDDA reaction can be used to synthesize multi-cyclic ring systems from linear precursors containing the diyne, diynophile, and the trapping group. For example, Hoye and co-workers were able to synthesize fused, tricyclic ring systems from linear triyne precursors in one step and high yields via a thermally-initiated, intramolecular HDDA reaction. Furthermore, both nitrogen- and oxygen-containing heterocycles could be incorporated by use of an appropriate precursor. In this case, the pendant silyl ether provided the trapping group, through a retro-Brook rearrangement.
Intermolecular trapping
HDDA-generated benzynes can also be trapped intermolecularly by a variety of trapping reagents. Careful choice of trapping reagent can add further functionality, including aryl halides, aryl heteroatoms (phenols and aniline derivatives), and multiple ring systems.
Ene reactions
The HDDA reaction can be used in a cascade reaction sequence with ene reactions, such as the Alder ene reaction and the aromatic ene reaction. The HDDA-generated benzyne can be trapped with a suitable ene donor that is covalently tethered to the benzyne. The benzyne serves as the enophile, while the ene can be an alkene (Alder ene) or an aromatic ring (aromatic ene). Lee and co-workers have shown an HDDA-Alder ene cascade reaction that can produce a variety of products, including medium-sized fused rings, spirocycles, and allenes.
Hoye and co-workers demonstrated a thermally-initiated triple HDDA-aromatic ene-Alder ene cascade that leads to heavily functionalized products in one-step with no additional reagents or by-products.
Dehydrogenation
HDDA-derived benzynes have also been shown to dehydrogenate saturated alkanes to form alkenes. In the absence of external trapping reagents, the benzyne intermediate can abstract vicinal hydrogen atoms from a suitable donor, often the reaction solvent (such as tetrahydrofuran or cyclooctane). This desaturates the donor alkane, forming an alkene, and traps the benzyne as a dihydrobenzenoid product. Isotopic labelling and computational studies suggest that the double hydrogen transfer mechanism occurs by a concerted pathway and that the rate of reaction is highly dependent on the conformation of the alkane donor. This reaction can be used to access 1,2,3,4-tetrasubstituted aromatic rings, a substitution pattern that can be difficult to access through other synthetic methodology.
C-H activation
The HDDA reaction can also be used as a method of C-H activation, where a pendant alkane C-H bond traps a metal-complexed aryne intermediate. Lee and co-workers observed that transition metal catalysts induced an HDDA reaction of tetraynes that was intramolecularly trapped by a pendant, sp3 C-H bond. Primary, secondary, and tertiary C-H bonds were all reactive trapping partners, with silver salts being the most effective catalysts. Deuterium labelling experiments suggest that the (sp3) C-H bond breaking and (sp2) C-H bond forming reactions occur in a concerted fashion.
Fluorination
The silver-catalyzed HDDA reaction has also been used to synthesize organofluorine compounds by use of a fluorine-containing counterion. The metal-complexed aryne intermediate can be trapped by the counterion to produce aryl rings with fluoro, trifluoromethyl, or trifluoromethylthiol substituents. Unstable counterions, such as CF3−, can be produced in-situ.
The domino HDDA reaction
Properly designed polyyne substrates have been shown to undergo efficient cascade net [4+2] cycloadditions merely upon being heated. This domino hexadehydro Diels–Alder reaction is initiated by a rate-limiting benzyne formation. Proceeding through naphthyne, anthracyne, and/or tetracyne intermediates, it enables rapid bottom-up synthesis of highly fused, polycyclic aromatic compounds.
The aza HDDA reaction
Nitriles can also participate in the HDDA reactions to generate pyridyne intermediates. In situ capturing of pyridynes gives rise to highly substituted and functionalized pyridine derivatives, which is complementary to other classical approaches for construction of this important class of heterocycles.
Radial HDDA reactions
Designer multi-ynes arrayed upon a common, central template undergo sequential, multiple cycloisomerization reactions to produce architecturally novel polycyclic compounds in a single operation. Diverse product topologies are accessible, ranging from highly fused, polycyclic aromatic compounds (PACs) to architectures having structurally complex arms adorning central phenylene or expanded phenylene cores.
References
Cycloadditions
Name reactions | Hexadehydro Diels–Alder reaction | [
"Chemistry"
] | 2,853 | [
"Name reactions"
] |
37,408,607 | https://en.wikipedia.org/wiki/Szyszkowski%20equation | The Szyszkowski Equation has been used by Meissner and Michaels to describe the decrease in surface tension of aqueous solutions of carboxylic acids, alcohols and esters at varying mole fractions. It describes the exponential decrease of the surface tension at low concentrations reasonably but should be used only at concentrations below 1 mole%.
Equation
$$\frac{\sigma_m}{\sigma_w} = 1 - 0.411 \, \log_{10}\!\left(1 + \frac{x}{a}\right)$$

with:
σm is surface tension of the mixture
σw is surface tension of pure water
a is component specific constant (see table below)
x is mole fraction of the solvated component
The equation can be rearranged to be explicit in a:

$$a = \frac{x}{10^{\left(1 - \sigma_m/\sigma_w\right)/0.411} - 1}$$

This allows the direct calculation of that component-specific parameter a from experimental data.
The equation can also be written as:
with:
γ is surface tension of the mixture
γ0 is surface tension of pure water
R is ideal gas constant 8.31 J/(mol*K)
T is temperature in K
ω is cross-sectional area of the surfactant molecules at the surface
The surface tension of pure water is dependent on temperature. At room temperature (298 K), it is equal to 71.97 mN/m.
Parameters
Meissner and Michaels published a constants for a range of components.
Example
Experimentally determined surface tensions in mixtures of water and propionic acid illustrate the behavior of the equation.
This example shows good agreement between the published value a = 2.6·10−3 and the calculated value a = 2.59·10−3 at the smallest given mole fraction of 0.00861, but at higher concentrations of propionic acid the value of a increases considerably, showing deviations from the predicted value.
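A small sketch of the two relations above, applied to this example (σw = 71.97 mN/m as given earlier; the forward-calculated σm is illustrative, not a tabulated measurement):

import math

sigma_w = 71.97  # surface tension of pure water at 298 K, mN/m

def szyszkowski_sigma(x, a):
    # Szyszkowski equation with base-10 logarithm
    return sigma_w * (1 - 0.411 * math.log10(1 + x / a))

def szyszkowski_a(x, sigma_m):
    # rearranged form, explicit in the component-specific constant a
    return x / (10 ** ((1 - sigma_m / sigma_w) / 0.411) - 1)

x = 0.00861                             # mole fraction of propionic acid
sigma_m = szyszkowski_sigma(x, 2.6e-3)  # forward calculation with the published a
print(round(sigma_m, 2))                # ~53.2 mN/m
print(szyszkowski_a(x, sigma_m))        # round trip recovers a = 2.6e-3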
See also
Bohdan Szyszkowski
References
Fluid mechanics
Surface science
Thermodynamic equations | Szyszkowski equation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 353 | [
"Thermodynamic equations",
"Equations of physics",
"Surface science",
"Civil engineering",
"Condensed matter physics",
"Thermodynamics",
"Fluid mechanics"
] |
37,408,636 | https://en.wikipedia.org/wiki/Ova%20bank | An ova bank, or cryobank, or egg cell bank, is a facility that collects and stores human ova, mainly from ova donors, primarily for the purpose of achieving pregnancies, either for the donor at a later time (i.e., to overcome issues of infertility) or through third-party reproduction, notably by artificial insemination. Ova donated in this way are known as donor ova.
General
There are currently very few ova banks in existence.
Generally, the main purpose of storing ova, at present, is to overcome infertility which may arise at a later age, or due to a disease. The ova are generally collected between 31 and 35 years of age.
The procedure of collecting ova may or may not include ovarian hyperstimulation.
It can be expected, however, that ova collection will become more important in the future, e.g. for third-party reproduction and/or for producing stem cells from unfertilized eggs (oocytes).
See also
Sperm bank
Gene bank
Artificial insemination
Genetic counseling
Genetic testing
New eugenics
Safe upper age limit for women donating ova
Eugenics
Infertility
Surrogacy
Commercial surrogacy
Assisted reproduction
Designer babies
References
Fertility medicine
Cryobiology
Biorepositories
Egg donation | Ova bank | [
"Physics",
"Chemistry",
"Biology"
] | 277 | [
"Physical phenomena",
"Phase transitions",
"Cryobiology",
"Bioinformatics",
"Biochemistry",
"Biorepositories"
] |
37,411,947 | https://en.wikipedia.org/wiki/Conveyor%20pulley | A conveyor pulley is a mechanical device used to change the direction of the belt in a conveyor system, to drive the belt, and to tension the belt. Modern pulleys are made of rolled shells with flexible end disks and locking assemblies. Early pulley engineering was developed in Australia by Josef Sitzwohl in 1948 and later by Helmuth Lange and Walter Schmoltzi in Germany.
Components
Pulleys are made up of several components including the shell, end disk, hub, shaft and locking assembly. The end disk and hub may be one piece. The locking assembly may also be replaced with a hub and bushing on lower tension pulleys. The shell is also referred to as the rim in some parts of the world.
The pulley shaft is typically sized following CEMA B105.1 in the Americas or AS 1403 in Australia.
Design programs
Conveyor Soft, pulley design program
Helix delta-d Conveyor Pulley to AS1403 Software
Pulley Maven, software for designing and analyzing conveyor pulleys
Manufacturers
Pulley manufacturers in East Germany (DDR) in 1962 included Zemag, Lauchhammer, and Köthen.
See also
Flexible shaft
References
Belt drives
Simple machines
Mechanical power transmission
Bulk material handling
Material-handling equipment | Conveyor pulley | [
"Physics",
"Technology"
] | 264 | [
"Machines",
"Physical systems",
"Mechanics",
"Simple machines",
"Mechanical power transmission"
] |
37,412,229 | https://en.wikipedia.org/wiki/VTPR | VTPR (short for Volume-Translated Peng–Robinson) is an estimation method for the calculation of phase equilibria of mixtures of chemical components. The original goal for the development of this method was to enable the estimation of properties of mixtures which contain supercritical components. This class of substances couldn't be predicted with established models like UNIFAC.
Principle
VTPR is a group contribution equation of state. This class of prediction methods combines equations of state (mostly cubic) with activity coefficient models based on group contributions, like UNIFAC. The activity coefficient model is used to adapt the equation of state parameters for mixtures by a so-called mixing rule.
The usage of an equation of state introduces all thermodynamic relations defined for equations of state into the VTPR model. This allows the calculation of densities, enthalpies, heat capacities, and more.
Equations
VTPR is based on a combination of the Peng–Robinson equation of state with a mixing rule whose parameters are determined by UNIFAC.
Equation of state
The Peng–Robinson equation of state is defined as follows:

$$P = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v(v + b) + b(v - b)}$$

The originally used α-function has been replaced by the function of Twu, Bluck, Cunningham and Coon:

$$\alpha(T) = T_r^{\,N(M-1)} \exp\!\left[L\left(1 - T_r^{\,NM}\right)\right]$$
The parameters of the Twu equation are fitted to experimental vapor pressure data of pure components and guarantee therefore a better description of the vapor pressure than the original relation.
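A brief sketch of how these pure-component quantities are evaluated (generic Peng–Robinson relations for a and b together with the Twu α-function; the L, M, N values below are placeholders, not fitted constants):

import math

R = 8.314  # J/(mol*K)

def pr_parameters(Tc, Pc):
    # generic Peng-Robinson a and b from critical data (Tc in K, Pc in Pa)
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    return a, b

def twu_alpha(T, Tc, L, M, N):
    # Twu-Bluck-Cunningham-Coon alpha function
    Tr = T / Tc
    return Tr ** (N * (M - 1)) * math.exp(L * (1 - Tr ** (N * M)))

# carbon dioxide critical data, as used in the example later in this article
a, b = pr_parameters(Tc=304.19, Pc=7475e3)
print(a, b)
print(twu_alpha(T=298.15, Tc=304.19, L=0.55, M=0.9, N=1.2))  # placeholder L, M, N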
Mixing rule
The VTPR mixing rule calculates the parameters a and b of the equation of state by

$$\frac{a}{b} = \sum_i x_i \frac{a_i}{b_i} + \frac{g^E_{res}}{-0.53087}$$

with

$$b = \sum_i \sum_j x_i x_j b_{ij}$$

and

$$b_{ij}^{3/4} = \frac{b_i^{3/4} + b_j^{3/4}}{2}$$

from the parameters ai and bi of the pure substances, their mole fractions xi, and the residual part of the excess Gibbs energy gE. The excess Gibbs energy is calculated by a modified UNIFAC model.
Model parameters
For the equation of state VTPR needs the critical temperature and pressure and additionally at least the acentric factor for all pure components in the considered mixture.
A better quality can be achieved if the acentric factor is replaced by Twu constants, which have been fitted to experimental vapor pressure data of pure components.
The mixing rule uses UNIFAC which needs a variety of UNIFAC-specific parameters. Beside some model constants the most important are group interaction parameters which are fitted to experimental vapor–liquid equilibria of mixtures.
Hence, high-quality model parameters require experimental data (pure component vapor pressures, vapor–liquid equilibrium and liquid–liquid equilibrium data, activity coefficients of mixtures, heats of mixing). These are normally provided by factual data banks like the Dortmund Data Bank, which has been the basis for the VTPR development.
Volume translation
VTPR implements a correction to the pure component densities or, equivalently, volumes. This volume translation corrects systematic deviations of the Peng–Robinson equation of state (PR EOS).
The translation constant is obtained by determining the difference between the calculated density at Tr = 0.7 and the real value of the density obtained from experimental data. Tr = 0.7 is close to the normal boiling point for many substances. The volume translation constant ci is therefore component specific.
This volume/density translation is then applied to the complete density/volume curve calculated by the PR EOS. This is sufficient because the calculated curve has the right slope and is only shifted.
The Peng–Robinson equation of state then becomes

$$P = \frac{RT}{v + c - b} - \frac{a\,\alpha(T)}{(v + c)(v + c + b) + b(v + c - b)}$$
Modifications to the UNIFAC model
UNIFAC uses two separate parts to calculate the activity coefficients: a combinatorial part and a residual part. The combinatorial part is calculated only from group-specific constants and is omitted in the VTPR model; VTPR uses only the residual part, calculated from interaction parameters between groups.
This has the side effect that the ri values (van der Waals volumes) are not needed and only the van der Waals surfaces qi are used.
In addition, the qi values are not constant properties of the groups; instead they are adjustable parameters, fitted to experimental data together with the interaction parameters between groups.
Example calculation
The prediction of a vapor–liquid equilibrium is successful even in mixtures containing supercritical components, although the mixture has to be subcritical.
In the given example, carbon dioxide is the supercritical component, with Tc = 304.19 K and Pc = 7475 kPa. The critical point of the mixture lies at T = 411 K and P ≈ 15000 kPa. The composition of the mixture is near 78 mole% carbon dioxide and 22 mole% cyclohexane.
VTPR describes this binary mixture quite well: the dew point curve as well as the bubble point curve and the critical point of the mixture.
Electrolyte systems
VTPR normally cannot handle electrolyte-containing mixtures because the underlying UNIFAC doesn't support salts. It is, however, possible to replace the UNIFAC activity coefficient model with a model that supports electrolytes, like LIFAC.
See also
PSRK (Predictive Soave–Redlich–Kwong), VTPRs predecessor of the same group contribution equation of state type but using a different equation of state, a different α function, and a different UNIFAC modification.
Literature
External links
Thermodynamic models | VTPR | [
"Physics",
"Chemistry"
] | 1,034 | [
"Thermodynamic models",
"Thermodynamics"
] |
37,412,518 | https://en.wikipedia.org/wiki/Local%20ternary%20patterns | Local ternary patterns (LTP) are an extension of local binary patterns (LBP). Unlike LBP, LTP does not threshold the pixels into 0 and 1; rather, it uses a threshold constant to threshold pixels into three values. Considering k as the threshold constant, c as the value of the center pixel, and p as a neighboring pixel, the result of thresholding is:

$$s(p) = \begin{cases} 1, & p > c + k \\ 0, & c - k \le p \le c + k \\ -1, & p < c - k \end{cases}$$
In this way, each thresholded pixel has one of the three values. Neighboring pixels are combined after thresholding into a ternary pattern. Computing a histogram of these ternary values will result in a large range, so the ternary pattern is split into two binary patterns. Histograms are concatenated to generate a descriptor double the size of LBP.
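A small sketch of this scheme for a single 3×3 neighborhood (an illustrative implementation; the clockwise neighbor ordering and the sample values are arbitrary choices):

import numpy as np

def ltp_codes(patch, k=5):
    # Return the upper and lower binary codes for the 8 neighbors of a 3x3 patch.
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ternary = [1 if patch[i] > c + k else (-1 if patch[i] < c - k else 0)
               for i in order]
    # split the ternary pattern into two binary patterns
    upper = sum((t == 1) << n for n, t in enumerate(ternary))
    lower = sum((t == -1) << n for n, t in enumerate(ternary))
    return upper, lower

patch = np.array([[54, 57, 61],
                  [52, 55, 70],
                  [44, 55, 60]])
print(ltp_codes(patch, k=5))  # (12, 64); histograms of both codes are concatenated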
See also
Local binary patterns
References
Computer vision | Local ternary patterns | [
"Technology",
"Engineering"
] | 163 | [
"Packaging machinery",
"Computer science stubs",
"Computer science",
"Artificial intelligence engineering",
"Computing stubs",
"Computer vision"
] |
38,847,178 | https://en.wikipedia.org/wiki/Free%20motion%20equation | A free motion equation is a differential equation that describes a mechanical system in the absence of external forces, but in the presence only of an inertial force depending on the choice of a reference frame.
In non-autonomous mechanics on a configuration space $Q \to \mathbb{R}$, a free motion equation is defined as a second-order non-autonomous dynamic equation on $Q$ which is brought into the form

$$\ddot{\overline{q}}{}^i = 0$$

with respect to some reference frame $(t, \overline{q}^i)$ on $Q$. Given an arbitrary reference frame $(t, q^i)$ on $Q$, a free motion equation reads
where $\Gamma$ is the connection on $Q$ associated with the initial reference frame $(t, \overline{q}^i)$. The right-hand side of this equation is treated as an inertial force.
A free motion equation need not exist in general. It can be defined if and only if the configuration bundle $Q \to \mathbb{R}$ of a mechanical system is a toroidal cylinder.
See also
Non-autonomous mechanics
Non-autonomous system (mathematics)
Analytical mechanics
Fictitious force
References
De Leon, M., Rodrigues, P., Methods of Differential Geometry in Analytical Mechanics (North Holland, 1989).
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010).
Theoretical physics
Classical mechanics
Differential equations
Dynamical systems | Free motion equation | [
"Physics",
"Mathematics"
] | 247 | [
"Theoretical physics",
"Classical mechanics stubs",
"Classical mechanics",
"Mathematical objects",
"Equations",
"Differential equations",
"Mechanics",
"Theoretical physics stubs",
"Dynamical systems"
] |
38,847,195 | https://en.wikipedia.org/wiki/TP%20model%20transformation%20in%20control%20theory | Baranyi and Yam proposed the TP model transformation as a new concept in quasi-LPV (qLPV) based control, which plays a central role in the highly desirable bridging between identification and polytopic systems theories. It is also used as a TS (Takagi–Sugeno) fuzzy model transformation. It is uniquely effective in manipulating the convex hull of polytopic forms (or TS fuzzy models) and, hence, has revealed and proved that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern linear matrix inequality based control theory. Thus, although it is a transformation in a mathematical sense, it has established a conceptually new direction in control theory and has laid the ground for further new approaches towards optimality.
For details please visit: TP model transformation.
TP-tool MATLAB toolbox
A free MATLAB implementation of the TP model transformation can be downloaded at or an old version of the toolbox is available at MATLAB Central . Be careful: in the MATLAB toolbox the dimensions of the core tensor are assigned in the opposite order to the notation used in the related literature. In some variants of the toolbox, the first two dimensions of the core tensor are assigned to the vertex systems; in the TP model literature it is the last two. A simple example is given below.
clear
M1=20; % Grid density
M2=20;
omega1=[-1,1]; %Interval
omega2=[-1,1];
domain=[omega1; omega2];
for m1=1:M1
for m2=1:M2
p1=omega1(1)+(omega1(2)-omega1(1))/M1*(m1-1); %sampling grid
p2=omega2(1)+(omega2(2)-omega2(1))/M2*(m2-1);
SD(m1,m2,1,:)=[1 0]; % SD is the discretized system matrix
SD(m1,m2,2,:)=[(-1-0.67*p1*p1) (1.726*p2*p2)];
end
end
[S,U, sv]=hosvd(SD,[1,1,0,0],1e-12); % Finding the TP structure
UA{1}=U{1}; % This is the HOSVD based canonical form
UA{2}=U{2};
ns1 = input('Results of SNNN TS fuzzy model');
UC=genhull(UA,'snnn'); % SNNN weighting functions
UCP{1}=pinv(UC{1});
UCP{2}=pinv(UC{2});
SC=tprods(SD,UCP); %This is to find the core tensor
H(:,:)=SC(1,1,:,:) %This is to show the vertices of the TP model
H(:,:)=SC(1,2,:,:)
H(:,:)=SC(2,1,:,:)
H(:,:)=SC(2,2,:,:)
figure(1)
hold all
plothull(U{1}, omega1) %Draw the weighting functions of p1
title('Weighting functions for p_{1}');
xlabel('p_{1}')
ylabel('Weighting functions')
grid on
box on
figure(2)
hold all
plothull(UC{2}, omega2) %Show the weighting functions of p2
title('Weighting functions for p_{2}');
xlabel('p_{2}')
ylabel('Weighting functions')
grid on
box on
ns2 = input('Results of CNO TS fuzzy model');
UC=genhull(UA,'cno'); %Create CNO type weighting functions
UCP{1}=pinv(UC{1});
UCP{2}=pinv(UC{2});
SC=tprods(SD,UCP); %Find the core tensor
H(:,:)=SC(1,1,:,:) %Show the vertices of the TP model
H(:,:)=SC(1,2,:,:)
H(:,:)=SC(2,1,:,:)
H(:,:)=SC(2,2,:,:)
figure(1)
hold all
plothull(U{1}, omega1) %Show the weighting functions of p1
title('Weighting functions for p_{1}');
xlabel('p_{1}')
ylabel('Weighting functions')
grid on
box on
figure(2)
hold all
plothull(UC{2}, omega2) %Show the weighting functions of p2
title('Weighting functions for p_{2}');
xlabel('p_{2}')
ylabel('Weighting functions')
Once the feedback vertexes have been derived for each vertex of the TP model, the controller can be calculated over the same polytope (see PDC design by Tanaka):
W = queryw1(UC,domain,p); % computing the weighting values over the parameter vector
F = tprods(K,W); % calculating the parameter dependent feedback F(p)
F = shiftdim(F)
U=-F*x % calculate the control value.
Key features for control analysis and design
The TP model transformation transforms a given qLPV model into a (tensor product type) polytopic form, irrespective of whether the model is given in the form of analytical equations resulting from physical considerations, or as an outcome of soft computing based identification techniques (such as neural networks or fuzzy logic based methods, or as a result of a black-box identification).
Further, the TP model transformation is capable of manipulating the convex hull defined by the polytopic form, which is a necessary step in polytopic qLPV model-based control analysis and design theories.
Related definitions
Linear Parameter-Varying (LPV) state-space model
with input , output and state
vector . The system matrix is a parameter-varying object, where is a time varying -dimensional parameter vector which is an element of
closed hypercube . As a matter of fact, further parameter dependent channels can be inserted to that represent various control performance requirements.
quasi Linear Parameter-Varying (qLPV) state-space model
in the above LPV model can also include some elements of the state vector
, and, hence this model belongs to the class of non-linear systems, and is also referred to as a quasi LPV (qLPV) model.
TP type polytopic Linear Parameter-Varying (LPV) state-space model
with input , output and state
vector . The system matrix is a parameter-varying object, where is a time varying -dimensional parameter vector which is an element of
closed hypercube , and the weighting functions are the elements of vector . Core tensor contains elements which are the vertexes of the system.
As a matter of fact, further parameter dependent channels can be inserted to that represent various control performance requirements.
Here
and
This means that is within the vertexes of the system (within the convex hull defined by the vertexes) for all .
Note that the TP type polytopic model can always be given in the form
where the vertexes are the same as in the TP type polytopic form and the multi variable weighting functions are the product of the one variable weighting functions according to the TP type polytopic form, and r is the linear index equivalent of the multi-linear indexing .
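As a sketch of these two equivalent forms (our notation; the original display was lost in extraction), the TP type polytopic model can be written as
$$S(p(t)) \;=\; \mathcal{S} \boxtimes_{n=1}^{N} \mathbf{w}_n(p_n(t)) \;=\; \sum_{r=1}^{R} w_r(p(t))\, S_r,$$
where $\mathcal{S}$ is the core tensor collecting the vertex systems $S_r$, each $\mathbf{w}_n$ collects the one-variable weighting functions of parameter $p_n$, and each multivariable weight $w_r(p)$ is the product of one-variable weights singled out by the multi-linear index corresponding to r.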
TP model transformation for qLPV models
Assume a given qLPV model , where , whose TP polytopic structure may be unknown (e.g. it is given by neural networks). The TP model transformation determines its TP polytopic structure as
,
namely it generates core tensor and weighting functions of for all . Its free MATLAB implementation is downloadable at or at MATLAB Central .
If the given model does not have (finite element) TP polytopic structure, then the TP model transformation determines its approximation:
where a trade-off is offered by the TP model transformation between complexity (the number of vertexes stored in the core tensor, or the number of weighting functions) and the approximation accuracy. The TP model can be generated according to various constraints. Typical TP models generated by the TP model transformation are:
HOSVD canonical form of qLPV models,
Various kinds of TP type polytopic form (this feature is very important in control performance optimization).
TP model based control design
Key methodology
Since the TP type polytopic model is a subset of the polytopic model representations, the analysis and design methodologies developed for polytopic representations are applicable for the TP type polytopic models as well.
One typical way is to search the nonlinear controller in the form:
where the vertexes of the controller are calculated from . Typically, the vertexes are substituted into Linear Matrix Inequalities in order to determine .
In TP type polytopic form the controller is:
where the vertexes stored in the core tensor are determined from the vertexes stored in . Note that the polytopic observer or other components can be generated in a similar way; these vertexes are also generated from .
Convex hull manipulation based optimization
The polytopic representation of a given qLPV model is not invariant. That is, a given model has a number of different representations:
where . In order to generate an optimal control of the given model we apply, for instance, LMIs. Thus, if we apply the selected LMIs to the above polytopic model we arrive at:
Since the LMIs realize a non-linear mapping between the vertexes in and , we may find very different controllers for each representation. This means that we have a number of different "optimal" controllers for the same system . Thus, the question is: which one of the "optimal" controllers is really the optimal one? The TP model transformation lets us manipulate the weighting functions systematically, which is equivalent to manipulating the vertexes. The geometrical meaning of this manipulation is the manipulation of the convex hull defined by the vertexes. We can easily demonstrate the following facts:
Tightening the convex hull typically decreases the conservativeness of the solution, and so may lead to better control performance. For instance, if we have a polytopic representation
of a given model , then we can generate a controller as
then we have solved the control problem for all systems that can be given by the same vertexes, but with different weighting functions:
where
If one of these systems is very hard to control (or even uncontrollable), then we arrive at a very conservative solution (or infeasible LMIs). Therefore, we expect that tightening the convex hull excludes such problematic systems.
It can also easily be demonstrated that observer design typically needs a large convex hull. Thus, when we design both controller and observer, we need to find the optimal convex hull between the tight one and the large one. The same papers also demonstrate that using different convex hulls for observer and controller (if the separation principle is applicable) may lead to an even better solution.
Properties of the TP model transformation in qLPV theories
It can be executed uniformly, irrespective of whether the model is given in the form of analytical equations resulting from physical considerations, or as an outcome of soft computing based identification techniques (such as neural networks or fuzzy logic based methods), or as a result of a black-box identification, without analytical interaction, within a reasonable amount of time. Thus, the transformation replaces the analytical and in many cases complex and not obvious conversions with numerical, tractable, straightforward operations that can be carried out in a routine fashion.
It generates the HOSVD-based canonical form of qLPV models, which is a unique representation. This form extracts the unique structure of a given qLPV model in the same sense as the HOSVD does for tensors and matrices, in a way such that:
the number of LTI components is minimized;
the weighting functions are one variable functions of the parameter vector in an orthonormed system for each parameter (singular functions);
the LTI components (vertex components) are also in orthogonal positions;
the LTI systems and the weighting functions are ordered according to the higher-order singular values of the parameter vector;
it has a unique form (except for some special cases);
introduces and defines the rank of the qLPV model by the dimensions of the parameter vector;
The core step of the TP model transformation was extended to generate different types of convex polytopic models, in order to focus on the systematic (numerical and automatic) modification of the convex hull instead of developing new LMI equations for feasible controller design (this is the widely adopted approach). It is worth noting that both the TP model transformation and the LMI-based control design methods are numerically executable one after the other, and this makes the resolution of a wide class of problems possible in a straightforward and tractable, numerical way.
Based on the higher-order singular values (which express the rank properties of the given qLPV model, see above, for each element of the parameter vector in norm), the TP model transformation offers a trade-off between the complexity of the TP model (polytopic form), and hence of the LMI design, and the accuracy of the resulting TP model.
The TP model transformation is executed before utilizing the LMI design. This means that when we start the LMI design we already have the global weighting functions and during control we do not need to determine a local weighting of the LTI systems for feedback gains to compute the control value at every point of the hyperspace the system should go through. Having predefined continuous weighting functions also ensures that there is no friction in the weighting during control.
References
Control theory | TP model transformation in control theory | [
"Mathematics"
] | 3,023 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
38,848,871 | https://en.wikipedia.org/wiki/Nakajima%E2%80%93Zwanzig%20equation | The Nakajima–Zwanzig equation (named after the physicists who developed it, Sadao Nakajima and Robert Zwanzig) is an integral equation describing the time evolution of the "relevant" part of a quantum-mechanical system. It is formulated in the density matrix formalism and can be regarded as a generalization of the master equation.
The equation belongs to the Mori-Zwanzig formalism within the statistical mechanics of irreversible processes (named after Hazime Mori). By means of a projection operator, the dynamics is split into a slow, collective part (relevant part) and a rapidly fluctuating irrelevant part. The goal is to develop dynamical equations for the collective part.
The Nakajima-Zwanzig (NZ) generalized master equation is a formally exact approach for simulating quantum dynamics in condensed phases. This framework is particularly designed to address the dynamics of a reduced system interacting with a larger environment, often represented as a system coupled to a bath. Within the NZ framework, one can choose between time-convolution (TC) and time-convolutionless (TCL) forms of the quantum master equations.
The TC approach involves memory effects, where the future state of the system depends on its entire history (Non-Markovian dynamics). The TCL approach formulates the dynamics where the system's rate of change at any moment depends only on its current state, simplifying calculations by neglecting memory effects (Markovian dynamics).
Derivation
The total Hamiltonian of a system interacting with its environment (or bath) is typically expressed in system-bath form,
where is the system Hamiltonian, is the bath Hamiltonian, and describes the coupling between them.
The starting point is the quantum mechanical version of the von Neumann equation, also known as the Liouville equation:
$$\frac{\partial \rho}{\partial t} = -\frac{i}{\hbar}\left[H, \rho\right] = -i\mathcal{L}\rho,$$
where the Liouville operator $\mathcal{L}$ is defined as $\mathcal{L}A = \frac{1}{\hbar}\left[H, A\right]$.
In the Nakajima-Zwanzig formulation, a key step involves defining a projection operator that projects the total density operator onto the subspace of the system of interest. The complementary operator projects onto the orthogonal subspace, effectively separating the system from the bath.
The Liouville – von Neumann equation can thus be represented as
The dynamics of the projected state , under any idempotent projection operator (where $\mathcal{P}^2 = \mathcal{P}$), is described by the NZ generalized quantum master equation (GQME). This equation can be used to obtain a closed equation of motion for the reduced system dynamics, focusing solely on the dynamics within the subsystem of interest.
In practice, the specific form of the projection operator can be chosen based on the problem at hand. One common choice involves defining using a reference nuclear density operator such that .
This ensures that remains idempotent. Using this projection, tracing over the nuclear Hilbert space leads to a generalized quantum master equation that describes the reduced electronic density operator which accounts for both Markovian dynamics generated by the Hamiltonian and non-Markovian dynamics due to coupling between electronic and nuclear degrees of freedom.
The first term on the right-hand side describes the dynamics driven by the Hamiltonian, which are Hamiltonian and Markovian in nature, while the other two terms represent the non-Hamiltonian and non-Markovian dynamics that arise from the interactions between the electronic and nuclear degrees of freedom.
The memory kernel captures the effects of the bath on the system over the time interval (0, t), reflecting non-Markovian dynamics where the system's history influences its future evolution.
The inhomogeneous term represents the influence of the initial state of the bath on the system at time t, which is crucial for accurately describing the system dynamics from an initial condition.
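As a sketch of the structure just described (using the convention $\partial_t\rho = -i\mathcal{L}\rho$ with a time-independent projection $\mathcal{P}$ and complement $\mathcal{Q} = 1 - \mathcal{P}$; sign and factor conventions vary between references), the generalized master equation takes the form
$$\frac{\partial}{\partial t}\mathcal{P}\rho(t) = -i\,\mathcal{P}\mathcal{L}\mathcal{P}\rho(t) - i\,\mathcal{P}\mathcal{L}\,e^{-i\mathcal{Q}\mathcal{L}t}\,\mathcal{Q}\rho(0) - \int_0^t \mathrm{d}s\;\mathcal{P}\mathcal{L}\,e^{-i\mathcal{Q}\mathcal{L}(t-s)}\,\mathcal{Q}\mathcal{L}\,\mathcal{P}\rho(s),$$
in which the first term generates the Markovian part, the second is the inhomogeneous (initial-condition) term, and the integrand defines the memory kernel $\mathcal{K}(t-s) = \mathcal{P}\mathcal{L}\,e^{-i\mathcal{Q}\mathcal{L}(t-s)}\,\mathcal{Q}\mathcal{L}$.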
The memory kernel is crucial for simulating the dynamics of the electronic degrees of freedom. However, calculating presents difficulties due to its time-dependent nature. Additionally, the time dependency of is complex because it is governed by the projection-dependent propagator, . Therefore, the exact memory kernel is difficult to calculate except for several analytically solvable models proposed by Shi-Geva to remove the projection operator .
See also
Redfield equation
Notes
References
E. Fick, G. Sauermann: The Quantum Statistics of Dynamic Processes Springer-Verlag, 1983, .
Heinz-Peter Breuer, Francesco Petruccione: Theory of Open Quantum Systems. Oxford, 2002
Hermann Grabert Projection operator techniques in nonequilibrium statistical mechanics, Springer Tracts in Modern Physics, Band 95, 1982
R. Kühne, P. Reineker: Nakajima-Zwanzig's generalized master equation: Evaluation of the kernel of the integro-differential equation, Zeitschrift für Physik B (Condensed Matter), Band 31, 1978, S. 105–110,
Xu, M.; Yan, Y.; Liu, Y.; Shi, Q. Convergence of High Order Memory Kernels in the Nakajima-Zwanzig Generalized Master Equation and Rate Constants: Case Study of the Spin-Boson Model. Journal of Chemical Physics 2018, 148 (16). https://doi.org/10.1063/1.5022761.
Mulvihill, E.; Geva, E. A Road Map to Various Pathways for Calculating the Memory Kernel of the Generalized Quantum Master Equation. Journal of Physical Chemistry B 2021, 125 (34), 9834–9852. https://doi.org/10.1021/acs.jpcb.1c05719.
Mulvihill, E.; Schubert, A.; Sun, X.; Dunietz, B. D.; Geva, E. A Modified Approach for Simulating Electronically Nonadiabatic Dynamics via the Generalized Quantum Master Equation. Journal of Chemical Physics 2019, 150 (3). https://doi.org/10.1063/1.5055756.
External links
Quantum mechanics
Statistical mechanics | Nakajima–Zwanzig equation | [
"Physics"
] | 1,223 | [
"Statistical mechanics",
"Theoretical physics",
"Quantum mechanics"
] |
38,849,672 | https://en.wikipedia.org/wiki/Cardiophysics | Cardiophysics is an interdisciplinary science that stands at the junction of cardiology and medical physics, with researchers using the methods of, and theories from, physics to study the cardiovascular system at different levels of its organisation, from the molecular scale to whole organisms. Formed historically as part of systems biology, cardiophysics is designed to reveal connections between the physical mechanisms underlying the organization of the cardiovascular system and the biological features of its functioning.
Zbigniew R. Struzik appears to be the first author to have used the term in a scientific publication, in 2004.
The term cardiovascular physics is used interchangeably.
See also
Medical physics
Important publications in medical physics
Biomedicine
Biomedical engineering
Physiome
Nanomedicine
References
Books
Papers
External links
Bioelectric Information Processing Laboratory of the Institute for Information Transmission Problems RAS.
The Group of Experimental and Clinical Cardiology in the Laboratory of Physiology of emotion, Research Institute of normal physiology by Anokhin RAMS
Oxford Cardiac Electrophysiology Group, led for many years by Prof. Denis Noble
Cardiac Biophysics and Systems Biology group of National Heart & Lung Institute of Imperial College London
Group of Nonlinear Dynamics & Cardiovascular Physics of the 1st Faculty of Mathematics and Natural Sciences in the Institute of Physics of Humboldt University of Berlin
Medical physics
Applied and interdisciplinary physics | Cardiophysics | [
"Physics",
"Biology"
] | 261 | [
"Medical physics",
"Applied and interdisciplinary physics",
"Biophysics"
] |
38,853,392 | https://en.wikipedia.org/wiki/Urbanization%20in%20the%20United%20States | The urbanization of the United States has progressed throughout its entire history. Over the last two centuries, the United States of America has been transformed from a predominantly rural, agricultural nation into an urbanized, industrial one. This was largely due to the Industrial Revolution in the United States (and parts of Western Europe) in the late 18th and early 19th centuries and the rapid industrialization which the United States experienced as a result. In 1790, only about one out of every twenty Americans (on average) lived in urban areas (cities), but this ratio had dramatically changed to one out of four by 1870, one out of two by 1920, two out of three in the 1960s, and four out of five in the 2000s.
Urbanization
The urbanization of the United States occurred over a period of many years, with the nation only attaining urban-majority status between 1910 and 1920. Currently, over four-fifths of the U.S. population resides in urban areas. The United States Census Bureau changed its classification and definition of urban areas in 1950 and again in 1990, and caution is thus advised when comparing urban data from different time periods.
Urbanization was fastest in the Northeastern United States, which acquired an urban majority by 1880. Some Northeastern U.S. states had already acquired an urban majority before then, including Massachusetts and Rhode Island (majority-urban by 1850), and New York (majority-urban since about 1870). The Midwestern and Western United States became urban majority in the 1910s, while the Southern United States only became urban-majority after World War II, in the 1950s.
The Western U.S. is the most urbanized part of the country today, followed closely by the Northeastern United States. The Southern U.S. experienced rapid industrialization after World War II, and is now over three-quarters urban, having almost the same urban percentage in 2010 as the Midwestern United States. Just four U.S. states (out of fifty) have a rural majority today, and even some of these states (such as Mississippi) are continuing to urbanize. Some U.S. states currently have an urban percentage around or above 90%, an urbanization rate almost unheard of a century ago.
The states of Maine and Vermont have bucked the trend towards greater urbanization which is exhibited throughout the rest of the United States. Maine's highest urban percentage ever was less than 52% (in 1950), and today less than 39% of the state's population resides in urban areas. Vermont is currently the least urban U.S. state; its urban percentage (35.1%) is less than half of the United States average (81%). Maine and Vermont were less urban than the United States average in every U.S. census since the first one in 1790.
Historical statistics
The data in this table/section are all from the U.S. Census Bureau. Note that the definition of urban population has changed over time. New definitions were used for the Censuses conducted for 1900, 1950, 2000, and 2020.
(a) This datum is from 1899 instead of from 1900.
See also
Borchert's Epochs
Demographics of the United States
Largest cities in the United States by population by decade
List of United States cities
List of United States cities by area
List of United States cities by population
:Category:Timelines of cities in the United States
References
Society of the United States
Economy of the United States
Geography of the United States
Urban planning
Urban planning in the United States
United States | Urbanization in the United States | [
"Engineering"
] | 714 | [
"Urban planning",
"Architecture"
] |
38,857,273 | https://en.wikipedia.org/wiki/PhIP-Seq | Phage immunoprecipitation sequencing (PhIP-Seq) is a method that combines barcoded DNA high-throughput sequencing and proteomics to determine the levels of binding of antibodies to epitopes. It has been used to study the autoantibody repertoire of autoimmune diseases such as multiple sclerosis, type 2 diabetes and rheumatoid arthritis.
References
DNA sequencing | PhIP-Seq | [
"Chemistry",
"Biology"
] | 88 | [
"Molecular biology techniques",
"DNA sequencing"
] |
24,719,493 | https://en.wikipedia.org/wiki/TA%20cloning | TA cloning (also known as rapid cloning or T cloning) is a subcloning technique that avoids the use of restriction enzymes and is easier and quicker than traditional subcloning. The technique relies on the ability of adenine (A) and thymine (T) (complementary basepairs) on different DNA fragments to hybridize and, in the presence of ligase, become ligated together. PCR products are usually amplified using Taq DNA polymerase which preferentially adds an adenine to the 3' end of the product. Such PCR amplified inserts are cloned into linearized vectors that have complementary 3' thymine overhangs.
Procedure
Creating the insert
The insert is created by PCR using Taq polymerase. This polymerase lacks 3' to 5' proofreading activity and, with a high probability, adds a single 3'-adenine overhang to each end of the PCR product. It is best if the PCR primers have guanines at the 5' end, as this maximizes the probability of Taq DNA polymerase adding the terminal adenosine overhang. Thermostable polymerases with extensive 3' to 5' exonuclease activity should not be used, as they do not leave the 3' adenine overhangs.
Creating the vector
The target vector is linearized and cut with a blunt-end restriction enzyme. This vector is then tailed with dideoxythymidine triphosphate (ddTTP) using terminal transferase. It is important to use ddTTP to ensure the addition of only one T residue. This tailing leaves the vector with a single 3'-overhanging thymine residue on each blunt end. Manufacturers commonly sell TA Cloning "kits" with a wide range of prepared vectors that have already been linearized and tagged with an overhanging thymine.
Benefits and drawbacks
Given that there is no need for restriction enzymes other than for generating the linearized vector, the procedure is much simpler and faster than traditional subcloning. There is also no need to add restriction sites when designing primers and thus shorter primers can be used saving time and money. In addition, in instances where there are no viable restriction sites that can be used for traditional cloning, TA cloning is often used as an alternative. The major downside of TA cloning is that directional cloning is not possible, so the gene has a 50% chance of getting cloned in the reverse direction.
References
See also
TOPO cloning
Cloning
Molecular biology
Biotechnology
Molecular biology techniques | TA cloning | [
"Chemistry",
"Engineering",
"Biology"
] | 531 | [
"Cloning",
"Genetic engineering",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry"
] |
23,233,653 | https://en.wikipedia.org/wiki/SN%202008ha | SN 2008ha was a type Ia supernova which was first observed around November 7, 2008 in the galaxy UGC 12682, which lies in the constellation Pegasus at a distance of about from Earth.
SN 2008ha was unusual in several ways: with an absolute V band magnitude of −14.2 it is one of the faintest supernovae ever observed; its host galaxy type very rarely produces supernovae. Another unusual feature of SN 2008ha was its low expansion velocity of only ~2000 km/s at maximum brightness, which indicates a very small kinetic energy released in the explosion. For comparison, SN 2002cx expanded at a velocity of ~5000 km/s whereas typical
SN Ia expand at around ~10,000 km/s. The low expansion velocity of SN2008ha resulted in relatively small Doppler broadening of spectral emission lines and this led to higher quality data.
The supernova was studied with ultraviolet, optical, and near-infrared photometry as well as optical spectra, using the Magellan telescopes in Chile, the MMT telescope in Arizona, the Gemini and Keck telescopes in Hawaii, and NASA's Swift satellite. Spectroscopically, SN 2008ha was identified as a SN 2002cx-type, a peculiar sub-class of SN Ia. SN 2008ha had a brightness period of only 10 days, which is significantly shorter than that of other SN 2002cx-like objects (~15 days) or normal Ia supernovas (~20 days). From the peak luminosity and the brightness time it was estimated that SN 2008ha generated (3.0 ± 0.9) × 10−3 M⊙ of 56Ni, had a kinetic energy of 2 × 1048 ergs, and ejected 0.15 M⊙ of material.
Discovery
SN 2008ha was co-discovered by Caroline Moore, Jack Newton and Tim Puckett, members of the Puckett Observatory World Supernova Search. The 14-year-old Moore received considerable attention for her role in the discovery because at the time, she was the youngest person to have discovered a supernova. Moore is currently majoring in Political Science at Georgetown University. She was born and raised in Warwick, NY. To make the discovery she spent almost 7 months sifting through Puckett Observatory Supernova Search (POSS) images. Incredibly, she made her discovery using a relatively small, low-powered telescope.
In July 2009, she discovered her second supernova named SN2009he.
From a young age, Moore had practiced amateur astronomy with the assistance of her father. Early on she was very familiar with telescopes because of her scientific upbringing. She even converted her tree house into an observatory. Following her discovery, Moore was awarded $384,000 in grants from the University of California-Berkeley.
Moore has since traveled across the US to promote the field of astronomy to children and discuss her discovery at astronomy events. She also used fundraising to construct an observatory in her former elementary school.
Moore's discovery has allowed scientists to better understand the nature of the death of stars. It challenged the assumptions of the then-current model for the explosive nature of supernovae, because SN 2008ha's low energy and low luminosity could not be explained by the scientific models of the time, as the event could not be distinctly classified as either a type Ia or a type II supernova. Despite being one of the weakest supernovae ever observed, it was at one point up to 1000 times brighter than the Sun, and it is approximately 70 million light years away from Earth. Some scientists suggested that the supernova may have failed to explode completely, which is why the light emitted from it was so weak compared to other supernovae.
In reference to Moore, one astrophysicist remarked that "the youngest person to ever discover a supernova found one of the most peculiar and interesting supernovae ever." President of the United States Barack Obama and First Lady Michelle Obama met with Moore after her discovery.
See also
Zombie star
References
External links
Light curves and spectra on the Open Supernova Catalog
SN 2008ha on Flickr
Supernovae
Pegasus (constellation) | SN 2008ha | [
"Chemistry",
"Astronomy"
] | 864 | [
"Supernovae",
"Pegasus (constellation)",
"Astronomical events",
"Constellations",
"Explosions"
] |
23,234,121 | https://en.wikipedia.org/wiki/Constructible%20set%20%28topology%29 | In topology, constructible sets are a class of subsets of a topological space that have a relatively "simple" structure.
They are used particularly in algebraic geometry and related fields. A key result known as Chevalley's theorem
in algebraic geometry shows that the image of a constructible set is constructible for an important class of mappings
(more specifically morphisms) of algebraic varieties (or more generally schemes).
In addition, a large number of "local" geometric properties of schemes, morphisms and sheaves are (locally) constructible.
Constructible sets also feature in the definition of various types of constructible sheaves in algebraic geometry
and intersection cohomology.
Definitions
A simple definition, adequate in many situations, is that a constructible set is a finite union of locally closed sets. (A set is locally closed if it is the intersection of an open set and closed set.)
However, a modification and another slightly weaker definition are needed to have definitions that behave better with "large" spaces:
Definitions: A subset of a topological space is called retrocompact if is compact for every compact open subset . A subset of is constructible if it is a finite union of subsets of the form where both and are open and retrocompact subsets of .
A subset is locally constructible if there is a cover of consisting of open subsets with the property that each is a constructible subset of .
Equivalently the constructible subsets of a topological space are the smallest collection of subsets of that (i) contains all open retrocompact subsets and (ii) contains all complements and finite unions (and hence also finite intersections) of sets in it. In other words, constructible sets are precisely the Boolean algebra generated by retrocompact open subsets.
In a locally noetherian topological space, all subsets are retrocompact, and so for such spaces the simplified definition given first above is equivalent to the more elaborate one. Most of the commonly met schemes in algebraic geometry (including all algebraic varieties) are locally Noetherian, but there are important constructions that lead to more general schemes.
In any (not necessarily noetherian) topological space, every constructible set contains a dense open subset of its closure.
Terminology: The definition given here is the one used by the first edition of EGA and the Stacks Project. In the second edition of EGA constructible sets (according to the definition above) are called "globally constructible" while the word "constructible" is reserved for what are called locally constructible above.
Chevalley's theorem
A major reason for the importance of constructible sets in algebraic geometry is that the image of a (locally) constructible set is also (locally) constructible for a large class of maps (or "morphisms"). The key result is:
Chevalley's theorem. If is a finitely presented morphism of schemes and is a locally constructible subset, then is also locally constructible in .
In particular, the image of an algebraic variety need not be a variety, but is (under the assumptions) always a constructible set. For example, the map that sends to has image the set , which is not a variety, but is constructible.
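For concreteness, a standard textbook instance of this phenomenon (offered here as an illustration; it need not be the example originally intended above) is the morphism $\mathbb{A}^2 \to \mathbb{A}^2$, $(x,y) \mapsto (x, xy)$: its image is $\{x \neq 0\} \cup \{(0,0)\}$, which is neither open, closed, nor locally closed, but is constructible as the union of an open set and a closed point.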
Chevalley's theorem in the generality stated above would fail if the simplified definition of constructible sets (without restricting to retrocompact open sets in the definition) were used.
Constructible properties
A large number of "local" properties of morphisms of schemes and quasicoherent sheaves on schemes hold true over a locally constructible subset. EGA IV § 9 covers a large number of such properties. Below are some examples (where all references point to EGA IV):
If is a finitely presented morphism of schemes and is a sequence of finitely presented quasi-coherent -modules, then the set of for which is exact is locally constructible. (Proposition (9.4.4))
If is a finitely presented morphism of schemes and is a finitely presented quasi-coherent -module, then the set of for which is locally free is locally constructible. (Proposition (9.4.7))
If is a finitely presented morphism of schemes and is a locally constructible subset, then the set of for which is closed (or open) in is locally constructible. (Corollary (9.5.4))
Let be a scheme and a morphism of -schemes. Consider the set of for which the induced morphism of fibres over has some property . Then is locally constructible if is any of the following properties: surjective, proper, finite, immersion, closed immersion, open immersion, isomorphism. (Proposition (9.6.1))
Let be a finitely presented morphism of schemes and consider the set of for which the fibre has a property . Then is locally constructible if is any of the following properties: geometrically irreducible, geometrically connected, geometrically reduced. (Theorem (9.7.7))
Let be a locally finitely presented morphism of schemes and consider the set of for which the fibre has a property . Then is locally constructible if is any of the following properties: geometrically regular, geometrically normal, geometrically reduced. (Proposition (9.9.4))
One important role that these constructibility results have is that in most cases assuming the morphisms in questions are also
flat it follows that the properties in question in fact hold in an open subset. A substantial number of such results is included in EGA IV § 12.
See also
Constructible topology
Constructible sheaf
Notes
References
Allouche, Jean Paul. Note on the constructible sets of a topological space.
Borel, Armand. Linear algebraic groups.
External links
https://stacks.math.columbia.edu/tag/04ZC Topological definition of (local) constructibility
https://stacks.math.columbia.edu/tag/054H Constructibility properties of morphisms of schemes (incl. Chevalley's theorem)
Topology
Algebraic geometry | Constructible set (topology) | [
"Physics",
"Mathematics"
] | 1,276 | [
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Algebraic geometry",
"Spacetime"
] |
23,239,104 | https://en.wikipedia.org/wiki/Polymorphs%20of%20silicon%20carbide | Many compound materials exhibit polymorphism, that is, they can exist in different structures called polymorphs. Silicon carbide (SiC) is unique in this regard, as more than 250 polymorphs of silicon carbide had been identified by 2006, some of them having a lattice constant as long as 301.5 nm, about one thousand times the usual SiC lattice spacings.
The polymorphs of SiC include various amorphous phases observed in thin films and fibers, as well as a large family of similar crystalline structures called polytypes. They are variations of the same chemical compound that are identical in two dimensions and differ in the third. Thus, they can be viewed as layers stacked in a certain sequence. The atoms of those layers can be arranged in three configurations, A, B or C, to achieve closest packing. The stacking sequence of those configurations defines the crystal structure, where the unit cell is the shortest periodically repeated sequence of the stacking sequence. This description is not unique to SiC, but also applies to other binary tetrahedral materials, such as zinc oxide and cadmium sulfide.
Categorizing the polytypes
A shorthand has been developed to catalogue the vast number of possible polytype crystal structures: Let us define three SiC bilayer structures (that is 3 atoms with two bonds in between in the illustrations below) and label them as A, B and C. Elements A and B do not change the orientation of the bilayer (except for possible rotation by 120°, which does not change the lattice and is ignored hereafter); the only difference between A and B is shift of the lattice. Element C, however, twists the lattice by 60°.
Using those A,B,C elements, we can construct any SiC polytype. Shown above are examples of the hexagonal polytypes 2H, 4H and 6H as they would be written in the Ramsdell notation where the number indicates the layer and the letter indicates the Bravais lattice. The 2H-SiC structure is equivalent to that of wurtzite and is composed of only elements A and B stacked as ABABAB. The 4H-SiC unit cell is two times longer, and the second half is twisted compared to 2H-SiC, resulting in ABCB stacking. The 6H-SiC cell is three times longer than that of 2H, and the stacking sequence is ABCACB. The cubic 3C-SiC, also called β-SiC, has ABC stacking.
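A small Python sketch makes the notation concrete (the lookup table simply restates the stacking sequences above; the names are illustrative):
# Ramsdell symbol -> one period of the stacking sequence, as described above
POLYTYPES = {
    "2H": "AB",      # wurtzite-like
    "3C": "ABC",     # the cubic polytype, also called beta-SiC
    "4H": "ABCB",
    "6H": "ABCACB",
}

def ramsdell_period(symbol):
    # the leading number of a Ramsdell symbol is the bilayer period
    return int(symbol[:-1])

for symbol, stacking in POLYTYPES.items():
    # the period encoded in the symbol matches the stacking length
    assert ramsdell_period(symbol) == len(stacking)
    print(symbol, stacking)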
Physical properties
The different polytypes have widely ranging physical properties. 3C-SiC has the highest electron mobility and saturation velocity because of reduced phonon scattering resulting from the higher symmetry. The band gaps differ widely among the polytypes ranging from 2.3 eV for 3C-SiC to 3 eV in 6H SiC to 3.3 eV for 2H-SiC. In general, the greater the wurtzite component, the larger the band gap. Among the SiC polytypes, 6H is most easily prepared and best studied, while the 3C and 4H polytypes are attracting more attention for their superior electronic properties. The polytypism of SiC makes it nontrivial to grow single-phase material, but it also offers some potential advantages - if crystal growth methods can be developed sufficiently then heterojunctions of different SiC polytypes can be prepared and applied in electronic devices.
Summary of polytypes
All symbols in the SiC structures have a specific meaning: The number 3 in 3C-SiC refers to the three-bilayer periodicity of the stacking (ABC) and the letter C denotes the cubic symmetry of the crystal. 3C-SiC is the only possible cubic polytype. The wurtzite ABAB... stacking sequence is denoted as 2H-SiC, indicating its two-bilayer stacking periodicity and hexagonal symmetry. This periodicity doubles and triples in 4H- and 6H-SiC polytypes. The family of rhombohedral polytypes is labeled R, for example, 15R-SiC.
See also
Silicon carbide fibers
References
External links
A Brief History of Silicon Carbide Dr J F Kelly, University of London
Material Safety Data Sheet for Silicon Carbide
Carbides
Polymorphism (materials science) | Polymorphs of silicon carbide | [
"Materials_science",
"Engineering"
] | 895 | [
"Polymorphism (materials science)",
"Materials science"
] |
23,240,037 | https://en.wikipedia.org/wiki/Antithetic%20variates | In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Considering that the error in the simulated signal (using Monte Carlo methods) has a one-over square root convergence, a very large number of sample paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.
Underlying principle
The antithetic variates technique consists, for every sample path obtained, in taking its antithetic path — that is, given a path $\{\varepsilon_1,\dots,\varepsilon_M\}$, to also take $\{-\varepsilon_1,\dots,-\varepsilon_M\}$. The advantage of this technique is twofold: it reduces the number of normal samples to be taken to generate N paths, and it reduces the variance of the sample paths, improving the precision.
Suppose that we would like to estimate
$$\theta = \mathrm{E}[h(X)] = \mathrm{E}[Y].$$
For that we have generated two samples $Y_1$ and $Y_2$.
An unbiased estimate of $\theta$ is given by
$$\hat\theta = \frac{Y_1 + Y_2}{2}.$$
And
$$\mathrm{Var}(\hat\theta) = \frac{\mathrm{Var}(Y_1) + \mathrm{Var}(Y_2) + 2\,\mathrm{Cov}(Y_1, Y_2)}{4},$$
so variance is reduced if $\mathrm{Cov}(Y_1, Y_2)$ is negative.
Example 1
If the law of the variable X follows a uniform distribution along [0, 1], the first sample will be $u_1, \ldots, u_n$, where, for any given i, $u_i$ is obtained from U(0, 1). The second sample is built from $u'_1, \ldots, u'_n$, where, for any given i: $u'_i = 1 - u_i$. If the set $u_1, \ldots, u_n$ is uniform along [0, 1], so is $u'_1, \ldots, u'_n$. Furthermore, their covariance is negative, allowing for initial variance reduction.
Example 2: integral calculation
We would like to estimate
$$I = \int_0^1 \frac{1}{1+x}\,\mathrm{d}x.$$
The exact result is $\ln 2 \approx 0.6931$. This integral can be seen as the expected value of $f(U)$, where
$$f(x) = \frac{1}{1+x}$$
and U follows a uniform distribution [0, 1].
The following table compares the classical Monte Carlo estimate (sample size: 2n, where n = 1500) to the antithetic variates estimate (sample size: n, completed with the transformed sample 1 − ui):
                        Estimate    standard error
Classical Estimate      0.69365     0.00255
Antithetic Variates     0.69399     0.00063
The use of the antithetic variates method to estimate the result shows an important variance reduction.
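A short Python sketch reproduces this experiment (the seed and the use of NumPy are incidental choices, not part of the original study):
import numpy as np

rng = np.random.default_rng(0)
n = 1500
f = lambda x: 1.0 / (1.0 + x)

# classical Monte Carlo: 2n independent uniform samples
u = rng.random(2 * n)
classical = f(u).mean()

# antithetic variates: n uniform samples, each paired with its reflection 1 - u
u = rng.random(n)
antithetic = np.mean(0.5 * (f(u) + f(1.0 - u)))

print(classical, antithetic, np.log(2.0))  # both estimates target ln 2
Because f is monotonic, f(u) and f(1 − u) are negatively correlated, which is what drives the reduction in the standard error seen in the table.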
See also
Control variates
References
Variance reduction
Computational statistics
Monte Carlo methods | Antithetic variates | [
"Physics",
"Mathematics"
] | 486 | [
"Monte Carlo methods",
"Computational statistics",
"Computational mathematics",
"Computational physics"
] |
40,164,004 | https://en.wikipedia.org/wiki/Todd%20Weather%20Folios | The Todd Weather Folios are a collection of continental Australian synoptic charts that were published from 1879 to 1909.
The charts were created by Sir Charles Todd's office at the Adelaide Observatory. In addition to the charts, the folios include clippings of newspaper articles and telegraphic and handwritten information about the weather. The area covered is mainly the east and south-east of Australia, with occasional reference to other parts of Australasia and the world.
The maps are bound into approximately six-month folios, 63 of which cover the entire period. There are approximately 10,000 continental weather maps along with 750 rainfall maps for South Australia, 10 million printed words of news text, and innumerable handwritten observations and correspondences about the weather.
The folios are an earlier part of the National Archives of Australia listed collection series number D1384.
The History of the Folios
With the advent of the telegraph it was possible to simultaneously collect data, such as surface temperature and sea-level pressure, to draw synoptic weather charts. With Charles Todd's appointment as Postmaster General to the Colony, he trained not only his telegraph operators, but also his postmasters as weather observers. These observers provided valuable data points that, in combination with telegraphed observations from the other colonies (including New Zealand), showed the development and progress of weather activity across a large part of the Southern Hemisphere. Todd's best known feat was his construction management of the Overland Telegraph from Adelaide to Port Darwin. This line of communication was critical to his capacity to create continent-wide synoptic charts as the telegraphic observations from the Outback enabled the connection of data points on the east coast of Australia with similar data points on the west and southern coasts. These continent-scale isobaric lines allowed Todd and his staff to draw synoptic charts that in the early 1880s had a greater breadth than any (known) synoptic charts drawn elsewhere in the world.
The folios grew out of Todd's desire to inform the colonists of South Australia of the immense size of weather systems and that in southern Australia, they generally progressed from west to east and not from east to west as commonly assumed by the early colonists. To accomplish this, Todd displayed daily the last 6 synoptic charts for public viewing then bound and stored them in the folios.
The Todd weather folios consist not only of synoptic charts, but also include clippings from newspapers detailing weather statistics and events for all the eastern colonies of Australia. Newspapers from Brisbane, Sydney and Melbourne were collected as they came off the inter-colonial trains and were processed for pasting up next to the corresponding synoptic chart.
The collection from 1879 includes the earliest use of isobaric maps. It then develops through to the first maps posted for public consumption in the mid 1880s, and finishes with the ‘production maps’ of pre-Federalised weather observations and forecasting. The maps are accompanied by other information including the first in-house forecasts (and later published forecasts), early rainfall maps, weather observations from the logs of sailing ships, and telegrams and letters about significant weather events.
Digitising the Collection
As the original documents are in a fragile state and not easily accessible, a team of volunteers of the Australian Meteorological Society (AMETA) hosted by the Australian Bureau of Meteorology, has digitally
imaged the full 31-year run of Todd's charts and accompanying text. The digital images have been handed to the National Archives of Australia for inclusion in the Australian Digital Heritage collection. Access to the 26,000 high quality images is also available on-line.
The volunteer group has also digitised data from the Todd folios which have been forwarded for inclusion in the International Surface Pressure Databank (ISPD). This has been done as part of Project ACRE (Atmospheric Circulation Reconstructions over the Earth) of the Climate Monitoring and Attribution Group, Meteorology Office Hadley Centre, UK. ACRE exists to gather data to fuel a weather ‘backcasting’ model extending back to 1750. The Todd folios contain data of value to this initiative, data that is no longer available through other records. In many cases, the original documents containing the data recorded by weather observers are no longer in existence or are irretrievably lost, which gives significance to their recording in Todd's synoptic charts and ancillary documents.
Three key concerns have driven the project; they are to make this historical archive discoverable, accessible, and future-proofed. In an electronic format on the internet, discoverability and accessibility are greatly enhanced. With the National Archives agreement to store the images, future-proofing the electronic images is assured.
References
External links
Todd Weather Folios
AMETA The Australian Meteorological Association Inc.
International Surface Pressure Databank(ISPD)
Historical climatology
Climate of Australia
Climate and weather statistics
Meteorological data and networks | Todd Weather Folios | [
"Physics"
] | 1,009 | [
"Weather",
"Physical phenomena",
"Climate and weather statistics"
] |
40,167,806 | https://en.wikipedia.org/wiki/Kilonova | A kilonova (also called a macronova) is a transient astronomical event that occurs in a compact binary system when two neutron stars or a neutron star and a black hole merge. These mergers are thought to produce gamma-ray bursts and emit bright electromagnetic radiation, called "kilonovae", due to the radioactive decay of heavy r-process nuclei that are produced and ejected fairly isotropically during the merger process.
The measured high sphericity of the kilonova AT2017gfo at early epochs was deduced from the blackbody nature of its spectrum.
History
The existence of thermal transient events from neutron star mergers was first introduced by Li & Paczyński in 1998. The radioactive glow arising from the merger ejecta was originally called a mini-supernova, as it is far dimmer than a typical supernova, the self-detonation of a massive star. The term kilonova was later introduced by Metzger et al. in 2010 to characterize the peak brightness, which they showed reaches 1000 times that of a classical nova.
The first candidate kilonova to be found was detected on June 3, 2013 as short gamma-ray burst GRB 130603B by instruments on board the Swift Gamma-Ray Burst Explorer and KONUS/WIND spacecraft, and then imaged by the Hubble Space Telescope 9 and 30 days later.
On October 16, 2017, the LIGO and Virgo collaborations announced the first detection of a gravitational wave (GW170817) which would correspond with electromagnetic observations, and demonstrated that the source was a binary neutron star merger. This merger was followed by a short GRB (GRB 170817A) and a longer lasting transient visible for weeks in the optical and near-infrared electromagnetic spectrum (AT 2017gfo), located only 140 million light-years away in the nearby galaxy NGC 4993. Observations of AT 2017gfo confirmed that it was the first conclusive observation of a kilonova. Spectral modelling of AT2017gfo identified the r-process elements strontium and yttrium, which conclusively ties the formation of heavy elements to neutron-star mergers. Further modelling showed the ejected fireball of heavy elements was highly spherical in early epochs. Some researchers have suggested that "thanks to this work, astronomers could use kilonovae as a standard candle to measure cosmic expansion. Since kilonovae explosions are spherical, astronomers could compare the apparent size of a supernova explosion with its actual size as seen by the gas motion, and thus measure the rate of cosmic expansion at different distances."
Theory
The inspiral and merging of two compact objects are a strong source of gravitational waves (GW). The basic model for thermal transients from neutron star mergers was introduced by Li-Xin Li and Bohdan Paczyński in 1998. In their work, they suggested that the radioactive ejecta from a neutron star merger is a source for powering thermal transient emission, later dubbed kilonova.
Observations
A first observational suggestion of a kilonova came in 2008 following the gamma-ray burst GRB 080503, where a faint object appeared in optical light after one day and rapidly faded. However, other factors such as the lack of a galaxy and the detection of X-rays were not in agreement with the hypothesis of a kilonova. Another kilonova was suggested in 2013, in association with the short-duration gamma-ray burst GRB 130603B, where the faint infrared emission from the distant kilonova was detected using the Hubble Space Telescope.
In October 2017, astronomers reported that observations of AT 2017gfo showed that it was the first secure case of a kilonova following a merger of two neutron stars.
In October 2018, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be analogous to the historic GW170817. The similarities between the two events, in terms of gamma ray, optical and x-ray emissions, as well as in the nature of the associated host galaxies, are considered "striking", and this remarkable resemblance suggests the two separate and independent events may both be the result of the merger of neutron stars, and both may be a hitherto-unknown class of kilonova transients. Kilonova events, therefore, may be more diverse and common in the universe than previously understood, according to the researchers. In retrospect, GRB 160821B, a gamma-ray burst detected in August 2016, is now believed to also have been due to a kilonova, given the resemblance of its data to AT 2017gfo.
A kilonova was also thought to have caused the long gamma-ray burst GRB 211211A, discovered in December 2021 by Swift’s Burst Alert Telescope (BAT) and the Fermi Gamma-ray Burst Monitor (GBM). These discoveries challenge the formerly prevailing theory that long GRBs exclusively come from supernovae, the end-of-life explosions of massive stars. GRB 211211A lasted 51 s; GRB 191019A (2019) and GRB 230307A (2023), with durations of around 64 s and 35 s respectively, have also been argued to belong to this class of long GRBs from neutron star mergers.
In 2023, GRB 230307A was observed and associated with tellurium and lanthanides.
See also
Hypernova
Nova
R-process
Supernova
Supernova impostor
References
Astronomical events
Neutron stars
Star types
Stellar phenomena | Kilonova | [
"Physics",
"Astronomy"
] | 1,152 | [
"Physical phenomena",
"Astronomical events",
"Astronomical classification systems",
"Stellar phenomena",
"Star types"
] |
40,170,171 | https://en.wikipedia.org/wiki/The%20Path%20to%20Degrowth%20in%20Overdeveloped%20Countries | "The Path to Degrowth in Overdeveloped Countries", written by Erik Assadourian, is the second chapter of the Worldwatch Institute's State of the World (2012), available for free online.
In his chapter of the report, Assadourian defines degrowth as an "essential and urgent" economic strategy to pursue in countries entrenched in overdevelopment (such as the United States) in order for those countries to be truly sustainable and adapt to "The rapidly warming Earth and the collapse of ecosystem services." Furthermore, he hopes to dispel "the myth that perpetual pursuit of growth is good for economies or the societies of which they are a part" for the well-being of the planet, of underdeveloped populations, and of the sick, stressed, and overweight populations of overdeveloped countries. Assadourian argues via the principle of plenitude that degrowth will inevitably occur whether we want it to or not because—on a planet of finite resources—economies and populations cannot grow infinitely, and overdeveloped countries are still pursuing more economic growth and overconsuming resources.
Assadourian outlines four policies overdeveloped nations could employ to sufficiently facilitate a planned and controlled contraction of the economy so as to get back in line with planetary boundaries. Each of these, in unison, will eventually foster the creation of a steady-state economy that is in balance with Earth's limits:
Reduce overall consumption by overconsumers
Distribute tax burdens more equitably
Share work hours better
Cultivate a plenitude economy: "informalize" certain sectors of the economy
Assadourian also wrote a two-page policy brief on the chapter highlighting the key messages of, the problem regarding, and points to keep in mind moving forward on our path to degrowth.
See also
Worldwatch Institute
Prosperity Without Growth
Degrowth
Post growth
Sustainable living
Steady state economy
Ecological economics
Informal economy
References
External links
Full text of "The Path to Degrowth in Overdeveloped Countries"
2012 in the environment
Degrowth
Sustainability books
Environmental non-fiction books | The Path to Degrowth in Overdeveloped Countries | [
"Environmental_science"
] | 431 | [
"Degrowth",
"Environmental ethics"
] |
34,902,119 | https://en.wikipedia.org/wiki/Laser%20flash%20analysis | The laser flash analysis or laser flash method is used to measure thermal diffusivity of a variety of different materials. An energy pulse heats one side of a plane-parallel sample and the resulting time dependent temperature rise on the backside due to the energy input is detected. The higher the thermal diffusivity of the sample, the faster the energy reaches the backside. A laser flash apparatus (LFA) to measure thermal diffusivity over a broad temperature range, is shown on the right hand side.
In a one-dimensional, adiabatic case the thermal diffusivity is calculated from this temperature rise as follows:
$$\alpha = 0.1388 \cdot \frac{L^2}{t_{1/2}}$$
Where
$\alpha$ is the thermal diffusivity in cm²/s
$L$ is the thickness of the sample in cm
$t_{1/2}$ is the time to the half maximum in s
As the coefficient 0.1388 is dimensionless, the formula also works for $L$ and $t_{1/2}$ in their corresponding SI units.
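As a minimal illustration of the relation above (the function and variable names are ours, not from the original method description):
def thermal_diffusivity(thickness, t_half):
    # alpha = 0.1388 * L^2 / t_(1/2); with thickness in cm and t_half in s
    # the result is in cm^2/s, and any consistent SI units work as well
    # since the coefficient is dimensionless
    return 0.1388 * thickness ** 2 / t_half

# e.g. a 0.2 cm thick sample whose rear face reaches half of its maximum
# temperature rise 0.05 s after the pulse:
print(thermal_diffusivity(0.2, 0.05))  # ~0.111 cm^2/s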
Measurement principle
The laser flash method was developed by Parker et al. in 1961.
In a vertical setup, a light source (e.g. laser, flashlamp) heats the sample from the bottom side and a detector on top detects the time-dependent temperature rise. Because the thermal diffusivity is strongly temperature-dependent, the sample can be placed in a furnace held at constant temperature in order to measure it at different temperatures.
Perfect conditions are
homogeneous material,
a homogeneous energy input on the front side
a short, time-dependent pulse – in the form of a Dirac delta function
Several improvements on the models have been made. In 1963, Cowan took radiation and convection on the surface into account.
In the same year, Cape and Lehman considered transient heat transfer, finite pulse effects, and heat losses.
Blumm and Opfermann improved the Cape-Lehman-Model with high order solutions of radial transient heat transfer and facial heat loss, non-linear regression routine in case of high heat losses and an advanced, patented pulse length correction.
See also
Thermal conductivity
Thermal conductivity measurement
Thermal diffusivity
Thermal physics
References
Materials testing
Heat transfer
Heat conduction | Laser flash analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 412 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Materials science",
"Materials testing",
"Thermodynamics",
"Heat conduction"
] |
34,904,390 | https://en.wikipedia.org/wiki/Acoustic%20phase%20conjugation | Acoustic phase conjugation is a set of techniques meant to perform phase conjugation on acoustic waves.
Techniques
Acoustic phase conjugation can appear in a solid when the sound velocity is modulated by an electromagnetic field. The generation of the conjugate wave can be seen as the decay of a photon into two phonons. The two phonons have opposite wave vectors k and −k (they will propagate in opposite directions) and a frequency half that of the photon.
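Both properties follow from conservation of energy and momentum in the photon decay. Since the electromagnetic pump wavelength is much longer than the acoustic wavelength, the photon momentum is negligible on the phonon scale, and (in our notation, with $\omega_{\text{pump}}$ the pump frequency):

$$\hbar\omega_{\text{pump}} = \hbar\omega_1 + \hbar\omega_2, \qquad \mathbf{k}_1 + \mathbf{k}_2 \approx \mathbf{0},$$

so that $\mathbf{k}_2 = -\mathbf{k}_1$ and, in the degenerate case, $\omega_1 = \omega_2 = \omega_{\text{pump}}/2$.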
Parametric pumping techniques can be performed in several media:
In piezoelectric crystals, a nonlinear piezoelectric effect will produce a modulation of fractions of a percent.
In magnetic crystals, a modulation of tens of percent can be reached through the magneto-acoustic coupling, which can be improved by combining the magnetostriction and spin reorientation transition effects or using the magnetoacoustic resonance. A "supercritical" or "giant" amplification, up to 80 dB, can be obtained beyond the threshold of instability of phonons in magnetoacoustic media.
In semiconductors, parametric interaction between phonons and plasmons can be generated by an alternating electric field or a modulated optical pump.
Applications
The auto-compensation of phase distortion and auto-focusing properties of the conjugate wave are used in non-destructive testing techniques. In medical therapy, they can be combined with giant amplification for tumor destruction, like lithotripsy and hyperthermia therapy.
Acoustic imaging can be improved by applying selective phase conjugation on some harmonics of the incident wave. This narrows the focal distribution of those harmonics and reduces the sidelobes and reverberation noise, thus increasing the image resolution.
Selective acoustic phase conjugation can be used to detect isoechogenic objects whose nonlinear parameters differ from that of the medium. The linear acoustic properties of such objects are close to that of the medium which make them invisible with traditional echography techniques.
Another field of application is nonlinear ultrasonic velocimetry, one order of magnitude more precise than with the usual Doppler effect. Phase conjugate velocimeters have proved to correctly measure the flow velocity in the case of laminar flows in tubes, vortex flows under rotating disks and immersed jets in water.
References
See also
Phase conjugation
Optical phase conjugation
Time Reversal Signal Processing
Wave mechanics | Acoustic phase conjugation | [
"Physics"
] | 501 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
31,794,032 | https://en.wikipedia.org/wiki/Atlantic%20Equatorial%20mode | The Atlantic Equatorial Mode or Atlantic Niño is a quasiperiodic interannual climate pattern of the equatorial Atlantic Ocean. It is the dominant mode of year-to-year variability that results in alternating warming and cooling episodes of sea surface temperatures accompanied by changes in atmospheric circulation. The term Atlantic Niño comes from its close similarity with the El Niño-Southern Oscillation (ENSO) that dominates the tropical Pacific basin. For this reason, the Atlantic Niño is often called the little brother of El Niño. The Atlantic Niño usually appears in northern summer, and is not the same as the Atlantic Meridional (Interhemispheric) Mode that consists of a north-south dipole across the equator and operates more during northern spring. The equatorial warming and cooling events associated with the Atlantic Niño are known to be strongly related to rainfall variability over the surrounding continents, especially in West African countries bordering the Gulf of Guinea. Therefore, understanding of the Atlantic Niño (or lack thereof) has important implications for climate prediction in those regions. Although the Atlantic Niño is an intrinsic mode to the equatorial Atlantic, there may be a tenuous causal relationship between ENSO and the Atlantic Niño in some circumstances.
Background and Structure
Global tropical variability is dominated by ENSO in the equatorial Pacific. This phenomenon results from air-sea interaction, producing a coupled atmosphere-ocean system that oscillates with periods on the order of three to five years. However, the physical basis for this oscillation is not limited strictly to the Pacific basin, and indeed, a very similar mode of variability exists in the equatorial Atlantic, albeit on a smaller scale.
The Atlantic Niño is characterized by a sea surface temperature anomaly centered on the equator between 0° and 30°W. Unlike its Pacific counterpart, the Atlantic Niño does not have sea surface temperature anomalies that switch sign from east to west, but rather a single basin-wide anomaly. Additionally, the amplitude of the Atlantic Niño tends to be about half that of El Niño. Not surprisingly, this sea surface temperature anomaly is closely related to a change in the climatological trade winds. A warm anomaly is associated with relaxed trade winds across a large swath of the equatorial Atlantic basin, while a cool anomaly is associated with enhanced easterly wind stress in the same region. These trade wind fluctuations can be understood as the weakening and strengthening of the Atlantic Walker circulation. This is strikingly similar to the wind stress anomalies seen in the Pacific during El Niño (or La Niña) events, although centered farther west in the Atlantic basin. A major difference between El Niño and the Atlantic Niño is that the sea surface temperature anomalies are strictly constrained to the equator in the Atlantic case, while greater meridional extent is observed in the Pacific.
While the spatial characteristics of the mature Atlantic Niño are quite similar to its Pacific counterpart, its temporal variability is somewhat different. The Atlantic Niño varies on interannual timescales like El Niño but also shows more variance on seasonal and annual timescales. That is to say, the Atlantic Niño explains a smaller portion of the total variance in the equatorial Atlantic than does El Niño in the equatorial Pacific. This is because seasonal climate events are superimposed on interannual variability. The Atlantic Niño typically reaches a mature phase in boreal summer (though there are exceptions), while El Niño matures in boreal winter. The development of the Atlantic Niño tends to be marked by emerging stationary patterns centered mid-basin. This is in stark contrast to El Niño, which can often develop as warm sea surface temperature anomalies that migrate west from the coast of South America or migrate east from the central Pacific.
Impact on African Climate
Warming or cooling of the equatorial oceans has understandable consequences for atmospheric climate. The equatorial oceans comprise a major portion of the overall heat budget and, therefore, alter convective regimes near the equator. In the case of the Pacific El Niño, enhanced convection over the central Pacific and reduced convection over the Maritime Continent fundamentally change climate not just in the tropics, but globally. Since the Atlantic Niño is physically similar to ENSO, we might expect climate impacts from it as well. However, given its reduced size both spatially (the Atlantic basin is much smaller than the Pacific basin) and in magnitude, the climate impacts of the Atlantic Niño are best seen in the tropical and subtropical regions nearest to the equatorial Atlantic.
The impact of the Atlantic Niño on African climate can be best understood by assessing how above normal equatorial sea surface temperatures impact the seasonal migration of the Intertropical Convergence Zone (ITCZ). Warm equatorial sea surface temperatures lower surface air pressure which induces more equatorward flow than normal. This, in turn, prevents the ITCZ from migrating as far north as it would under normal conditions during the summer, reducing rainfall in the semi-arid Sahel to the north, and increasing rainfall in regions along the Gulf of Guinea. Increased rainfall relative to normal is typically associated with negative temperature anomalies over these tropical land areas. Some evidence suggests that a warming trend in Indian Ocean equatorial sea surface temperatures contributes to long-term drying of the Sahel, which is exacerbated by periodic warming of the equatorial Atlantic related to the Atlantic Niño. In fact, the ability to predict the Atlantic Niño is a major research question given its impact on seasonal climate.
Relationship Between El Niño and the Atlantic Niño
Global tropical variability is largely dominated by the Pacific El Niño, leaving as a valid question whether the Atlantic Niño might be a remote impact of El Niño. There is no apparent contemporaneous relationship between the two, but such a statement is not necessarily useful considering that El Niño peaks in winter while the Atlantic Niño peaks in summer. Lagged analyses reveal that the most prominent El Niño impact on the tropical Atlantic the following spring and summer is a warm sea surface temperature anomaly centered north of the Atlantic Niño region. This again appears to suggest that there is no causal relationship. However, more rigorous analysis suggests that the competition between cooling that results from increased wind stress and warming that results from increased air temperature, both of which are remote impacts of El Niño on the Atlantic, accounts for a tenuous relationship. When one of these processes dominates over the other, an Atlantic Niño (warm or cool) event could ensue. This is of major interest considering the challenge in seasonal prediction of the Atlantic Niño.
Spatiotemporal Diversity of Atlantic Niño
Not all Atlantic Niño events are alike. Some appear earlier than others or persist longer. These variations during the onset and dissipation phases are well captured by the four most recurring Atlantic Niño flavors or varieties (i.e., early-terminating, persistent, early-onset and late-onset varieties). Largely consistent with the differences in the timings of onset and dissipation, these four varieties display remarkable differences in rainfall response over West Africa and South America. In particular, the persistent and late-onset varieties are characterized by strong equatorial Atlantic sea surface temperature anomalies that remain until the end of the year. Thus, they are linked to an extended period of increased rainfall over the West Africa sub-Sahel region (July–October). In comparison, the early-terminating and early-onset varieties are linked to a limited period of increased rainfall over the West Africa sub-Sahel region (July–August). Most of the varieties are subject to onset mechanisms that involve preconditioning in boreal spring by either the Atlantic Meridional Mode (early-terminating variety) or the Pacific El Niño (persistent and early-onset varieties), while for the late-onset variety there is no clear source of external forcing.
See also
Climate cycle
Teleconnection
Benguela Niño
Equatorial Counter Current
Western Hemisphere Warm Pool
Notes
References
External links
El Niño, South American Monsoon, and Atlantic Niño links as detected by a decade of QuikSCAT, TRMM and TOPEX/Jason Observations
From El Nino to Atlantic Nino: pathways as seen in the QuikScat winds
Atlantic Ocean
Physical oceanography
Tropical meteorology
Regional climate effects
Atmospheric dynamics | Atlantic Equatorial mode | [
"Physics",
"Chemistry"
] | 1,631 | [
"Atmospheric dynamics",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Fluid dynamics"
] |
31,794,459 | https://en.wikipedia.org/wiki/Simple%20Model%20of%20the%20Atmospheric%20Radiative%20Transfer%20of%20Sunshine | The Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS) is a computer program designed to evaluate the surface solar irradiance components in the shortwave spectrum (spectral range 280 to 4000 nm) under cloudless conditions. The program, written in FORTRAN, relies on simplifications of the equation of radiative transfer to allow extremely fast calculations of the surface irradiance. The irradiance components can be incident on a horizontal, a fixed-tilt or a 2-axis tracking surface. SMARTS can be used for example to evaluate the energy production of solar panels under variable atmospheric conditions. Many other applications are possible.
History
The first versions of SMARTS were developed by Dr. Gueymard while he was at the Florida Solar Energy Center. The model employed a structure similar to the earlier SPCTRAL2 model, still offered by the National Renewable Energy Laboratory (NREL), but with finer spectral resolution, as well as updated extraterrestrial spectrum and transmittance functions. The latter consisted mostly of parameterizations of results obtained with MODTRAN.
The latest versions (2.9.2 and 2.9.5) of SMARTS are hosted by NREL. The program can be freely downloaded but is subject to a License Agreement, which limits its use to civilian research and education. For new users, an optional graphical interface (for Windows OS only) is available to ease the preparation of the input file. Program packages are available for the Windows, Macintosh, and Linux platforms.
Applications
SMARTS version 2.9.2 was selected to prepare various reference terrestrial spectra, which have been standardized by ASTM under the designations G173, G177 and G197, and by IEC under 60904-3. The latter standard represents the spectral distribution of global irradiance incident on a 37° tilted surface facing the sun at an air mass of 1.5. The integrated irradiance amounts to 1000 W/m2. This standard spectrum is mandated by IEC to evaluate the rating of photovoltaic (PV) solar cells in the absence of optical concentration. PV cells requiring concentration, referred to as CPV cells, are normally evaluated against the direct spectrum at air mass 1.5 described in ASTM G173. This spectrum integrates to 900 W/m2. The reasons behind the selection of the atmospheric and environmental conditions that eventually led to the development of ASTM G173 are described in a scientific paper. SMARTS version 2.9.2 is considered an adjunct standard to G173 by ASTM. Further details on the use of SMARTS for PV or CPV applications are available in other publications. In particular, the model is frequently used to evaluate real-world efficiencies of PV or CPV modules and evaluate mismatch factors.
The reference spectra in ASTM G197 have been developed to evaluate the optical characteristics of fenestration devices when mounted vertically (windows) or on structures inclined at 20° from the horizontal (skylights on roofs).
The reference spectrum in ASTM G177 is limited to the global irradiance in the ultraviolet (280–400 nm), and corresponds to "high-UV" conditions frequently encountered in arid and elevated sites, such as in the southwest USA. This spectrum is to be used as a reference for testing the degradation and durability of materials.
Features
The program uses various inputs that describe the atmospheric conditions for which the irradiance spectra are to be calculated. Ideal conditions, based on various possible model atmospheres and aerosol models, can be selected by the user. Alternatively, realistic conditions can also be specified as inputs, based for example on aerosol and water vapor data provided by a sunphotometer. In turn, these realistic conditions are necessary to compare the modeled spectra to those measured by a spectroradiometer. Reciprocally, since the model is well validated, this comparative method can be used as guidance to detect malfunction or miscalibration of instruments. The original spectral resolution of the model is 0.5 nm in the UV, 1 nm in the visible and near-infrared, and 5 nm above 1700 nm. To facilitate comparisons between the modeled spectra and actual measurements at a different spectral resolution, the SMARTS post-processor may be used to smooth the modeled spectra and adapt them to simulate the optical characteristics of a specific spectroradiometer. Additionally, the model provides the spectrally-integrated (or "broadband") irradiance values, which can then be compared to measurements from a pyrheliometer (for direct radiation) or pyranometer (for diffuse or global radiation) at any instant.
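As an illustration of such a broadband check, the following minimal Python sketch integrates a spectrum over wavelength with the trapezoidal rule. The file name smarts_output.txt and the two-column layout are assumptions made for the example, not SMARTS's documented output format:

```python
import numpy as np

# Assumed layout: two whitespace-separated columns with one header row,
# wavelength (nm) and spectral irradiance (W m^-2 nm^-1).
data = np.loadtxt("smarts_output.txt", skiprows=1)
wavelength_nm, spectral_irradiance = data[:, 0], data[:, 1]

# Broadband irradiance = integral of the spectrum over wavelength.
# Trapezoidal integration handles the model's non-uniform spectral
# steps (0.5 nm in the UV, 1 nm in the visible/NIR, 5 nm above 1700 nm).
broadband = np.trapz(spectral_irradiance, wavelength_nm)
print(f"Integrated irradiance: {broadband:.1f} W/m^2")
```

The resulting value is what one would compare against a pyrheliometer or pyranometer reading, as described above.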
Besides the atmospheric conditions, another important input is the solar geometry, which can be defined by the sun position (zenith angle and azimuth), the air mass, or by specifying the date, time, and location.
Optional calculations include the circumsolar irradiance, illuminance components, photosynthetically active radiation (PAR) components, and irradiance calculations in the UV, involving a variety of action spectra (such as that corresponding to the erythema).
The program outputs its results to text files, which can be further imported and processed into spreadsheets. A graphic interface, providing plots of the calculated spectra using National Instruments' LabVIEW software, is also available.
See also
Air mass (solar energy)
Atmosphere of Earth
Concentrated photovoltaics
Diffuse sky radiation
Electromagnetic radiation and health
Illuminance
Insolation
Irradiance
List of atmospheric radiative transfer codes
MODTRAN
Rayleigh scattering
Sunlight
Sunshine
References
External links
Official website : http://www.solarconsultingservices.com/smarts.php
Download website : http://www.nrel.gov/rredc/smarts/
Electromagnetic radiation
Atmospheric radiative transfer codes | Simple Model of the Atmospheric Radiative Transfer of Sunshine | [
"Physics"
] | 1,205 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
31,797,223 | https://en.wikipedia.org/wiki/Association%20for%20Renewable%20Energy%20and%20Clean%20Technology | The Association for Renewable Energy and Clean Technology, previously known as Renewable Energy Association (REA), is a renewable energy and clean technology trade association in the UK encompassing all of renewables industry in the United Kingdom. REA covers renewable power & flexibility, heat and cooling, circular bioresources and transport. The REA is a not-for-profit company.
History
The Renewable Power Association was established in 2001 as a not-for-profit trade association, representing British renewable energy producers and promoting the use of renewable energy in the UK. The company changed its name in November 2005 to Renewable Energy Association. Renewable Energy Association was merged with the Association for Organics Recycling (AfOR) in September 2012, the latter becoming the "Organics Recycling Group" under REA. The Company name was officially changed again to The Association for Renewable Energy and Clean Technology in October 2019.
Activities
Lobbying and Regulation
All-Party Parliamentary Group (APPG) on Energy Storage, for which the REA was the secretariat between its founding in July 2015 and its last registration in November 2019.
All-Party Parliamentary Group (APPG) on Electric Vehicles, for which the REA is the secretariat since November 2017.
The campaign for net-zero aviation in the UK, led by the Jet Zero Council, which is a partnership between industry and the UK Government with the aim of delivering zero-emission transatlantic flight within a generation. Dr Nina Skorupska CBE is a member of this council in her capacity as the CEO of the REA.
REA has reported that significant reductions in greenhouse gases can be obtained by the use of biofuels rather than fossil fuels.
Government Criticism
REA has been critical of the UK Government's lack of funding for the production of electricity from organic waste, a failure to define policies to meet the Europe-wide energy targets, reductions to the Feed-in tariff and the lack of a robust framework for renewables.
REA supported the environmental audit committee in calling for the government to cut VAT on repairs for electrical goods and green home improvements.
On 21 September 2021 REA published its report Energy Transition Readiness Index 2021 and warned that urgent action was needed to make the UK electricity grid more flexible to cater for more variable types of energy coming online. One of the points raised was that electricity storage facilities were treated as generators and charged for transmission of electricity both to and from storage over the grid, which was a disincentive for investment in the technology.
Safety
The REA provides guidance on health and safety at operational sites.
During the Covid-19 pandemic, the REA aimed to support its members and others by making the UK Government aware of the impact of the pandemic and lockdown restrictions on industry, networking between members to dispose of additional food and drinks waste caused by the closure of restaurants, and providing details to members of financial support available.
Standards
The British Standards Institution (BSI) Publicly Available Specifications (PAS) 100 & 110, concerning compost quality and anaerobic digestate quality.
REA, through its subsidiary, launched the UK's first Electric Vehicle Consumer Code (EVCC) in 2020, a voluntary scheme for domestic charge point installers.
Conventions
Green Gas Day, which is the UK’s largest green gas industry gathering, organised in collaboration with CNG Services Ltd, and hosted since 2012 at the National Motorcycle Museum in Birmingham, UK.
Biofuels
Biofuels are one area within REA's scope and some elements have proved controversial. In 2014, REA was criticised for encouraging reliance on large non-renewable energy company members, including the operators of Drax power station and Eggborough power station, and for lobbying to expand the use of food crops as biofuels, including palm oil and soya.
References
External links
Official website
Sources
Trade associations based in the United Kingdom
Organizations established in 2001
2001 establishments in the United Kingdom
Renewable energy in the United Kingdom
Biofuel in the United Kingdom
Renewable energy organizations
London Borough of Lambeth | Association for Renewable Energy and Clean Technology | [
"Engineering"
] | 801 | [
"Renewable energy organizations",
"Energy organizations"
] |
31,797,677 | https://en.wikipedia.org/wiki/Central%20limit%20theorem%20for%20directional%20statistics | In probability theory, the central limit theorem states conditions under which the average of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed.
Directional statistics is the subdiscipline of statistics that deals with directions (unit vectors in Rn), axes (lines through the origin in Rn) or rotations in Rn. The means and variances of directional quantities are all finite, so that the central limit theorem may be applied to the particular case of directional statistics.
This article will deal only with unit vectors in 2-dimensional space (R2) but the method described can be extended to the general case.
The central limit theorem
A sample of angles $\theta_i$ are measured, and since they are indefinite to within a factor of $2\pi$, the complex definite quantity $z_i = e^{i\theta_i}$ is used as the random variate. The probability distribution from which the sample is drawn may be characterized by its moments, which may be expressed in Cartesian and polar form:

$$m_n = E(z^n) = C_n + i S_n = R_n e^{i\theta_n}$$

It follows that:

$$C_n = E(\cos n\theta), \qquad S_n = E(\sin n\theta), \qquad R_n = |m_n|, \qquad \theta_n = \arg m_n$$

Sample moments for N trials are:

$$\overline{m_n} = \frac{1}{N}\sum_{i=1}^N z_i^n = \overline{C_n} + i\,\overline{S_n} = \overline{R_n}\, e^{i\overline{\theta_n}}$$

where

$$\overline{C_n} = \frac{1}{N}\sum_{i=1}^N \cos n\theta_i, \qquad \overline{S_n} = \frac{1}{N}\sum_{i=1}^N \sin n\theta_i$$

The vector $[\overline{C_1}, \overline{S_1}]$ may be used as a representation of the sample mean and may be taken as a 2-dimensional random variate. The bivariate central limit theorem states that the joint probability distribution for $\overline{C_1}$ and $\overline{S_1}$ in the limit of a large number of samples is given by:

$$P\left(\overline{C_1}, \overline{S_1}\right) \to \mathcal{N}\left(\left[C_1, S_1\right], \Sigma / N\right)$$

where $\mathcal{N}()$ is the bivariate normal distribution and $\Sigma$ is the covariance matrix for the circular distribution:

$$\Sigma = \begin{bmatrix} \sigma_{CC} & \sigma_{CS} \\ \sigma_{CS} & \sigma_{SS} \end{bmatrix}, \qquad
\sigma_{CC} = E(\cos^2\theta) - C_1^2, \quad
\sigma_{CS} = E(\cos\theta \sin\theta) - C_1 S_1, \quad
\sigma_{SS} = E(\sin^2\theta) - S_1^2$$
Note that the bivariate normal distribution is defined over the entire plane, while the mean is confined to be in the unit ball (on or inside the unit circle). This means that the integral of the limiting (bivariate normal) distribution over the unit ball will not be equal to unity, but rather approach unity as N approaches infinity.
It is desired to state the limiting bivariate distribution in terms of the moments of the distribution.
Covariance matrix in terms of moments
Using the multiple angle trigonometric identities

$$\cos^2\theta = \frac{1 + \cos 2\theta}{2}, \qquad \cos\theta \sin\theta = \frac{\sin 2\theta}{2}, \qquad \sin^2\theta = \frac{1 - \cos 2\theta}{2}$$

it follows that:

$$\sigma_{CC} = \frac{1 + C_2 - 2C_1^2}{2}, \qquad \sigma_{CS} = \frac{S_2 - 2 C_1 S_1}{2}, \qquad \sigma_{SS} = \frac{1 - C_2 - 2 S_1^2}{2}$$
The covariance matrix is now expressed in terms of the moments of the circular distribution.
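These moment formulas are straightforward to verify numerically. A minimal Python sketch (variable names ours) draws angles from a von Mises distribution and assembles the covariance matrix from the sample moments:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.vonmises(mu=0.5, kappa=2.0, size=10_000)  # sample angles

# Sample moments: m_n = mean(exp(i*n*theta)) = C_n + i*S_n
m1 = np.exp(1j * theta).mean()
m2 = np.exp(2j * theta).mean()
C1, S1 = m1.real, m1.imag
C2, S2 = m2.real, m2.imag

# Covariance matrix of the circular distribution in terms of moments
sigma_CC = 0.5 * (1 + C2 - 2 * C1**2)
sigma_CS = 0.5 * (S2 - 2 * C1 * S1)
sigma_SS = 0.5 * (1 - C2 - 2 * S1**2)
Sigma = np.array([[sigma_CC, sigma_CS], [sigma_CS, sigma_SS]])

# By the bivariate CLT, the sample mean [C1_bar, S1_bar] over N draws
# is approximately normal with covariance Sigma / N.
print(Sigma)
```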
The central limit theorem may also be expressed in terms of the polar components of the mean. If $P(\overline{C_1}, \overline{S_1})\, d\overline{C_1}\, d\overline{S_1}$ is the probability of finding the mean in the area element $d\overline{C_1}\, d\overline{S_1}$, then that probability may also be written $P(\overline{R_1}\cos\overline{\theta_1},\, \overline{R_1}\sin\overline{\theta_1})\, \overline{R_1}\, d\overline{R_1}\, d\overline{\theta_1}$.
References
Directional statistics
Asymptotic theory (statistics) | Central limit theorem for directional statistics | [
"Mathematics"
] | 458 | [
"Central limit theorem",
"Theorems in probability theory"
] |
27,992,940 | https://en.wikipedia.org/wiki/Pharmacognosy%20Reviews | Pharmacognosy Reviews is a peer-reviewed open-access medical journal published by Pharmacognosy Network Worldwide (Phcog.net). The journal publishes articles on the subject of pharmacognosy, natural products, and phytochemistry. It is indexed with Caspur, EBSCO, ProQuest, and Scopus.
Phcog.net appeared on Beall's list of predatory open-access journals from October 2012 through September 12, 2015.
References
External links
Phcog.net
Open access journals
Biannual journals
English-language journals
Pharmacology journals
Academic journals established in 2007
Medknow Publications academic journals
Pharmacognosy | Pharmacognosy Reviews | [
"Chemistry"
] | 144 | [
"Pharmacology",
"Pharmacognosy"
] |
27,994,073 | https://en.wikipedia.org/wiki/Plique-%C3%A0-jour | Plique-à-jour (French for "letting in daylight") is a vitreous enamelling technique where the enamel is applied in cells, similar to cloisonné, but with no backing in the final product, so light can shine through the transparent or translucent enamel. It is in effect a miniature version of stained-glass and is considered very challenging technically: high time consumption (up to 4 months per item), with a high failure rate. The technique is similar to that of cloisonné, but using a temporary backing that after firing is dissolved by acid or rubbed away. A different technique relies solely on surface tension, for smaller areas. In Japan the technique is known as shotai-jippo (shotai shippo), and is found from the 19th century on.
History
The technique was developed in the Byzantine Empire in 6th century AD. Some examples of Byzantine plique-à-jour survived in Georgian icons. The technique of plique-à-jour was adopted by Kievan Rus' (a strong trading partner of Constantinople) with other enamel techniques. Despite its complexity plique-à-jour tableware (especially "kovsh" bowls) was used by its aristocracy. Russian masters significantly developed plique-à-jour technique: in addition to cells cut in precious metal they worked with cells made of silver wire. Unfortunately the plique-à-jour technique of Kievan Rus' was lost after the crushing Mongol invasion in the 13th century. Some surviving examples are exhibited in the Historical Museum in Moscow.
Western Europe adopted the plique-à-jour technique (cells cut in metal) of Byzantium. The term smalta clara ("clear enamel"), probably meaning plique-à-jour appears in 1295 in the inventory of Pope Boniface VIII and the French term itself appears in inventories from the 14th century onwards. Benvenuto Cellini (1500–1571) gives a full description of the process in his Treatises of Benvenuto Cellini on Gold-smithing and Sculpture of 1568. Pre-19th century pieces are extremely rare because of their "extreme fragility ... which increases greatly with their size", and the difficulty of the technique. Survivals "are almost exclusively small ornamental pieces". The outstanding early examples that survive are "the decorative insets in the early fifteenth-century Mérode Cup (Burgundian cup) at the Victoria and Albert Museum in London, a Swiss early sixteenth-century plique-à-jour enamel plaque representing the family of the Virgin Mary in the Metropolitan Museum of Art in New York, and the eight pinnacle points over the front of the eleventh-century Saint Stephen's Crown in Hungary". The technique was lost in both Western and Eastern Europe.
The technique was revived in the late 19th century movement of revivalist jewellery, and became especially popular in Russia and Scandinavia. Works by Pavel Ovchinikov, Ivan Khlebnikov, and some masters working for Faberge are real masterpieces of plique-à-jour. Russian masters predominately worked with tableware. Norwegian jewellers included David Andersen and J. Tostrup in Oslo, and Martin Hummer in Bergen. Art Nouveau artists such as René Lalique, Lucien Gaillard and other French and German artists predominantly used plique-à-jour in small jewellery, though the Victoria & Albert Museum has a tray of 1901 by Eugène Feuillâtre (1870–1916).
Currently plique-à-jour is not often used, because it is challenging technically and mainly because of breaks in transferring skills from one generation of jewellers to the next. However, some luxury houses do produce limited numbers of products in the plique-à-jour technique, for example Tiffany in jewellery, and Bulushoff in jewellery and tableware. Works in the shotai shippo technique are also known from China and Iran.
Techniques
There are four basic ways of creating plique-à-jour:
1. Filigree plique-à-jour ("Russian plique-à-jour"): This is a building up process whereby a planned design is interpreted using gold or silver wires which are worked over a metal form (e.g. a bowl). Wires are twisted or engraved, i.e. have additional micro patterns. The wires are soldered together. Enamels are ground and applied to each "cell" created by the metal wirework. The piece is fired in a kiln. This process of placing and firing the enamels is repeated until all cells are completely filled. Usually it takes up to 15–20 repeats.
2. Pierced plique-à-jour ("Western plique-à-jour"): A sheet of gold or silver is pierced and sawed, cutting out a desired design. This leaves empty spaces or "cells" to fill with enamel powders (ground glass).
3. Shotai shippo ("Japanese plique-à-jour"): A layer of flux (clear enamel) is fired over a copper form. Wires are fired onto the flux (similar to cloisonné) and the resulting areas are enameled in the colors of choice. When all the enameling is finished, the copper base is etched away leaving a translucent shell of plique-à-jour.
4. Cloisonné on mica: Cells in precious metal are covered with fixed mica, which is removed by abrasives after enameling.
Process for cloisonné plique-à-jour on mica
Sample process
Notes
References
Campbell, Marian. An Introduction to Medieval Enamels, 1983, HMSO for V&A Museum.
Ostoia, Vera K., "A Late Mediaeval Plique-à-Jour Enamel", The Metropolitan Museum of Art Bulletin, New Series, Vol. 4, No. 3 (Nov. 1945), pp. 78–80.
External links
Artistic techniques
Vitreous enamel | Plique-à-jour | [
"Chemistry"
] | 1,246 | [
"Coatings",
"Vitreous enamel"
] |
27,996,198 | https://en.wikipedia.org/wiki/Computational%20Resource%20for%20Drug%20Discovery | Computational Resources for Drug Discovery (CRDD) is an important module of the in silico module of Open Source for Drug Discovery (OSDD). The CRDD web portal provides computer resources related to drug discovery, predicting inhibitors, and predicting the ADME-Tox properties of molecules on a single platform. It caters to researchers researching computer-aided drug design by providing computational resources, and hosting a discussion forum.
One of the major objectives of CRDD is to promote open source software in the field of cheminformatics and pharmacoinformatics.
Features
Under CRDD, numerous resources related to computer-aided drug design have been collected and compiled. These resources are organized and presented on CRDD so users may locate resources from a single source.
Target identification provides resources important for searching drug targets with information on genome annotation, proteome annotation, potential targets, and protein structure.
Virtual screening compiles resources important for virtual screening, such as QSAR techniques, docking, cheminformatics, and siRNA/miRNA.
Drug design provides resources important for designing drug inhibitors/molecules, such as lead optimization, pharmacoinformatics, ADMET, and clinical informatics.
Community contribution
CRDD developed a platform where the community may contribute to the process of drug discovery.
DrugPedia is a wiki created for collecting and compiling information related to computer-aided drug design. It is developed under the umbrella of the OSDD project and covers a wide range of subjects around drugs like bioinformatics, cheminformatics, clinical informatics etc.
Indipedia: A wiki for collecting and compiling drug information related to India. It is intended to provide comprehensive information about India, created for Indians by Indians. It is developed under the umbrella of the OSDD project.
The CRDD Forum was launched to discuss the challenges in developing computational resources for drug discovery.
Indigenous development: software and web services
Beside collecting and compiling resources, CRDD members develop new software and web services. All services developed are free for academic use. The following are a few major tools developed at CRDD.
Development of databases
HMRBase: A manually curated database of hormones and their receptors. It is a compilation of sequence data after extensive manual literature search and from publicly available databases. HMRBase can be searched on the basis of a variety of data types. Owing to the high impact of endocrine research in the biomedical sciences, HMRBase could become a leading data portal for researchers. The salient features of HMRBase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, and online data submission. This database is integrated with DrugPedia so the public can contribute.
BIAdb: A database for benzylisoquinoline alkaloids. The Benzylisoquinoline Alkaloid Database serves to gather information related to BIAs. Many BIAs show therapeutic properties and can be considered potent drug candidates. This database will also serve researchers working in the field of synthetic biology, as developing medicinally important alkaloids using synthetic processes is one of the important challenges. This database is also integrated with DrugPedia so the public can contribute.
Antigen DB: This database contain more than 500 antigens collected from literature and other immunological resources. These antigens come from 44 pathogenic species. In Antigen DB, a database entry contains information regarding the sequence, structure, origin, etc. of an antigen with additional information such as B and T-cell epitopes, MHC binding, function, gene-expression and post translational modifications, when available. AntigenDB also provides links to major internal and external databases.
PolysacDB: A database dedicated to providing comprehensive information about antigenic polysaccharides of microbial origin (bacterial and fungal), antibodies against them, proposed epitopes, structural details, proposed functions, assay systems, cross-reactivity-related information and more. It is a manually curated database where most of the data has been collected from the PubMed and PubMed Central literature databases.
TumorHoPe: TumorHoPe is a manually curated comprehensive database of experimentally characterized tumor homing peptides. These peptides recognize tumor tissues and tumor associated micro environments, including tumor metastasis.
ccPDB: A database designed to service researchers working in the field of function or structure annotation of proteins. This database of datasets is based on Protein Data Bank (PDB).
OSDDchem: This chemical database is an open repository of information on synthesized, semi-synthesized, natural, and virtually designed molecules from the OSDD community.
CancerDR: A database of 148 anticancer drugs and their effectiveness against around 1000 cancer cell lines. CancerDR maintains comprehensive information about these drugs, their target gene/protein, and cell lines.
Software developed
MycoTB: An extended flexible system concept for building standalone Windows software. The software allows users to build their own flexible systems on their personal computers to manage and annotate the whole proteome of Mycobacterium tuberculosis.
Resources created
CRAG: Computational resources for assembling genomes (CRAG) was created to assist users in assembling of genomes from short read sequencing (SRS). CRAG pursues the following major objectives:
Collection and compilation of computation resources
Brief description of genome assemblers
Maintaining SRS and related data
Service to community to assemble their genomes
CRIP: Computational resources for predicting protein–macromolecular interactions (CRIP) was developed to provide resources related to interaction. This site maintains a large number of resources on the interactions of proteins, including protein–protein, protein–DNA, protein–ligand, and protein–RNA.
BioTherapy: Bioinformatics for Therapeutic Peptides and Proteins (BioTherapi) was developed for researchers working in the field of protein/peptide therapeutics. The platform was created to provide a single platform for this area of research. This site includes relevant information about the use of peptides/proteins in drugs and synthesis of new peptides. It also covers problems in their formulation, synthesis and delivery processes.
HIVbio: HIV Bioinformatics (HIVbio) site contains various types of information on Human Immunodeficiency Virus (HIV) life cycle and Infection.
GDPbio: GDPbio (Genome based prediction of Diseases and Personal medicines using Bioinformatics) is a project focused on providing various resources related to genome analysis, particularly for the prediction of disease susceptibility of individuals and personalized medicine development, with the aim of public health improvement.
AminoFAST: Functional Annotation Tools for Amino Acids (AminoFAST) is a server designed to serve the bioinformatics community. Its aim is to develop as many tools as possible to understand the function of amino acids in proteins based on protein structure in PDB. The broad knowledge of protein function would help in the identification of novel drug targets.
Web services for cheminformatics
CRDD developed an open source platform which allows users to predict inhibitors against novel M. tuberculosis drug targets and other important properties of drug molecules like ADMET. The following is a list of a few such servers.
MetaPred: A webserver for the prediction of cytochrome P450 isoforms responsible for metabolizing a drug molecule. The MetaPred server predicts metabolizing CYP isoforms of a drug molecule/substrate based on SVM models developed using CDK descriptors. This server is intended to help researchers working in the field of drug discovery. The effort also demonstrates that it is possible to develop free web servers in the field of cheminformatics. This may encourage other researchers to develop web servers for public use, leading to decreased cost of discovering new drug molecules.
ToxiPred: A server for prediction of aqueous toxicity of small chemical molecules in T. pyriformis.
KetoDrug: A user friendly web server for binding affinity prediction of ketoxazole derivatives and small chemical molecules against Fatty Acid Amide Hydrolase (FAAH).
KiDoQ: A web server to serve researchers working in the field of designing inhibitors against dihydrodipicolinate synthase (DHDPS), a potential drug target enzyme of a unique bacterial DAP/Lysine pathway.
GDoQ: GDoQ (Prediction of GLMU inhibitors using QSAR and AutoDock) is an open source platform for predicting inhibitors against Mycobacterium tuberculosis (M.Tb) drug target N-acetylglucosamine-1-phosphate uridyltransferase (GLMU) protein. This is a potential drug target involved in bacterial cell wall synthesis. This server uses molecular docking and QSAR strategies to predict inhibitory activity value (IC50) of chemical compounds for GLMU protein.
ROCR: The ROCR is an R package for evaluating and visualizing classifier performance. It is a flexible tool for creating ROC graphs, sensitivity/specificity curves, area under curve and precision/recall curve. The parametrization can be visualized by coloring the curve according to cutoff.
WebCDK: A web interface for the CDK library which is used for predicting descriptors of chemicals.
Pharmacokinetics: This data analysis determines the relationship between the dosing regimen and the body's exposure to the drug as measured by the drug's nonlinear concentration time curve. It includes a function to calculate area under this curve. It also includes functions for half-life estimation for a biexponential model, and a two phase linear regression.
Prediction and analysis of drug targets
RNApred: Prediction of RNA binding proteins from its amino acid sequence.
ProPrint: Prediction of interaction between proteins from their amino acid sequence.
DomPrint: A domain-domain interaction (DDI) prediction server.
MycoPrint: A web interface for exploration of the interactome of Mycobacterium tuberculosis H37Rv (Mtb) predicted by the "Domain Interaction Mapping" (DIM) method.
ATPint: A server for predicting ATP interacting residues in proteins.
FADpred: Identification of FAD interacting residues in proteins.
GTPbinder: Prediction of protein GTP interacting residues.
NADbinder: Prediction of NAD binding residues in proteins.
PreMier: Software for predicting mannose interacting residues in proteins.
DMAP: Designing of mutants of antibacterial peptides.
icaars: Prediction and classification of aminoacyl tRNA synthetases using PROSITE domains.
CBtope: Prediction of conformational B-cell epitope in a sequence from its amino acid sequence.
DesiRM: Designing of Complementary and Mismatch siRNAs for silencing a gene.
GenomeABC: A server for benchmarking of genome assemblers.
References
Further reading
Bioinformatics software | Computational Resource for Drug Discovery | [
"Biology"
] | 2,246 | [
"Bioinformatics",
"Bioinformatics software"
] |
27,996,428 | https://en.wikipedia.org/wiki/Conjectural%20variation | In oligopoly theory, conjectural variation is the belief that one firm has an idea about the way its competitors may react if it varies its output or price. The firm forms a conjecture about the variation in the other firm's output that will accompany any change in its own output. For example, in the classic Cournot model of oligopoly, it is assumed that each firm treats the output of the other firms as given when it chooses its output. This is sometimes called the "Nash conjecture," as it underlies the standard Nash equilibrium concept. However, alternative assumptions can be made. Suppose you have two firms producing the same good, so that the industry price is determined by the combined output of the two firms (think of the water duopoly in Cournot's original 1838 account). Now suppose that each firm has what is called the "Bertrand Conjecture" of −1. This means that if firm A increases its output, it conjectures that firm B will reduce its output to exactly offset firm A's increase, so that total output and hence price remains unchanged. With the Bertrand Conjecture, the firms act as if they believe that the market price is unaffected by their own output, because each firm believes that the other firm will adjust its output so that total output will be constant. At the other extreme is the Joint-Profit maximizing conjecture of +1. In this case, each firm believes that the other will imitate exactly any change in output it makes, which leads (with constant marginal cost) to the firms behaving like a single monopoly supplier.
History
The notion of conjectures has maintained a long history in the Industrial Organization theory ever since the introduction of Conjectural Variations Equilibria by Arthur Bowley in 1924 and Ragnar Frisch (1933) (a useful summary of the history is provided by Giocoli). Not only are conjectural variations (henceforth CV) models able to capture a range of behavioral outcomes – from competitive to cooperative, but also they have one parameter which has a simple economic interpretation. CV models have also been found quite useful in the empirical analysis of firm behavior in the sense that they provide a more general description of firms behavior than the standard Nash equilibrium.
As Stephen Martin has argued:There is every reason to believe that oligopolists in different markets interact in different ways, and it is useful to have models that can capture a wide range of such interactions. Conjectural oligopoly models, in any event, have been more useful than game-theoretic oligopoly models in guiding the specification of empirical research in industrial economics.
Consistent conjectures
The CVs of firms determine the slopes of their reaction functions. For example, in the standard Cournot model, the conjecture is of a zero reaction, yet the actual slope of the Cournot reaction function is negative. What happens if we require the actual slope of the reaction function to be equal to the conjecture? Some economists argued that we could pin down the conjectures by a consistency condition, most notably Timothy Bresnahan in 1981. Bresnahan's consistency was a local condition that required the actual slope of the reaction function to be equal to the conjecture at the equilibrium outputs. With linear industry demand and quadratic costs, this gave rise to the result that the consistent conjecture depended on the slope of the marginal cost function: for example, with quadratic costs of the form (see below) $\mathrm{cost} = ax^2$, the consistent conjecture is unique and determined by a. If a = 0 then the unique consistent conjecture is the Bertrand conjecture $\phi = -1$, and as a gets bigger, the consistent conjecture increases (becomes less negative) but is always less than zero for finite a.
The concept of consistent conjectures was criticized by several leading economists. Essentially, the concept of consistent conjectures was seen as not compatible with the standard models of rationality employed in Game theory.
However, in the 1990s Evolutionary game theory became fashionable in economics. It was realized that this approach could provide a foundation for the evolution of consistent conjectures. Huw Dixon and Ernesto Somma showed that we could treat the conjecture of a firm as a meme (the cultural equivalent of a gene). They showed that in the standard Cournot model, the consistent conjecture was the Evolutionarily stable strategy or ESS. As the authors argued, "Beliefs determine Behavior. Behavior determines payoff. From an evolutionary perspective, those types of behavior that lead to higher payoffs become more common." In the long run, firms with consistent conjectures would tend to earn bigger profits and come to predominate.
Mathematical example 1: Cournot model with CVs
Let there be two firms, X and Y, with outputs x and y. The market price P is given by the linear demand curve

$$P = 1 - x - y$$

so that the total revenue of firm X is then

$$T_X = xP = x(1 - x - y)$$

For simplicity, let us follow Cournot's 1838 model and assume that there are no production costs, so that profits equal revenue: $\Pi_X = T_X$.
With conjectural variations, the first order condition for the firm becomes:

$$\frac{\partial \Pi_X}{\partial x} = 1 - 2x - y - x\phi = 0$$

where $\phi = \frac{\partial y}{\partial x}$ is the firm's conjecture about how the other firm will respond, the conjectural variation or CV term. This first order optimization condition defines the reaction function for the firm, which states, for a given CV, the optimal choice of output given the other firm's output:

$$x = R(y) = \frac{1 - y}{2 + \phi}$$
Note that the Cournot-Nash conjecture is $\phi = 0$, in which case we have the standard Cournot reaction function. The CV term serves to shift the reaction function and, most importantly later, its slope. To solve for a symmetric equilibrium, where both firms have the same CV, we simply note that the reaction function will pass through the x = y line so that:

$$1 - 2x - x - x\phi = 0$$

so that in symmetric equilibrium $x = y = \frac{1}{3 + \phi}$ and the equilibrium price is $P = \frac{1 + \phi}{3 + \phi}$.

If we have the Cournot-Nash conjecture, $\phi = 0$, then we have the standard Cournot equilibrium with $P = \frac{1}{3}$. However, if we have the Bertrand conjecture $\phi = -1$, then we obtain the perfectly competitive outcome with price equal to marginal cost (which is zero here). If we assume the joint-profit maximizing conjecture $\phi = +1$, then both firms produce half of the monopoly output and the price is the monopoly price $P = \frac{1}{2}$.
Hence the CV term is a simple behavioral parameter which enables us to represent a whole range of possible market outcomes from the competitive to the monopoly outcome, including the standard Cournot model.
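The symmetric equilibrium above is easy to tabulate. A minimal Python sketch (ours, for illustration only) evaluates output and price at the three benchmark conjectures:

```python
def symmetric_equilibrium(phi):
    """Symmetric CV equilibrium for P = 1 - x - y with zero costs."""
    x = 1.0 / (3.0 + phi)          # per-firm output
    price = 1.0 - 2.0 * x          # = (1 + phi) / (3 + phi)
    return x, price

for name, phi in [("Bertrand", -1.0), ("Cournot-Nash", 0.0),
                  ("Joint-profit", 1.0)]:
    x, p = symmetric_equilibrium(phi)
    print(f"{name:13s} phi={phi:+.0f}: x=y={x:.3f}, P={p:.3f}")
# Bertrand      phi=-1: x=y=0.500, P=0.000   (competitive)
# Cournot-Nash  phi=+0: x=y=0.333, P=0.333
# Joint-profit  phi=+1: x=y=0.250, P=0.500   (monopoly price)
```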
Mathematical example 2: Consistency
Take the previous example. Now let the cost of production take the form $\mathrm{cost} = ax^2$. In this case, the profit function (revenue minus cost) becomes (for firm X and analogously for firm Y):

$$\Pi_X = x(1 - x - y) - ax^2$$

The first-order condition then becomes:

$$\frac{\partial \Pi_X}{\partial x} = 1 - 2x - y - x\phi - 2ax = 0$$

which defines the reaction function for firm X as:

$$x = R(y) = \frac{1 - y}{2 + 2a + \phi}$$

This has slope (in output space)

$$\frac{dx}{dy} = \frac{-1}{2 + 2a + \phi}$$

and analogously for firm Y which (we assume) has the same conjecture. To see what consistency means, consider the simple Cournot conjecture $\phi = 0$ with constant marginal cost a = 0. In this case the slope of the reaction functions is −1/2, which is "inconsistent" with the conjecture. The Bresnahan consistency condition is that the conjectured slope equals the actual slope, which means that

$$\phi = \frac{-1}{2 + 2a + \phi}$$

This is a quadratic equation, $\phi^2 + 2(1 + a)\phi + 1 = 0$, which gives us the unique consistent conjecture

$$\phi^* = -(1 + a) + \sqrt{a(2 + a)}$$

This is the positive root of the quadratic: the negative solution would be a conjecture more negative than −1, which would violate the second order conditions. As we can see from this example, when a = 0 (marginal cost is horizontal), the Bertrand conjecture is consistent: $\phi^* = -1$. As the steepness of marginal cost increases (a goes up), the consistent conjecture increases. Note that the consistent conjecture will always be less than 0 for any finite a.
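A brief numerical check of the consistent conjecture (a sketch of ours, not from the cited literature):

```python
import math

def consistent_conjecture(a):
    """Positive root of phi^2 + 2(1+a)phi + 1 = 0."""
    return -(1.0 + a) + math.sqrt(a * (a + 2.0))

for a in [0.0, 0.5, 1.0, 10.0]:
    phi = consistent_conjecture(a)
    # Consistency check: phi must equal the actual reaction-function
    # slope -1 / (2 + 2a + phi).
    slope = -1.0 / (2.0 + 2.0 * a + phi)
    print(f"a={a:5.1f}: phi*={phi:+.4f}, slope={slope:+.4f}")
# a=0 reproduces the Bertrand conjecture phi* = -1; as a grows,
# phi* rises toward (but never reaches) 0.
```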
Notes
External links
Conjectural variations and competition policy Office of Fair Trading Report, 2011.
Series on Mathematical Economics & Game Theory, Volume 2: Theory Of Conjectural Variations by Charles Figuières, Alain Jean-Marie, Nicolas Quérou, Mabel Tidball.
Game theory
Competition (economics)
Oligopoly
Market structure | Conjectural variation | [
"Mathematics"
] | 1,622 | [
"Game theory"
] |
27,996,623 | https://en.wikipedia.org/wiki/Transportation%20authority | A transportation authority or transportation agency is a government agency which regulates, manages, or administers transportation-related matters, such as roads, transportation infrastructure, traffic management, or traffic code.
Transportation authorities go by a number of names, such as "department of transportation" or "ministry of transport", among others. They often manage other government agencies that oversee specific fields of transportation, such as civil aviation authorities, highway authorities, logistics regulators, rail transport regulators, maritime transport regulators, and transportation safety boards.
Some transportation authorities, such as Greater Vancouver's Translink, have the power to impose excise taxes (fuel taxes) on gasoline, diesel fuel, and other motor fuels.
In North America, the term "transportation authority" is often used to refer to public transport agencies operating buses and rapid transit in metropolitan areas, otherwise referred to as transit districts or passenger transport executives. Examples of such public transport authorities (or public transit authorities) include the New York Metropolitan Transportation Authority, the Los Angeles County Metropolitan Transportation Authority, and the Toronto Area Transportation Operating Authority.
References
Public transport
Traffic management | Transportation authority | [
"Physics",
"Engineering"
] | 219 | [
"Systems engineering",
"Traffic management",
"Transport stubs",
"Physical systems",
"Transport"
] |
27,997,823 | https://en.wikipedia.org/wiki/Refined%20Bitumen%20Association | The Refined Bitumen Association is the trade association for UK bitumen companies.
History
It was formed in 1968.
Asphalt Industry Alliance
In 2000, it formed the Asphalt Industry Alliance with the Mineral Products Association, based in London. Asphalt is a mixture of bitumen and quarried mineral products, represented by both trade organisations.
Structure
Its five main members cover 95% of the UK market:
ExxonMobil Bitumen, based in Leatherhead, Surrey
Nynas UK (a Swedish company), based in Eastham, Merseyside, Wirral, north of Ellesmere Port
Petroplus Bitumen (formerly BP Bitumen before Petroplus bought the Coryton Refinery in 2007), based in Llandarcy, Neath Port Talbot
Shell Bitumen, based in Wythenshawe
Total Bitumen UK Ltd, based in Ashton-on-Ribble in Preston, Lancashire
Function
It represents the UK bitumen industry at a national level. The UK produces around 1.5 million tonnes of bitumen a year. 90% of UK bitumen is used on roads.
References
External links
RBA
Eurobitume - based in Brussels
Asphalt Industry Alliance
Asphalt
Trade associations based in the United Kingdom
Organisations based in Harrogate
Organizations established in 1968
Oil and gas companies of the United Kingdom
Road construction | Refined Bitumen Association | [
"Physics",
"Chemistry",
"Engineering"
] | 263 | [
"Unsolved problems in physics",
"Construction",
"Road construction",
"Chemical mixtures",
"Asphalt",
"Amorphous solids"
] |
27,997,978 | https://en.wikipedia.org/wiki/Random-access%20Turing%20machine | Random-access Turing machines (RATMs) represent a pivotal computational model in theoretical computer science, especially critical in the study of tractability within big data computing scenarios. Diverging from the sequential memory access limitations of conventional Turing machines, RATMs introduce the capability for random access to memory positions. This advancement is not merely a technical enhancement but a fundamental shift in computational paradigms, aligning more closely with the memory access patterns of modern computing systems. The inherent ability of RATMs to access any memory cell in a constant amount of time significantly bolsters computational efficiency, particularly for problem sets where data size and access speed are critical factors. Furthermore, the conceptual evolution of RATMs from traditional Turing machines marks a significant leap in the understanding of computational processes, providing a more realistic framework for analyzing algorithms that handle the complexities of large-scale data. This transition from a sequential to a random-access paradigm not only mirrors the advancements in real-world computing systems but also underscores the growing relevance of RATMs in addressing the challenges posed by big data applications.
Definition
The random-access Turing machine is characterized chiefly by its capacity for direct memory access. This attribute, a stark deviation from the sequential memory access inherent to standard Turing machines, allows RATMs to access any memory cell in a consistent and time-efficient manner. Notably, this characteristic of RATMs echoes the operation of contemporary computing systems featuring random-access memory (RAM). The formal model of RATMs introduces a novel aspect where the execution time of an instruction is contingent upon the size of the numbers involved, effectively bridging the gap between abstract computation models and real-world computational requirements.
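To make the constant-time memory access concrete, the following toy interpreter (a deliberately simplified sketch; the instruction set is invented for illustration and is not the formal RATM definition, and it charges one step per instruction where the formal model lets instruction time depend on operand size) runs register-machine programs in which a load or store reaches any address in a single step, whereas a Turing machine would pay time proportional to the head's travel distance:

```python
def run_ram(program, memory):
    """Execute a toy random-access program.

    Each instruction is an (opcode, argument) tuple; 'load'/'store'
    touch an arbitrary address in one step -- the defining feature
    of the random-access model.
    """
    acc, pc, steps = 0, 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "load":      acc = memory.get(arg, 0)   # acc <- M[arg]
        elif op == "store":   memory[arg] = acc          # M[arg] <- acc
        elif op == "add":     acc += memory.get(arg, 0)  # acc += M[arg]
        elif op == "jgz":     # jump to instruction arg if acc > 0
            if acc > 0:
                pc, steps = arg, steps + 1
                continue
        pc += 1
        steps += 1
    return acc, steps

# M[0] + M[1] -> M[2], with operands at arbitrary addresses.
mem = {0: 40, 1: 2}
program = [("load", 0), ("add", 1), ("store", 2)]
print(run_ram(program, mem), mem)   # (42, 3) {0: 40, 1: 2, 2: 42}
```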
Additionally, the complexity and computational capacity of RATMs provide a robust framework for understanding the intricate mechanics of computational theory. This model has been expanded to include both discrete and real-valued arithmetic operations, along with a finite precision test for real number comparisons. These extensions, including the universal random-access Turing machine (URATM), are testament to the ongoing exploration of universal computation within the landscape of theoretical computer science.
Operational efficiency
The operational efficiency of RATMs is a key aspect of their computational prowess, showcasing their ability to compute functions with a time complexity that directly corresponds to the size of the data being manipulated. This efficiency is underlined by the model's unique approach to execution time, where the time required for an instruction is determined by the size of the numbers involved. This feature is a significant advancement over conventional models, as it aligns more closely with the practicalities of modern computing, where data size and processing speed are critical.
The comparison of RATMs with other computational models reveals that functions computable on a RAM within a given time bound can be translated to a Turing machine computation with at most polynomial overhead in time, and vice versa. This translation is indicative of the RATMs' robustness and versatility in handling a variety of computational tasks, particularly in large data scenarios. The random access capability of RATMs enhances data retrieval and manipulation processes, making them highly efficient for tasks where large datasets are involved. This efficiency is not just theoretical but has practical implications in the way algorithms are designed and executed in real-world computing environments.
Variants and extensions
The theoretical landscape of RATMs has been significantly broadened by the advent of various variants and extensions. One of the most notable extensions is the universal random-access Turing machine (URATM), which has been instrumental in validating the existence and efficiency of universal computation within the random-access framework. This variant not only bolsters the computational capacity of RATMs but also serves as a cornerstone for other theoretical investigations into computational complexity and universality.
Another groundbreaking advancement in this domain is the conceptualization of quantum random-access Turing machines (QRATMs). These machines integrate the principles of quantum computing with the RATM framework, leading to a model that is more aligned with the architecture of modern quantum computers. QRATMs leverage the peculiarities of quantum mechanics, such as superposition and entanglement, to achieve computational capabilities that surpass those of classical RATMs. This quantum extension opens up new avenues in complexity analysis, offering more understanding of computational problems in a quantum context. Specifically, QRATMs have shed light on the relationships between quantum computational models and their classical counterparts, providing insights into the bounds and capabilities of quantum computational efficiency.
Applications
RATMs have found substantial application in the realm of big data computing, where their unique operational features facilitate exploration of both tractability and complexity. The ability of RATMs to execute operations in a time-bounded manner and provide random memory access makes them suitable for handling the challenges inherent in big data scenarios.
A significant advancement in the application of RATMs lies in their role in redefining the concept of tractability in the context of big data. Traditional views on computational tractability, typically defined within the realm of polynomial time, are often inadequate for addressing the massive scale of big data. RATMs, by contrast, enable a more nuanced approach, adopting sublinear time as a new standard for identifying tractable problems in big data computing.
Moreover, the application of RATMs extends beyond just theoretical exploration; they provide a practical framework for developing algorithms and computational strategies tailored to the unique demands of big data problems. As big data continues to grow in both size and importance, the insights gained from studying RATMs have opened new avenues for research and practical applications in this field.
Computational complexity and time–space tradeoffs
The exploration of RATMs extends into the intricate domain of computational complexity and time–space tradeoffs, particularly in the context of nondeterministic computations.
A key focus in this realm is the analysis of the inherent tradeoffs between time and space when solving computationally intensive problems. For instance, it is observed that certain computational problems, such as satisfiability, cannot be solved on general-purpose random-access Turing machines within specific time and space constraints. This indicates that there is a distinct tradeoff between the time taken to compute a function and the memory space required to perform the computation effectively. Specifically, results have shown that satisfiability cannot be resolved on these machines within certain simultaneous bounds on time and space.
Additionally, the research explores how time–space tradeoffs affect nondeterministic linear time computations with RATMs, showing that certain problems solvable in nondeterministic linear time under specific space limits are infeasible in deterministic time and space constraints. This finding emphasizes the distinct computational behaviors of deterministic and nondeterministic models in RATMs, highlighting the need to consider time and space efficiency in algorithm design and computational theory.
Technical and logical foundations
The study of RATMs has been advanced through the exploration of deterministic polylogarithmic time and space and two-sorted logic, a concept explored in depth by recent research. This approach focuses on analyzing the efficiency and logical structure of RATMs, specifically how they can be optimized to perform computations in polynomial time with respect to the size of input data.
Deterministic polylogarithmic time and space
Deterministic polylogarithmic time and space in RATMs refer to a computational efficiency where the time and space required for computation grow at a polylogarithmic rate with the size of the input data. This concept is pivotal in understanding how RATMs can be optimized for handling large data sets efficiently. It hypothesizes that certain computations, which previously seemed infeasible in polynomial time, can be executed effectively within this framework.
Two-sorted logic
The use of two-sorted logic in the context of RATMs provides an approach to describing and analyzing computational processes. This framework involves distinguishing between two types of entities: numerical values and positions in data structures. By separating these entities, this approach allows for a more refined analysis of computational steps and the relationships between different parts of a data structure, such as arrays or lists. This methodology provides insights into the logical structure of algorithms, enabling a more precise understanding of their behavior. The application of two-sorted logic in RATMs significantly contributes to the field of descriptive complexity, enhancing our understanding of the nuances of computational efficiency and logic.
References
Complexity classes | Random-access Turing machine | [
"Mathematics",
"Engineering"
] | 1,656 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic",
"Formal languages",
"Software engineering",
"Computability theory",
"Formal methods"
] |
27,999,806 | https://en.wikipedia.org/wiki/PLL%20multibit | A PLL multibit or multibit PLL is a phase-locked loop (PLL) which achieves improved performance compared to a unibit PLL by using more bits. Unibit PLLs use only the most significant bit (MSB) of each counter's output bus to measure the phase, while multibit PLLs use more bits. PLLs are an essential component in telecommunications.
Multibit PLLs achieve improved efficiency and performance: better utilization of the frequency spectrum, to serve more users at a higher quality of service (QoS), reduced RF transmit power, and reduced power consumption in cellular phones and other wireless devices.
Concepts
A phase-locked loop is an electronic component or system comprising a closed loop for controlling the phase of an oscillator while comparing it with the phase of an input or reference signal. An indirect frequency synthesizer uses a PLL. In an all-digital PLL, a voltage-controlled oscillator (VCO) is controlled using a digital, rather than analog, control signal. The phase detector gives a signal proportional to the phase difference between two signals; in a PLL, one signal is the reference, and the other is the output of the controlled oscillator (or a divider driven by the oscillator).
In a unibit phase-locked loop, the phase is measured using only one bit of the reference and output counters, the most significant bit (MSB). In a multibit phase-locked loop, the phase is measured using more than one bit of the reference and output counters, usually including the most significant bit.
Unibit PLL
In unibit PLLs, the output frequency is defined by the input frequency and the modulo count of the two counters. In each counter, only the most significant bit (MSB) is used. The other output lines of the counters are ignored; this is wasted information.
PLL structure and performance
A PLL includes a phase detector, filter and oscillator connected in a closed loop, so the oscillator frequency follows (equals) the input frequency. Although the average output frequency equals the input frequency, the oscillator's frequency fluctuates about that average value. The closed loop operates to correct such frequency deviations; a higher-performance PLL reduces these fluctuations to lower values, but they can never be eliminated entirely. See Control theory. Phase noise, spurious emission, and jitter are results of the above phenomena.
PLL synthesizer characteristics
PLL frequency synthesizers are widely used in modern telecommunications. For example, a cellular phone may include three to six PLLs.
The phase noise may interfere with other subscribers, reducing their quality of service. The interference is mutual. If the noise is reduced, faster communications are possible: the symbol rate can be increased using more complex modulation schemes - that is, more bits are transmitted per symbol.
Frequency settling time is the time it takes the PLL to hop to another frequency. Frequency hopping is used in GSM, and still more in modern systems.
In CDMA, frequency hopping achieves better performance than phase coding.
Fine frequency resolution is the capability of a PLL to generate closely spaced frequencies. For example, a cellular network may require a mobile phone to set its frequency at any of a plurality of values, spaced 30 kHz or 10 kHz.
The performance envelope of a PLL defines the interrelation between the above essential performance criteria; for example, improving the frequency resolution will result in a slower PLL and higher phase noise.
The multibit PLL expands the performance envelope: it enables a faster settling time together with fine frequency resolution and lower phase noise.
Effects of unibit
As one progresses from the MSB toward the least significant bit (LSB), the frequency increases. For a binary counter, each next bit is at twice the frequency of the previous one. For modulo counters, the relationship is more complicated.
Only the MSBs of the two counters are at the same frequency. The other bits in one counter have different frequencies from those in the other counter.
All the bits at the output of one counter, together, represent a digital bus. Thus, in a PLL frequency synthesizer there are two buses, one for the reference counter, the other for the output (or VCO) counter. In a unibit PLL, of the two digital buses, only one bit (line) of each is used. All the rest of the information is lost.
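The information loss can be illustrated with a small Python sketch (ours; the modulus and counts are invented): reading only the MSB of a modulo-N counter yields one bit of phase per period, while reading the full bus resolves all N phase positions.

```python
# Phase information available from a modulo-16 counter (illustrative).

N = 16  # counter modulus (assumption)

def unibit_phase(count):
    """Unibit readout: only the MSB, i.e. one bit of phase information."""
    return 1 if count >= N // 2 else 0

def multibit_phase(count):
    """Multibit readout: the whole bus, N times finer phase resolution."""
    return count

for count in (3, 7, 11, 15):
    print(count, unibit_phase(count), multibit_phase(count))
# The unibit column only distinguishes "first half" from "second half"
# of the period; the multibit column resolves all 16 phase positions.
```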
Complexity of PLL design
PLL design is an interdisciplinary task, difficult even for experts in PLLs - and this is for the unibit PLL, which is simpler than the multibit PLL. The design should take into account:
Control theory, closed-loop systems
Radio frequency (RF) design - the oscillator and other high-frequency components
Analog circuits - loop filter
Digital circuits - counters, phase measurement
RFI/EMI, shielding, grounding
Statistics of noise and phase noise in electronic components and circuits.
Multibit PLL
Principle of operation
A multibit PLL uses more of the bits in the two counters. This poses a difficult problem: comparing signals at different frequencies, carried on two digital buses that count to different final values.
Improved performance is possible by using the faster bits of the counters, taking into account the additional available information.
The operation of the PLL is further disrupted by overflow in the counters. This effect is only relevant in multibit PLLs; in a unibit PLL, only the one-bit MSB signal is used, so no overflow is possible.
Implementation
The additional degrees of freedom in multibit PLLs allow each PLL to be adapted to specific requirements. This can be effectively implemented with programmable logic devices (PLD), for example those manufactured by Altera Corp. Altera provides both digital components and advanced design tools for using and programming the components.
Early multibit PLLs used a microprocessor, a microcontroller or DSP to close the loop in a smart implementation.
Benefits
A multibit PLL offers fine frequency resolution and fast frequency hopping, together with lower phase noise and lower power consumption.
It thus enhances the overall performance envelope of the PLL.
The loop bandwidth can be optimized for phase noise performance and/or frequency settling speed; it depends less on the frequency resolution.
Improving the PLL performance can make better use of the frequency spectrum and reduce transmit power. And indeed, PLL performance is being constantly improved.
References
Communication circuits
Radio electronics
Control theory
Integrated circuits
Digital signal processing | PLL multibit | [
"Mathematics",
"Technology",
"Engineering"
] | 1,350 | [
"Radio electronics",
"Telecommunications engineering",
"Computer engineering",
"Applied mathematics",
"Control theory",
"Communication circuits",
"Integrated circuits",
"Dynamical systems"
] |
28,000,935 | https://en.wikipedia.org/wiki/Intubation | Intubation (sometimes entubation) is a medical procedure involving the insertion of a tube into the body. Patients are generally anesthetized beforehand. Examples include tracheal intubation, and the balloon tamponade with a Sengstaken–Blakemore tube (a tube into the gastrointestinal tract).
See also
Catheterization
Nasogastric intubation
Tracheal intubation
ROTIGS
References
Airway management
Emergency medical procedures
Medical equipment
Routes of administration | Intubation | [
"Chemistry",
"Biology"
] | 104 | [
"Pharmacology",
"Medical technology",
"Medical equipment",
"Routes of administration"
] |
4,000,598 | https://en.wikipedia.org/wiki/Polycyclic%20group | In mathematics, a polycyclic group is a solvable group that satisfies the maximal condition on subgroups (that is, every subgroup is finitely generated). Polycyclic groups are finitely presented, which makes them interesting from a computational point of view.
Terminology
Equivalently, a group G is polycyclic if and only if it admits a subnormal series with cyclic factors, that is, a finite chain of subgroups, say G0, ..., Gn, such that the following hold (a worked example appears after the list):
Gn coincides with G
G0 is the trivial subgroup
Gi is a normal subgroup of Gi+1 (for every i between 0 and n - 1)
and the quotient group Gi+1 / Gi is a cyclic group (for every i between 0 and n - 1)
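As a minimal worked example (ours, not the article's), the free abelian group Z2 admits such a series:

```latex
% Z^2 is polycyclic: a subnormal series with two infinite cyclic factors.
\[
  G_0 = \{0\} \;\trianglelefteq\; G_1 = \mathbb{Z} \times \{0\}
        \;\trianglelefteq\; G_2 = \mathbb{Z}^2,
\]
\[
  G_1 / G_0 \cong \mathbb{Z}, \qquad G_2 / G_1 \cong \mathbb{Z}.
\]
% Both factors are infinite, so Z^2 is even strongly polycyclic, with
% Hirsch length 2.
```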
A metacyclic group is a polycyclic group with n ≤ 2, or in other words an extension of a cyclic group by a cyclic group.
Examples
Examples of polycyclic groups include finitely generated abelian groups, finitely generated nilpotent groups, and finite solvable groups. Anatoly Maltsev proved that solvable subgroups of the integer general linear group are polycyclic; and later Louis Auslander (1967) and Swan proved the converse, that any polycyclic group is up to isomorphism a group of integer matrices. The holomorph of a polycyclic group is also such a group of integer matrices.
Strongly polycyclic groups
A polycyclic group G is said to be strongly polycyclic if each quotient Gi+1 / Gi is infinite. Any subgroup of a strongly polycyclic group is strongly polycyclic.
Polycyclic-by-finite groups
A virtually polycyclic group is a group that has a polycyclic subgroup of finite index, an example of a virtual property. Such a group necessarily has a normal polycyclic subgroup of finite index, and therefore such groups are also called polycyclic-by-finite groups. Although polycyclic-by-finite groups need not be solvable, they still have many of the finiteness properties of polycyclic groups; for example, they satisfy the maximal condition, and they are finitely presented and residually finite.
In the textbook and some papers, an M-group refers to what is now called a polycyclic-by-finite group, which by Hirsch's theorem can also be expressed as a group which has a finite length subnormal series with each factor a finite group or an infinite cyclic group.
These groups are particularly interesting because they are the only known examples of Noetherian group rings, or group rings of finite injective dimension.
Hirsch length
The Hirsch length or Hirsch number of a polycyclic group G is the number of infinite factors in its subnormal series.
If G is a polycyclic-by-finite group, then the Hirsch length of G is the Hirsch length of a polycyclic normal subgroup H of G, where H has finite index in G. This is independent of choice of subgroup, as all such subgroups will have the same Hirsch length.
See also
Group theory
Supersolvable group
References
Notes
Properties of groups
Solvable groups | Polycyclic group | [
"Mathematics"
] | 684 | [
"Mathematical structures",
"Algebraic structures",
"Properties of groups"
] |
4,001,513 | https://en.wikipedia.org/wiki/Electrometallurgy | Electrometallurgy is a method in metallurgy that uses electrical energy to produce metals by electrolysis. It is usually the last stage in metal production and is therefore preceded by pyrometallurgical or hydrometallurgical operations. The electrolysis can be done on a molten metal oxide (smelt electrolysis) which is used for example to produce aluminium from aluminium oxide via the Hall-Hérault process. Electrolysis can be used as a final refining stage in pyrometallurgical metal production (electrorefining) and it is also used for reduction of a metal from an aqueous metal salt solution produced by hydrometallurgy (electrowinning).
Processes
Electrometallurgy is the field concerned with the processes of metal electrodeposition. There are seven categories of these processes:
Electrolysis
Electrowinning, the extraction of metal from ores
Electrorefining, the purification of metals. Metal powder production by electrodeposition is included in this category, or sometimes under electrowinning, or treated as a separate category, depending on the application.
Electroplating, the deposition of a layer of one metal on another
Electroforming, the manufacture of, usually thin, metal parts through electroplating
Electropolishing, the removal of material from a metallic workpiece
Etching, known industrially as chemical milling
Research trends
Molten Oxide Electrolysis
Molten oxide electrolysis in steelmaking uses electrons as the reducing agent instead of the coke used in a conventional blast furnace. For steel production, this method uses an inert anode (carbon, platinum, iridium or a chromium-based alloy) and places iron ore at the cathode. The electrochemical reaction in a molten oxide cell can reach up to 1600 °C, a temperature that melts both the iron ore and the electrolyte oxide. The molten iron ore then decomposes according to the following reaction.
2 Fe2O3(l) → 4 Fe(l) + 3 O2(g)
The electrolysis produces molten pure iron as the main product and oxygen as a by-product. Because no coke is added to the process, no CO2 gas is produced, so there are no direct greenhouse gas emissions. Moreover, if the electricity to run such cells comes from renewable sources, the process may have zero emissions overall. This technology can also be applied to the production of nickel, chromium, and ferrochromium.
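As a back-of-the-envelope check on the by-product claim (our calculation, assuming the overall reaction 2 Fe2O3 → 4 Fe + 3 O2 given above):

```python
# Oxygen released per tonne of iron in molten oxide electrolysis,
# assuming the overall cell reaction 2 Fe2O3(l) -> 4 Fe(l) + 3 O2(g).

M_FE = 55.845   # molar mass of iron, g/mol
M_O2 = 31.998   # molar mass of O2, g/mol

iron_tonnes = 1.0
mol_fe = iron_tonnes * 1e6 / M_FE        # grams of Fe -> moles of Fe
mol_o2 = mol_fe * 3 / 4                  # 3 mol O2 per 4 mol Fe
o2_tonnes = mol_o2 * M_O2 / 1e6
print(f"{o2_tonnes:.2f} t O2 per t Fe")  # about 0.43 t of oxygen
```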
Massachusetts-based Boston Metal is currently working to scale this technology up to an industrial level.
Direct Decarburization Electrorefining
The purpose of this method is to reduce the carbon content of steel. The process is suited to the secondary steelmaking industry, which recycles steel scrap with widely varying carbon content in its feedstock. It aims to replace the current conventional method, which uses a basic oxygen furnace (BOF) to reduce the carbon content of iron by blowing in oxygen that reacts with the carbon to form CO2.
In electrorefining, decarburization takes place in an electrochemical cell composed of an inert electrode, slag and steel. During the process, the current passing through the cell melts the slag and the steel. Oxygen ions from the slag decompose and oxidize the carbon in the steel to form CO. The decarburizing reaction occurs in three steps, as follows, where (ads) denotes an adsorbed intermediate:
O²⁻ + C ⇌ C(O⁻)(ads) + e⁻
C(O⁻)(ads) ⇌ C(O)(ads) + e⁻
C(O)(ads) ⇌ CO(g)
The total reaction in the cell follows this scheme:
C + ½ SiO2 ⇌ CO(g) + ½ Si(l)
The SiO2 comes from the slag; based on the reaction above, besides producing CO gas, this method also produces pure silicon (depending on the slag). The benefit of this direct decarburization process is that it produces not CO2 but CO, which is not considered a greenhouse gas.
References
Chemical processes
Electrolysis
Metallurgical processes | Electrometallurgy | [
"Chemistry",
"Materials_science"
] | 878 | [
"Metallurgical processes",
"Metallurgy",
"Chemical processes",
"Electrochemistry",
"nan",
"Electrolysis",
"Chemical process engineering"
] |
4,002,131 | https://en.wikipedia.org/wiki/Uranyl%20peroxide | Uranyl peroxide or uranium peroxide hydrate (UO4·nH2O) is a pale-yellow, soluble peroxide of uranium. It is found to be present at one stage of the enriched uranium fuel cycle and in yellowcake prepared via the in situ leaching and resin ion exchange system. This compound, also expressed as UO3·(H2O2)·(H2O), is very similar to uranium trioxide hydrate UO3·nH2O. The dissolution behaviour of both compounds are very sensitive to the hydration state (n can vary between 0 and 4). One main characteristic of uranium peroxide is that it consists of small needles with an average AMAD of about 1.1 μm.
The uranyl minerals studtite, UO4·4H2O, and metastudtite, UO4·2H2O, are the only minerals found to date that contain peroxide. The product is a light yellow powder.
Synthesis
In general, uranyl peroxide can be obtained from a solution of uranium(VI) by adding a peroxide, usually hydrogen peroxide solution. The dihydrate is obtained from a boiling solution of uranyl nitrate with the addition of hydrogen peroxide and drying of the precipitate, while the trihydrate is precipitated from a solution of ammonium uranyl oxalate.
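A plausible overall equation for the dihydrate precipitation described above (our balanced reconstruction; the article itself gives no equation):

```latex
% Precipitation of uranyl peroxide dihydrate from uranyl nitrate with
% hydrogen peroxide (balanced; hydration state n = 2 assumed).
\[
  \mathrm{UO_2(NO_3)_2 + H_2O_2 + 2\,H_2O \;\longrightarrow\;
          UO_4\!\cdot\!2H_2O\downarrow \;+\; 2\,HNO_3}
\]
```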
Crystal structure
The unit cell consists of uranyl cations coordinated to two water molecules and two peroxide anions. The latter are μ2-coordinated to the cation—that is, end-on. Additional water molecules are bound in the crystal by hydrogen bonding. Only the tetrahydrate has been characterized by X-ray crystallography, but density functional theory offers a good approximation to the dihydrate.
Polyperoxouranylate allotrope
When uranyl nitrate is dissolved in an aqueous solution of hydrogen peroxide and an alkali metal hydroxide, it forms cage clusters akin to polyoxometalates or fullerenes. Syntheses also typically add organic materials, such as amines, to serve as templates, akin to zeolites.
Applications
Radiolysis of uranium salts dissolved in water produces peroxides; uranyl peroxide has been studied as a possible end component of spent radioactive waste.
References
Some Chemistry of Uranium
Uranyl compounds
Peroxides
Nuclear materials
Oxidizing agents | Uranyl peroxide | [
"Physics",
"Chemistry"
] | 511 | [
"Redox",
"Oxidizing agents",
"Materials",
"Nuclear materials",
"Matter"
] |
4,003,614 | https://en.wikipedia.org/wiki/Multiscale%20modeling | Multiscale modeling or multiscale mathematics is the field of solving problems that have important features at multiple scales of time and/or space. Important problems include multiscale modeling of fluids, solids, polymers, proteins, nucleic acids as well as various physical and chemical phenomena (like adsorption, chemical reactions, diffusion).
An example of such problems involve the Navier–Stokes equations for incompressible fluid flow.
In a wide variety of applications, the stress tensor is given as a linear function of the velocity gradient. Such a choice for the stress tensor has been proven to be sufficient for describing the dynamics of a broad range of fluids. However, its use for more complex fluids such as polymers is dubious. In such a case, it may be necessary to use multiscale modeling to accurately model the system, such that the stress tensor can be extracted without requiring the computational cost of a full microscale simulation.
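For reference, the standard Newtonian closure can be written explicitly; the following snippet is our illustration (the symbols μ for dynamic viscosity and u for the velocity field are assumptions, since the original formulas were lost in extraction):

```latex
% Newtonian (linear) closure for the viscous stress of an incompressible
% fluid: the stress is linear in the velocity gradient.
\[
  \tau \;=\; \mu \left( \nabla u + (\nabla u)^{\mathsf{T}} \right)
\]
% Multiscale modeling replaces this constitutive choice, for complex fluids
% such as polymers, by a stress extracted from microscale simulation.
```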
History
Horstemeyer 2009, 2012 presented a historical review of the different disciplines (mathematics, physics, and materials science) for solid materials related to multiscale materials modeling.
The aforementioned DOE multiscale modeling efforts were hierarchical in nature. The first concurrent multiscale model occurred when Michael Ortiz (Caltech) took the molecular dynamics code Dynamo, developed by Mike Baskes at Sandia National Labs, and with his students embedded it into a finite element code for the first time. Martin Karplus, Michael Levitt, and Arieh Warshel received the Nobel Prize in Chemistry in 2013 for the development of a multiscale model method using both classical and quantum mechanical theory which were used to model large complex chemical systems and reactions.
Areas of research
In physics and chemistry, multiscale modeling is aimed at the calculation of material properties or system behavior on one level using information or models from different levels. On each level, particular approaches are used for the description of a system. The following levels are usually distinguished: level of quantum mechanical models (information about electrons is included), level of molecular dynamics models (information about individual atoms is included), coarse-grained models (information about atoms and/or groups of atoms is included), mesoscale or nano-level (information about large groups of atoms and/or molecule positions is included), level of continuum models, level of device models. Each level addresses a phenomenon over a specific window of length and time. Multiscale modeling is particularly important in integrated computational materials engineering since it allows the prediction of material properties or system behavior based on knowledge of the process-structure-property relationships.
In operations research, multiscale modeling addresses challenges for decision-makers that come from multiscale phenomena across organizational, temporal, and spatial scales. This theory fuses decision theory and multiscale mathematics and is referred to as multiscale decision-making. Multiscale decision-making draws upon the analogies between physical systems and complex man-made systems.
In meteorology, multiscale modeling is the modeling of the interaction between weather systems of different spatial and temporal scales that produces the weather we experience. The most challenging task is to model the way weather systems interact, as models cannot see beyond the limit of the model grid size. Running an atmospheric model with a grid fine enough to resolve every possible cloud structure for the whole globe is computationally very expensive. On the other hand, a computationally feasible global climate model (GCM), with a much coarser grid, cannot see the smaller cloud systems. A balance point is therefore needed so that the model remains computationally feasible while not losing much information, with the help of rational approximations, a process called parametrization.
Besides the many specific applications, one area of research is methods for the accurate and efficient solution of multiscale modeling problems. The primary areas of mathematical and algorithmic development include:
Analytical modeling
Center manifold and slow manifold theory
Continuum modeling
Discrete modeling
Network-based modeling
Statistical modeling
See also
Computational mechanics
Equation-free modeling
Integrated computational materials engineering
Multilevel model
Multiphysics
Multiresolution analysis
Space mapping
References
Further reading
External links
Mississippi State University ICME Cyberinfrastructure
Multiscale Modeling of Flow
Multiscale Modeling of Materials (MMM-Tools) Project at Dr. Martin Steinhauser's group at the Fraunhofer-Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, at Freiburg, Germany. Since 2013, M.O. Steinhauser is associated at the University of Basel, Switzerland.
Multiscale Modeling Group: Institute of Physical & Theoretical Chemistry, University of Regensburg, Regensburg, Germany
Multiscale Materials Modeling: Fourth International Conference, Tallahassee, FL, USA
Multiscale Modeling Tools for Protein Structure Prediction and Protein Folding Simulations, Warsaw, Poland
Multiscale modeling for Materials Engineering: Set-up of quantitative micromechanical models
Multiscale Material Modelling on High Performance Computer Architectures, MMM@HPC project
Modeling Materials: Continuum, Atomistic and Multiscale Techniques (E. B. Tadmor and R. E. Miller, Cambridge University Press, 2011)
An Introduction to Computational Multiphysics II: Theoretical Background Part I Harvard University video series
SIAM Journal of Multiscale Modeling and Simulation
International Journal for Multiscale Computational Engineering
Department of Energy Summer School on Multiscale Mathematics and High Performance Computing
Multiscale Conceptual Model Figures for Biological and Environmental Science
Computational physics
Mathematical modeling | Multiscale modeling | [
"Physics",
"Mathematics"
] | 1,121 | [
"Applied mathematics",
"Mathematical modeling",
"Computational physics"
] |
4,003,760 | https://en.wikipedia.org/wiki/Ammonium%20diuranate | Ammonium diuranate or (ADU) ((NH4)2U2O7), is one of the intermediate chemical forms of uranium produced during yellowcake production. The name "yellowcake" originally given to this bright yellow salt, now applies to mixtures of uranium oxides which are actually hardly ever yellow. It also is an intermediate in mixed-oxide (MOX) fuel fabrication. Although it is usually called "ammonium diuranate" as though it has a "diuranate" ion , this is not necessarily the case. It can also be called diammonium diuranium heptaoxide. The structure was theorized to be similar to that of uranium trioxide dihydrate. Recent literature has shown that the structure more closely resembles the mineral metaschoepite, the partially dehydrated form of schoepite.
It is precipitated by adding aqueous ammonium hydroxide after uranium extraction by tertiary amines in kerosene. This precipitate is then thickened and centrifuged before being calcined to uranium oxide. Canadian practice favours the production of uranium oxide from ammonium diuranate, rather than from uranyl nitrate as is the case elsewhere.
Ammonium diuranate was once used to produce colored glazes in ceramics. However, when fired it decomposes to uranium oxide, so the uranate was used only as a lower-cost alternative to fully purified uranium oxide.
References
Uranates
Nuclear materials
Ammonium compounds | Ammonium diuranate | [
"Physics",
"Chemistry"
] | 322 | [
"Salts",
"Materials",
"Nuclear materials",
"Ammonium compounds",
"Matter"
] |
43,103,430 | https://en.wikipedia.org/wiki/Principalization%20%28algebra%29 | In the mathematical field of algebraic number theory, the concept of principalization refers to a situation when, given an extension of algebraic number fields, some ideal (or more generally fractional ideal) of the ring of integers of the smaller field isn't principal but its extension to the ring of integers of the larger field is. Its study has origins in the work of Ernst Kummer on ideal numbers from the 1840s, who in particular proved that for every algebraic number field there exists an extension number field such that all ideals of the ring of integers of the base field (which can always be generated by at most two elements) become principal when extended to the larger field. In 1897 David Hilbert conjectured that the maximal abelian unramified extension of the base field, which was later called the Hilbert class field of the given base field, is such an extension. This conjecture, now known as principal ideal theorem, was proved by Philipp Furtwängler in 1930 after it had been translated from number theory to group theory by Emil Artin in 1929, who made use of his general reciprocity law to establish the reformulation. Since this long desired proof was achieved by means of Artin transfers of non-abelian groups with derived length two, several investigators tried to exploit the theory of such groups further to obtain additional information on the principalization in intermediate fields between the base field and its Hilbert class field. The first contributions in this direction are due to Arnold Scholz and Olga Taussky in 1934, who coined the synonym capitulation for principalization. Another independent access to the principalization problem via Galois cohomology of unit groups is also due to Hilbert and goes back to the chapter on cyclic extensions of number fields of prime degree in his number report, which culminates in the famous Theorem 94.
Extension of classes
Let be an algebraic number field, called the base field, and let be a field extension of finite degree. Let and denote the ring of integers, the group of nonzero fractional ideals and its subgroup of principal fractional ideals of the fields respectively. Then the extension map of fractional ideals
is an injective group homomorphism. Since , this map induces the extension homomorphism of ideal class groups
If there exists a non-principal ideal (i.e. ) whose extension ideal in is principal (i.e. for some and ), then we speak about principalization or capitulation in . In this case, the ideal and its class are said to principalize or capitulate in . This phenomenon is described most conveniently by the principalization kernel or capitulation kernel, that is the kernel of the class extension homomorphism.
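Since the displayed formulas were lost in extraction, the following sketch restates the two maps in standard notation (the symbols are our reconstruction, not verbatim from the article):

```latex
% Extension of ideals and the induced class extension homomorphism for a
% finite extension L/K with rings of integers O_K \subseteq O_L.
\[
  \iota_{L/K} : \mathcal{I}_K \to \mathcal{I}_L, \quad
  \mathfrak{a} \mapsto \mathfrak{a}\mathcal{O}_L,
  \qquad
  j_{L/K} : \mathcal{I}_K/\mathcal{P}_K \to \mathcal{I}_L/\mathcal{P}_L, \quad
  \mathfrak{a}\mathcal{P}_K \mapsto (\mathfrak{a}\mathcal{O}_L)\mathcal{P}_L.
\]
% An ideal class in the kernel of j_{L/K} is said to principalize
% (capitulate) in L.
```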
More generally, let be a modulus in , where is a nonzero ideal in and is a formal product of pair-wise different real infinite primes of . Then
is the ray modulo , where is the group of nonzero fractional ideals in relatively prime to and the condition means and for every real infinite prime dividing Let then the group is called a generalized ideal class group for If and are generalized ideal class groups such that for every and for every , then induces the extension homomorphism of generalized ideal class groups:
Galois extensions of number fields
Let be a Galois extension of algebraic number fields with Galois group and let denote the set of prime ideals of the fields respectively. Suppose that is a prime ideal of which does not divide the relative discriminant , and is therefore unramified in , and let be a prime ideal of lying over .
Frobenius automorphism
There exists a unique automorphism such that for all algebraic integers , where is the norm of . The map is called the Frobenius automorphism of . It generates the decomposition group of and its order is equal to the inertia degree of over . (If is ramified then is only defined and generates modulo the inertia subgroup
whose order is the ramification index of over ). Any other prime ideal of dividing is of the form with some . Its Frobenius automorphism is given by
since
for all , and thus its decomposition group is conjugate to . In this general situation, the Artin symbol is a mapping
which associates an entire conjugacy class of automorphisms to any unramified prime ideal , and we have if and only if splits completely in .
Factorization of prime ideals
When is an intermediate field with relative Galois group , more precise statements about the homomorphisms and are possible because we can construct the factorization of (where is unramified in as above) in from its factorization in as follows. Prime ideals in lying over are in -equivariant bijection with the -set of left cosets , where corresponds to the coset . For every prime ideal in lying over the Galois group acts transitively on the set of prime ideals in lying over , thus such ideals are in bijection with the orbits of the action of on by left multiplication. Such orbits are in turn in bijection with the double cosets . Let be a complete system of representatives of these double cosets, thus . Furthermore, let denote the orbit of the coset in the action of on the set of left cosets by left multiplication and let denote the orbit of the coset in the action of on the set of right cosets by right multiplication. Then factorizes in as , where for are the prime ideals lying over in satisfying with the product running over any system of representatives of .
We have
Let be the decomposition group of over . Then is the stabilizer of in the action of on , so by the orbit-stabilizer theorem we have . On the other hand, it's , which together gives
In other words, the inertia degree is equal to the size of the orbit of the coset in the action of on the set of right cosets by right multiplication. By taking inverses, this is equal to the size of the orbit of the coset in the action of on the set of left cosets by left multiplication. Also the prime ideals in lying over correspond to the orbits of this action.
Consequently, the ideal embedding is given by , and the class extension by
Artin's reciprocity law
Now further assume is an abelian extension, that is, is an abelian group. Then, all conjugate decomposition groups of prime ideals of lying over coincide, thus for every , and the Artin symbol becomes equal to the Frobenius automorphism of any and for all and every .
By class field theory,
the abelian extension uniquely corresponds to an intermediate group between the ray modulo of and , where denotes the relative conductor ( is divisible by the same prime ideals as ). The Artin symbol
which associates the Frobenius automorphism of to each prime ideal of which is unramified in , can be extended by multiplicativity to a surjective homomorphism
with kernel (where means ), called Artin map, which induces isomorphism
of the generalized ideal class group to the Galois group . This explicit isomorphism is called the Artin reciprocity law or general reciprocity law.
Group-theoretic formulation of the problem
This reciprocity law allowed Artin to translate the general principalization problem for number fields based on the following scenario from number theory to group theory. Let be a Galois extension of algebraic number fields with automorphism group . Assume that is an intermediate field with relative group and let be the maximal abelian subextension of respectively within . Then the corresponding relative groups are the commutator subgroups , resp. . By class field theory, there exist intermediate groups and such that the Artin maps establish isomorphisms
Here means and are some moduli divisible by respectively and by all primes dividing respectively.
The ideal extension homomorphism , the induced Artin transfer and these Artin maps are connected by the formula
Since is generated by the prime ideals of which does not divide , it's enough to verify this equality on these generators. Hence suppose that is a prime ideal of which does not divide and let be a prime ideal of lying over . On the one hand, the ideal extension homomorphism maps the ideal of the base field to the extension ideal in the field , and the Artin map of the field maps this product of prime ideals to the product of conjugates of Frobenius automorphisms
where the double coset decomposition and its representatives used here is the same as in the last but one section. On the other hand, the Artin map of the base field maps the ideal to the Frobenius automorphism . The -tuple is a system of representatives of double cosets , which correspond to the orbits of the action of on the set of left cosets by left multiplication, and is equal to the size of the orbit of coset in this action. Hence the induced Artin transfer maps to the product
This product expression was the original form of the Artin transfer homomorphism, corresponding to a decomposition of the permutation representation into disjoint cycles.
Since the kernels of the Artin maps and are and respectively, the previous formula implies that . It follows that there is the class extension homomorphism and that and the induced Artin transfer are connected by the commutative diagram in Figure 1 via the isomorphisms induced by the Artin maps, that is, we have equality of two composita .
Class field tower
The commutative diagram in the previous section, which connects the number theoretic class extension homomorphism with the group theoretic Artin transfer , enabled Furtwängler to prove the principal ideal theorem by specializing to the situation that is the (first) Hilbert class field of , that is the maximal abelian unramified extension of , and is the second Hilbert class field of , that is the maximal metabelian unramified extension of (and maximal abelian unramified extension of ). Then and is the commutator subgroup of . More precisely, Furtwängler showed that generally the Artin transfer from a finite metabelian group to its derived subgroup is a trivial homomorphism. In fact this is true even if isn't metabelian because we can reduce to the metabelian case by replacing with . It also holds for infinite groups provided is finitely generated and . It follows that every ideal of extends to a principal ideal of .
However, the commutative diagram comprises the potential for a lot of more sophisticated applications. In the situation that is a prime number, is the second Hilbert p-class field of , that is the maximal metabelian unramified extension of of degree a power of varies over the intermediate field between and its first Hilbert p-class field , and correspondingly varies over the intermediate groups between and , computation of all principalization kernels and all p-class groups translates to information on the kernels and targets of the Artin transfers and permits the exact specification of the second p-class group of via pattern recognition, and frequently even allows to draw conclusions about the entire p-class field tower of , that is the Galois group of the maximal unramified pro-p extension of .
These ideas are explicit in the paper of 1934 by A. Scholz and O. Taussky already. At these early stages, pattern recognition consisted of specifying the annihilator ideals, or symbolic orders, and the Schreier relations of metabelian p-groups and subsequently using a uniqueness theorem on group extensions by O. Schreier.
Nowadays, we use the p-group generation algorithm of M. F. Newman
and E. A. O'Brien
for constructing descendant trees of p-groups and searching patterns, defined by kernels and targets of Artin transfers, among the vertices of these trees.
Galois cohomology
In the chapter on cyclic extensions of number fields of prime degree of his number report from 1897, D. Hilbert
proves a series of crucial theorems which culminate in Theorem 94, the original germ of class field theory. Today, these theorems can be viewed as the beginning of what is now called Galois cohomology. Hilbert considers a finite relative extension of algebraic number fields with cyclic Galois group generated by an automorphism such that for the relative degree , which is assumed to be an odd prime.
He investigates two endomorphisms of the unit group of the extension field, viewed as a Galois module with respect to the group , briefly a -module. The first endomorphism
is the symbolic exponentiation with the difference , and the second endomorphism
is the algebraic norm mapping, that is the symbolic exponentiation with the trace
In fact, the image of the algebraic norm map is contained in the unit group of the base field and coincides with the usual arithmetic (field) norm as the product of all conjugates. The composita of the endomorphisms satisfy the relations and .
Two important cohomology groups can be defined by means of the kernels and images of these endomorphisms. The zeroth Tate cohomology group of in is given by the quotient consisting of the norm residues of , and the minus first Tate cohomology group of in is given by the quotient of the group of relative units of modulo the subgroup of symbolic powers of units with formal exponent .
In his Theorem 92 Hilbert proves the existence of a relative unit which cannot be expressed as , for any unit , which means that the minus first cohomology group is non-trivial of order divisible by . However, with the aid of a completely similar construction, the minus first cohomology group of the -module , the multiplicative group of the superfield , can be defined, and Hilbert shows its triviality in his famous Theorem 90.
Eventually, Hilbert is in the position to state his celebrated Theorem 94: If is a cyclic extension of number fields of odd prime degree with trivial relative discriminant , which means it's unramified at finite primes, then there exists a non-principal ideal of the base field which becomes principal in the extension field , that is for some . Furthermore, the th power of this non-principal ideal is principal in the base field , in particular , hence the class number of the base field must be divisible by and the extension field can be called a class field of . The proof goes as follows: Theorem 92 says there exists unit , then Theorem 90 ensures the existence of a (necessarily non-unit) such that , i. e., . By multiplying by proper integer if necessary we may assume that is an algebraic integer. The non-unit is generator of an ambiguous principal ideal of , since . However, the underlying ideal of the subfield cannot be principal. Assume to the contrary that for some . Since is unramified, every ambiguous ideal of is a lift of some ideal in , in particular . Hence and thus for some unit . This would imply the contradiction because . On the other hand,
thus is principal in the base field already.
Theorems 92 and 94 don't hold as stated for , with the fields and being a counterexample (in this particular case is the narrow Hilbert class field of ). The reason is Hilbert only considers ramification at finite primes but not at infinite primes (we say that a real infinite prime of ramifies in if there exists non-real extension of this prime to ). This doesn't make a difference when is odd since the extension is then unramified at infinite primes. However he notes that Theorems 92 and 94 hold for provided we further assume that number of fields conjugate to that are real is twice the number of real fields conjugate to . This condition is equivalent to being unramified at infinite primes, so Theorem 94 holds for all primes if we assume that is unramified everywhere.
Theorem 94 implies the simple inequality for the order of the principalization kernel of the extension . However an exact formula for the order of this kernel can be derived for cyclic unramified (including infinite primes) extension (not necessarily of prime degree) by means of the Herbrand quotient of the -module , which is given by
It can be shown that (without calculating the order of either of the cohomology groups). Since the extension is unramified, it's so . With the aid of K. Iwasawa's isomorphism
, specialized to a cyclic extension with periodic cohomology of length , we obtain
This relation increases the lower bound by the factor , the so-called unit norm index.
History
As mentioned in the lead section, several investigators tried to generalize the Hilbert-Artin-Furtwängler principal ideal theorem of 1930 to questions concerning the principalization in intermediate extensions between the base field and its Hilbert class field. On the one hand, they established general theorems on the principalization over arbitrary number fields, such as Ph. Furtwängler 1932,
O. Taussky 1932,
O. Taussky 1970,
and H. Kisilevsky 1970.
On the other hand, they searched for concrete numerical examples of principalization in unramified cyclic extensions of particular kinds of base fields.
Quadratic fields
The principalization of -classes of imaginary quadratic fields with -class rank two in unramified cyclic cubic extensions was calculated manually for three discriminants by A. Scholz and O. Taussky
in 1934. Since these calculations require composition of binary quadratic forms and explicit knowledge of fundamental systems of units in cubic number fields, which was a very difficult task in 1934, the investigations stayed at rest for half a century until F.-P. Heider and B. Schmithals
employed the CDC Cyber 76 computer at the University of Cologne to extend the information concerning principalization to the range containing relevant discriminants in 1982,
thereby providing the first analysis of five real quadratic fields.
Two years later, J. R. Brink
computed the principalization types of complex quadratic fields.
Currently, the most extensive computation of principalization data for all quadratic fields with discriminants and -class group of type is due to D. C. Mayer in 2010,
who used his recently discovered connection between transfer kernels and transfer targets for the design of a new principalization algorithm.
The -principalization in unramified quadratic extensions of imaginary quadratic fields with -class group of type was studied by H. Kisilevsky in 1976.
Similar investigations of real quadratic fields were carried out by E. Benjamin and C. Snyder in 1995.
Cubic fields
The -principalization in unramified quadratic extensions of cyclic cubic fields with -class group of type was investigated by A. Derhem in 1988.
Seven years later, M. Ayadi studied the -principalization in unramified cyclic cubic extensions of cyclic cubic fields , , with -class group of type and conductor divisible by two or three primes.
Sextic fields
In 1992, M. C. Ismaili investigated the -principalization in unramified cyclic cubic extensions of the normal closure of pure cubic fields , in the case that this sextic number field , , has a -class group of type .
Quartic fields
In 1993, A. Azizi studied the -principalization in unramified quadratic extensions of biquadratic fields of Dirichlet type with -class group of type . Most recently, in 2014, A. Zekhnini extended the investigations to Dirichlet fields with -class group of type , thus providing the first examples of -principalization in the two layers of unramified quadratic and biquadratic extensions of quartic fields with class groups of -rank three.
See also
Both, the algebraic, group theoretic access to the principalization problem by Hilbert-Artin-Furtwängler and the arithmetic, cohomological access by Hilbert-Herbrand-Iwasawa are also presented in detail in the two bibles of capitulation by J.-F. Jaulent 1988 and by K. Miyake 1989.
Secondary sources
References
Group theory
Class field theory | Principalization (algebra) | [
"Mathematics"
] | 4,154 | [
"Group theory",
"Fields of abstract algebra"
] |
21,743,174 | https://en.wikipedia.org/wiki/Double%20scaling%20limit | In theoretical physics, a double scaling limit is a limit in which the coupling constant is sent to zero while another quantity is sent to zero or infinity at the same moment.
The adjective "double" is a kind of misnomer because the procedure represents an ordinary scaling. However, the adjective is meant to emphasize that two parameters are simultaneously approaching singular values.
The double scaling limit is often applied to matrix models, string theory, and other theories to obtain their simplified versions.
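As a schematic illustration (a standard matrix-model form; the exponent and notation are ours, not the stub's): the matrix size N and the coupling g are tuned together so that one combination stays finite.

```latex
% Schematic double scaling limit of a one-matrix model: N -> infinity while
% g approaches a critical coupling g_c, with a combined quantity held fixed
% (the exponent gamma depends on the model).
\[
  N \to \infty, \qquad g \to g_c, \qquad
  N\,(g - g_c)^{\gamma} = \text{const}
\]
```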
Theoretical physics | Double scaling limit | [
"Physics"
] | 98 | [
"Theoretical physics",
"Theoretical physics stubs"
] |
21,747,203 | https://en.wikipedia.org/wiki/German%20Renewable%20Energy%20Sources%20Act | The Renewable Energy Sources Act or EEG () is a series of German laws that originally provided a feed-in tariff (FIT) scheme to encourage the generation of renewable electricity. The specified the transition to an auction system for most technologies which has been finished with the current version EEG 2017.
The EEG first came into force on 1 April 2000 and has been modified several times since. The original legislation guaranteed a grid connection, preferential dispatch, and a government-set feed-in tariff for 20 years, dependent on the technology and size of project. The scheme was funded by a surcharge on electricity consumers, with electricity-intensive manufacturers and the railways later being required to contribute as little as 0.05¢/kWh. For 2017, the unabated EEG surcharge is . In a study in 2011, the average retail price of electricity in Germany, among the highest in the world, stood at around .
The EEG was preceded by the Electricity Feed-in Act (1991) which entered into force on 1 January 1991. This law initiated the first green electricity feed-in tariff scheme in the world. The original EEG is credited with a rapid uptake of wind power and photovoltaics (PV) and is regarded nationally and internationally as an innovative and successful energy policy measure. The act also covers biomass (including cogeneration), hydroelectricity, and geothermal energy.
A significant revision to the EEG came into effect on 1 August 2014. The prescribed feed-in tariffs are to be phased out for most technologies in the near future. Specific deployment corridors now stipulate the extent to which renewable electricity is to be expanded in the future, and the funding rates are no longer set by the government but are determined by auction. Plant operators market their production directly and receive a market premium to make up the difference between their bid price and the average monthly spot market price for electricity. The EEG surcharge remains in place to cover this shortfall. This new system was rolled out in stages, starting with ground-mounted photovoltaics in the 2014 law. More legislative revisions for the other branches were introduced with the current EEG on 1 January 2017.
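A minimal sketch of the sliding market premium (our figures, purely illustrative): the premium tops up the average monthly spot price to the operator's awarded bid.

```python
# Sliding market premium under the EEG auction system (invented figures).

awarded_bid_ct_per_kwh = 6.0       # price awarded in the auction (assumption)
monthly_avg_spot_ct_per_kwh = 3.2  # average monthly spot price (assumption)

market_premium = max(0.0, awarded_bid_ct_per_kwh - monthly_avg_spot_ct_per_kwh)
print(f"Market premium: {market_premium:.2f} ct/kWh")  # 2.80 ct/kWh
```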
The current EEG has been criticized for setting the deployment corridors (see table) too low to meet Germany's long-term climate protection goals, particularly given the likely electrification of the transport sector. The government target for the share of renewables in power generation is at least 80% by 2050.
The controversial EEG surcharge (or levy) on consumer power bills was removed, effective 1 July 2022. As a result, the average German household is expected to save around per year. Payment obligations will now be met from proceeds from emissions trading and from the federal budget. Guaranteed tariffs for renewables projects will continue to be offered going forward.
Background
The pioneer EEG (spanning 2001–2014) and its predecessor the Electricity Feed-in Act (1991) (spanning 1991–2001) class as feed-in tariff (FIT) schemes, a policy mechanism designed to accelerate the uptake of renewable energy technologies. The scheme offers long-term contracts to renewable energy producers, based on the cost of generation of the particular technology in question. In addition, a grid connection and preferential dispatch are also guaranteed. The tariffs (Einspeisevergütungen) themselves are funded by a levy or surcharge (EEG-Umlage) on electricity consumers, with electricity-intensive manufacturers being largely exempted. The EEG surcharge is based on the difference between the specified feed-in tariffs paid under the EEG and the sale of the renewable energy at the EEX energy exchange by the grid operators (also known as transmission system operators or TSO). , the TSOs comprise 50Hertz Transmission, Amprion, Tennet TSO, and TransnetBW.
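The surcharge mechanics can be sketched as follows (all figures invented for illustration; the real calculation also involves forecasts, a liquidity reserve, and privileged-consumer rules):

```python
# EEG surcharge per kWh: the shortfall between feed-in tariff payments and
# spot market proceeds, spread over the consumption that pays the full levy.

tariff_payments_eur = 24e9      # feed-in tariff payments (assumption)
market_proceeds_eur = 4e9       # TSO sales on the EEX spot market (assumption)
levied_consumption_kwh = 350e9  # non-privileged consumption (assumption)

shortfall_eur = tariff_payments_eur - market_proceeds_eur
surcharge_ct_per_kwh = shortfall_eur / levied_consumption_kwh * 100
print(f"EEG surcharge: {surcharge_ct_per_kwh:.2f} ct/kWh")  # 5.71 ct/kWh
```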
Amendments to the original EEG added the concept of a market premium in 2012, and the use of deployment corridors and auctions to set the levels of uptake and remuneration, respectively, in 2014.
The EEG has generally been regarded as a success. The EEG (2000) led to the particularly rapid uptake of two renewable energy technologies: wind power and photovoltaics. The high growth of photovoltaics in Germany is set against its relatively poor solar resource. As the US NREL observed:
The share of electricity from renewable energy sources has risen dramatically since the introduction of the EEG in 2000. The average annual growth rate is around 9 billion kWh and almost all of this increase is due to electricity generation that qualifies for EEG payments. The EEG is also responsible for 88.3 Mt CO2 eq of avoided emissions in 2014, thus making a significant contribution to Germany's climate protection targets. The following table summarizes the remarkable uptake of renewables and in particular photovoltaics:
Under the legislation, hydropower includes "wave, tidal, salinity gradient and marine current energy". The use of biomass for electricity generation has also grown as a result of the EEG. Biomass includes: "biogas, biomethane, landfill gas and sewage treatment gas and from the biologically degradable part of waste from households and industry". Mine gas is in a separate category.
Germany's national energy policy is set out in the government's Energy Concept released on 28 September 2010. On 6 June 2011, following Fukushima, the government removed the use of nuclear power as a bridging technology and reintroduced a nuclear phase-out. Boosting renewable electricity generation is an essential part of national policy (see table).
The EEG is also a key element in the implementation of EU Directive 2009/28/EC on the promotion of the use of energy from renewable sources. This directive requires Germany to produce 18% of its gross final energy consumption (including heat and transport) from renewable energy sources by 2020. In this endeavour, the EEG is complemented by the Renewable Energies Heat Act (Erneuerbare-Energien-Wärmegesetz or EEWärmeG). A chart giving an overview of German energy legislation in 2016 is available.
Legislation
The first discussions on feed-in tariffs in the German parliament began in the 1980s. The Association for the Promotion of Solar Power (SFV), Eurosolar, and the Federal Association of German Hydroelectric Power Plants (BDW) floated early concepts for a FIT scheme. The Economics Ministry and the CDU/CSU and FDP parties opposed non-market measures and argued for voluntary renewables quotas instead. In the late 1980s, CDU/CSU and Green politicians drafted a feed-in tariff bill and sought parliamentary and external support. The newly formed Environment Ministry backed the proposal. The incumbent electricity producers did not devote much effort to countering the bill because they believed its effects would be minimal and their lobbying effort was preoccupied with the takeover of the East German electricity system following German reunification in 1990. The bill became the Electricity Feed-in Act (1991).
Prior to the Electricity Feed-in Act, operators of small power plants could only obtain access to the grid at the behest of the grid owners and were sometimes refused entirely. Remuneration was based on the avoided costs faced by the energy utilities, yielding low rates and unattractive investment conditions. Government support for renewable electricity before the act was primarily through R&D programs administered by the Federal Ministry for Research and Technology (BMFT).
Electricity Feed-in Act (1991)
Germany first began promoting renewable electricity using feed-in tariffs with the Electricity Feed-in Act (Stromeinspeisungsgesetz or StrEG). The long title is the law on feeding electricity from renewable energy sources into the public grid. The law entered into force on 1 January 1991. This legislation was the first green electricity feed-in tariff scheme in the world. The law obliged grid companies to connect all renewable power plants, to grant them priority dispatch, and pay them a guaranteed feed-in tariff over 20 years.
While the Electricity Feed-in Act did much to promote wind power, the installed capacity of photovoltaic installations remained low (see table). The remuneration for photovoltaics was simply too little in most settings. Low-interest loans were then offered under additional government programs.
Beginning in 1998, the Electricity Feed-in Act was challenged under European Union anti-subsidy rules by PreussenElektra (an E.ON predecessor). The European Court of Justice (ECJ) found that the arrangements did not constitute state aid. The court concluded:
The Electricity Feed-in Act suffered from structural flaws. First, the coupling of feed-in tariffs to the electricity price proved too volatile to ensure investment security. Second, the distribution of burdens was uneven, with grid operators in high-wind regions having to pay out more. In light of this latter concern, the act was amended in 1998 to introduce, among other things, a double 5% cap on feed-in purchases. This ceiling slowed uptake in some regions.
The Electricity Feed-in Act was enacted by a CDU/CSU/FDP coalition government.
Renewable Energy Sources Act (2000)
The Electricity Feed-in Act was replaced by the Renewable Energy Sources Act (2000), also known as the EEG (2000), which came into force on 1 April 2000. The legislation is available in English. The long title is an act on granting priority to renewable energy sources. The three principles of the act are:
Investment protection through guaranteed feed-in tariffs and connection requirement. Every kilowatt-hour generated from a renewable electricity facility receives a confirmed technology-specific feed-in tariff for 20 years. Grid operators are required to preferentially dispatch this electricity over electricity from conventional sources like nuclear power, coal, and gas. As a result, small and medium enterprises were given new access to the electricity system, along with energy cooperatives (Genossenschaft), farmers, and households.
No charge to German public finances. The remuneration payments are not considered public subsidies since they are not derived from taxation but rather through an EEG surcharge on electricity consumers. In 2015, the aggregate EEG surcharge totalled €21.8 billion and the EEG surcharge itself was 6.17¢/kWh. The EEG surcharge can be substantially reduced for electricity-intensive industries under the 'special equalization scheme' (Besondere Ausgleichsregelung), introduced in a 2003 amendment.
Innovation by decreasing feed-in-tariffs. The feed-in tariffs decrease at regular intervals to exert a downwards cost pressure on plant operators and technology manufacturers. This decrease, known as a 'degression', applies to new installations. It is expected that technologies become more cost efficient with time and the legislation captures this view:
Unlike the preceding Electricity Feed-in Act, feed-in tariffs were now specified in absolute terms and no longer tied to the prevailing electricity price. The tariffs also differentiated between scale (larger plants received less) and electricity yield (wind turbines in low-wind areas received more). The new tariffs were based on cost-recovery plus profit and increased substantially. For instance, photovoltaic remuneration rose from 8.5¢/kWh to a maximum of 51¢/kWh. Offshore wind, geothermal energy, and mine gas were included for the first time. The concept of an annual reduction or 'degression' was introduced, with annual degression rates varying between 1% for biomass and 5% for photovoltaics. Photovoltaic installations were capped at 350 MWp to control costs (later raised to 1,000 MWp in 2002 and removed in 2004).
The new act introduced a nationwide compensation scheme with the aim of spreading the remuneration burden on grid operators across all electricity utilities. This included a new EEG surcharge (EEG-Umlage) to fund the feed-in remunerations. The previous double-5% cap was duly removed.
The new act also introduced the 100,000 roofs program (100.000-Dächer-Programm). This ran until 2003 and offered low-interest loans for photovoltaic installations below 300 MWp. It proved highly successful in combination with the FIT scheme and led to a rapid increase in photovoltaic capacity.
The first EEG amendment, effective from 16 July 2003, introduced the 'special equalisation scheme' (Besondere Ausgleichsregelung), designed to unburden electricity-intensive industries from the rising EEG surcharge. To be eligible, companies had to fulfil the following criteria: electricity consumption of more than 100 GWh/a, electricity expenses of more than 20% of gross value added, and a considerable impairment of competitiveness. Exempted firms pay only 0.05¢/kWh. As a result, non-privileged consumers faced a higher EEG surcharge. Arbitration on eligibility was by the Federal Office of Economics and Export Control (Bundesamt für Wirtschaft und Ausfuhrkontrolle).
The EEG was built on experience gained under the Electricity Feed-in Act. Without the prior act, the EEG would not have been as sophisticated or as far reaching. Notwithstanding, the Economics Ministry remained hostile to the concept of feed-in tariffs and refused to help with legal drafting.
An SPD/Greens coalition government, elected in 1998, paved the way for the reform of the Electricity Feed-in Act to give the EEG (2000).
PV Interim Act (2003)
The PV Interim Act (2003) raised photovoltaic tariffs from 1 January 2004, in particular for small rooftop installations, to compensate for the ending of low-interest loans under the expiring 100,000 roofs program. The limit on free-standing photovoltaic systems exceeding 100 kWp and the 1,000 MWp cap on photovoltaic installations in total were both removed.
Renewable Energy Sources Act (2004)
An amended version of the EEG came into force on 1 August 2004. While the basic framework remained unchanged, this act introduced a substantially modified and differentiated tariff structure, to better match the economic viabilities of the technologies concerned. Tariffs for biomass, photovoltaics, and geothermal energy were increased. Detailed measures were introduced to deal with market complexities, windfall profits, and the incentives for innovation and cost reduction. Eligible projects may no longer degrade ecologically sensitive areas. Exemptions for industry from the EEG surcharge under the special equalization scheme were extended considerably. The minimum electricity consumption requirement was reduced to 10 GWh/a, the share of electricity costs relative to gross value added was reduced to 15%, and the impairment of competitiveness criterion was removed altogether. Railways were now automatically exempt, being regarded as an environmentally friendly form of transport.
Renewable targets were now defined in the act for the first time: 12.5% for the share of renewable energy in gross final electricity consumption by 2010 and at least 20% by 2020.
Thus the EEG (2004) resulted in significantly better conditions for photovoltaics, biomass (including small farm systems and new technologies), offshore wind, and geothermal energy, while onshore wind and small hydroelectric plants largely retained their former standing. The new special equalization scheme led to wider benefits for industry. Only about 40 companies qualified under the previous rules, mostly from the chemical, steel, and metals industries. That number climbed to between 120 and 350 with the new rules.
The European Union Emission Trading Scheme (EU ETS) entered into effect on 1 January 2005. Many industry lobbyists argued that emissions trading obviated the need for a renewable electricity feed-in tariff scheme and that the EEG should therefore be scrapped. In December 2005 the European Commission released a report preferring feed-in tariffs for national renewable electricity support.
The 2004 legislation was overseen by an SPD/Greens coalition government.
Renewable Energy Sources Act (2009)
The 2009 amendments were undertaken alongside a boom in renewable electricity uptake. In 2009, renewables accounted for 16.3% of total electricity generation, up from 9.3% in 2004. Over the same period, the EEG surcharge climbed from 0.54¢/kWh to 1.32¢/kWh. For further context, the European Union climate and energy package, approved on 17 December 2008, contained a year 2020 national target for Germany of 18% renewable energy in its total energy consumption.
The 2009 amendments yielded improvements for the entire range of renewables, increased the renewables target considerably, introduced new sustainability criteria for bioenergy, and extended industry privileges. Flexible degression rates were also introduced, which can now be adjusted without reference to the Bundestag. The legislation came into force on 1 January 2009.
More specifically, the photovoltaic tariffs were reduced somewhat, but not enough to affect uptake. The degression for PV was tightened from 5% to 8–10%, depending on the size of installation. A new 'self-consumption incentive' granted a fixed tariff of 25.01¢/kWh for electricity consumed by a PV operator within their own house. A 'flexible degression cap' was introduced, under which the degression rate could be adjusted to keep the uptake of photovoltaics within a specified corridor.

The support for onshore wind improved. The initial tariff was raised, the repowering bonus (Repoweringbonus, paid when old turbines are replaced by new) was increased, and an additional system service bonus was granted for specified technical contributions (Systemdienstleistungen or SDL), including the ability to maintain voltage if the transmission grid fails. The tariff for offshore wind was raised substantially. An additional 'early starter bonus' was offered for offshore wind farms entering operation before 2015. In parallel to the EEG, a separate loan program of €5 billion was established, to be administered by the state-owned KfW bank, with the goal of reaching 25 GW installed capacity for wind by 2030.

Support of biomass was also increased, with special bonuses for a number of different biomass types. Biomass must also comply with specified ecological requirements to be eligible, these requirements being contained in a separate 'sustainability ordinance' (Nachhaltigkeitsverordnung or BioSt-NachV). The hydroelectricity tariffs were raised considerably, particularly for micro and small power plants. The tariffs for geothermal energy were raised considerably too, as was the cogeneration bonus. An additional 'early starter bonus' was introduced for geothermal projects put into operation before 2016.

A 'green power privilege' (Grünstromprivileg) was introduced, which exempted electricity suppliers with a minimum quota of renewables from the EEG surcharge under certain circumstances. New measures allowed grid operators to temporarily limit wind turbine output in times of network congestion, with compensation payable to the plant owner for lost remuneration.
The renewable targets in the new law were increased to at least 35% (previously 20%) of total electricity production by 2020, 50% by 2030, 65% by 2040, and 80% by 2050.
The 2009 legislation was overseen by a CDU/CSU/SPD grand coalition government.
The government launched its national Energy Concept in September 2010. This represents a significant milestone in the development of energy policy in Germany. On 6June 2011, following Fukushima, the government removed the use of nuclear power as a bridging technology as part of their policy.
PV Act (2010)
It was becoming clear that action on the photovoltaic remuneration was necessary. The growth in photovoltaics had exceeded all expectations. In 2009 alone, MWp of capacity was installed. As a result, the support costs had skyrocketed.
The government responded with the PV Act (2010), which entered into force retrospectively with effect from 1 July 2010. The legislation introduced a dramatic reduction in photovoltaic tariffs, cutting them by between 8% and 13% depending on the installation type, followed by a second cut of 3%. The deployment corridor was doubled to between 2,500 and 3,500 MWp, along with tighter growth-dependent degression rates of 1–12%, in addition to the ordinary degression of 9%. The self-consumption incentive was significantly raised to around 8¢/kWh and eligibility extended to systems up to 500 kWp. The feed-in rate itself was dependent on the system size and the proportion of demand that was consumed on-site. Free-standing systems were excluded from using agricultural land.
PV Interim Act (2011)
The PV Interim Act (2011) introduced the possibility of further downward adjustments for the photovoltaic tariffs during the year. If the installed capacity during the first months of the year exceeded the equivalent of 3,500 MWp per year, feed-in tariffs would be lowered by 1 July 2011 for rooftop systems and 1 September 2011 for free-standing systems. It also modified the flexible cap to better control the growth of photovoltaics.
Under the version of the EEG (2009) in force at the time, no further adjustment to the feed-in tariffs occurred in 2011. This is because the installed capacity between 28 February 2011 and 1 June 2011 was less than 875 MWp (which, multiplied by 4, is below the 3,500 MWp threshold).
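To make the annualization test above concrete, here is a tiny sketch, with a hypothetical helper name, of a check that scales a short observation window up to a yearly figure and compares it with the 3,500 MWp trigger.

```python
# Sketch of the in-year adjustment test described above (hypothetical helper).

def tariff_cut_triggered(capacity_in_window_mwp, months_in_window=3,
                         annual_threshold_mwp=3_500):
    """Annualize the additions observed in a short window and compare."""
    annualized = capacity_in_window_mwp * (12 / months_in_window)
    return annualized > annual_threshold_mwp

# 875 MWp over three months annualizes to exactly 3,500 MWp, which does not
# exceed the threshold, so no mid-year cut occurred in 2011.
print(tariff_cut_triggered(875))  # False
```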
Renewable Energy Sources Act (2012)
The act was again modified and came into force on 1 January 2012. The new EEG sought to advance the dynamic expansion of renewable electricity generation, control the rising costs associated with the scheme, and enhance market and grid integration, while adhering to the principles of a feed-in system. The revised system includes a market premium scheme; the market premium was intended to prepare renewables for the market and to eventually lower their dependence on explicit policy measures.
The rising shares of variable renewable generation had led to concerns about the ability of the electricity system to cope. The new act included measures for the grid integration of photovoltaic systems. Grid operators could now limit the feed-in of photovoltaics in times of grid overload, with the plant operators receiving compensation for their loss of revenue. A new ordinance required the retrofitting of photovoltaic systems to avoid the 50.2 Hz problem – the risk of widespread blackouts as PV systems simultaneously tripped in the face of frequencies above 50.2 Hz. Free-standing photovoltaic systems on nature conservation areas were excluded from remuneration.

The tariff structure for onshore wind was basically maintained, but the degression was tightened from 1% to 1.5% to incentivize efficiency improvements. The system services bonus for onshore wind was extended and the repowering bonus was improved. Offshore wind gained through improved early starter provisions. The start of the degression was postponed until 2018, but increased from 5% to 7%. Starter tariffs were increased but now last 8 rather than 12 years.

Biomass tariffs were lowered by 10–15% on average, particularly for small systems. The biomass tariff system itself was greatly simplified, with now four size categories and two fuel categories. The degression for biomass was increased from 1% to 2%. The tariffs for hydroelectricity were simplified, the funding period now uniformly 20 years, and the degression rate set at 1%. The tariffs for geothermal energy were raised and the start of the degression postponed until 2018, albeit at an increased rate. Electricity storage facilities were fully exempted from grid charges and are to be supported by a special research program.
Industry privileges under the special equalisation scheme were extended to include more companies and the tariff reductions further improved. The eligibility requirements were lowered from 10 GWh/a to 1 GWh/a and the electricity expenses threshold in terms of gross value added lowered from 15% to 14%. As a result, the number of exempt firms rose from 734 in 2012 to about 2,057 in 2013. The exempted electricity load rose from 84.7 TWh to 97 TWh, a relatively modest increase due to the smaller sizes of the newly exempted firms.
Industrial self-consumption, previously exempted from the EEG surcharge, was now subject to the surcharge if the public grid was used, except in special circumstances. This measure was aimed at preventing abuse through contracting.
The introduction of an optional market premium was designed to support demand-oriented electricity generation. The market premium is the difference between the EEG tariff and the average spot market price. An additional management premium reimbursed administration costs and mitigated market risks. For large biogas plants over 750 kW, the use of direct marketing was made compulsory from 2014 onwards. An additional flexibility premium was introduced for gas storage at biogas facilities. The details of the market premium were to be provided in a subsequent governmental directive, following parliamentary approval.
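A minimal sketch of the premium arithmetic described above, assuming illustrative price levels; the statutory calculation is more detailed (technology-specific reference market values and monthly averaging rules).

```python
# Minimal sketch of the optional market premium. Values are illustrative,
# in euro cents per kWh.

def market_premium(eeg_tariff_ct, avg_spot_price_ct, management_premium_ct=0.0):
    """Premium tops direct-marketing revenue up to the EEG tariff level."""
    return max(eeg_tariff_ct - avg_spot_price_ct, 0.0) + management_premium_ct

# A plant with a 9.0 ct/kWh EEG tariff selling into a month whose average
# spot price was 5.5 ct/kWh receives a 3.5 ct/kWh premium plus management fee.
print(market_premium(9.0, 5.5, management_premium_ct=0.3))  # 3.8
```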
The green power privilege was also modified. Energy suppliers whose portfolio comprised more than 50% EEG-funded renewables had their surcharge reduced by 2¢/kWh; previously they had been fully exempt. In addition, a minimum share of 20% of fluctuating sources, namely wind and PV, was required.
The renewables targets remained unchanged and are identical to those specified in the 2010 Energy Concept.
In 2013, after numerous complaints, the European Commission opened an in-depth state aid investigation into the EEG surcharge exemptions for energy-intensive companies and into the green power privilege. The Commission nonetheless accepted that the underlying feed-in tariff and market premium schemes were compliant. On 10 May 2016 the EU General Court sided with the Commission and determined that the EEG (2012) involved state aid as indicated. (The next EEG (2014) was specifically designed to resolve these difficulties.)
The 2012 legislation was overseen by a CDU/CSU/FDP coalition government.
PV Act (2013)
Despite the cutbacks in photovoltaic support, photovoltaic installations continued to boom. In December 2011 alone, 3,000 MWp were added in an effort to beat the tariff reductions beginning in 2012. Moreover, the EEG surcharge had grown to 3.53¢/kWh for 2011, with the largest component being photovoltaic remuneration. The EEG surcharge was projected to grow considerably, despite the falling tariff structure. For the first time, cost control became the "determining factor" in the political debate over the EEG.
This was despite the fact that the merit order effect had been depressing electricity spot prices. The merit order effect occurs when preferentially dispatched wind and photovoltaic generation displaces more expensive fossil fuel generation from the margin – often gas-fired combined cycle plant – thereby driving down the cleared price. This effect is more pronounced for photovoltaics because their midday peak correlates with the maximum generation requirement on the system. The merit order effect also lowers the revenues for conventional power plants and makes them less economically viable. A 2007 study finds that "in the case of the year 2006, the volume of the merit order effect exceeds the volume of the net support payments for renewable electricity generation which have to be paid by consumers". A 2013 study estimates the merit order effect of both wind and photovoltaic electricity generation for the years 2008–2012: the combined merit order effect of wind and photovoltaics ranges from 0.5¢/kWh in 2010 to more than 1.1¢/kWh in 2012.
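The merit order effect lends itself to a toy calculation. The sketch below clears a hypothetical bid stack cheapest-first and shows how adding zero-marginal-cost PV pushes the expensive gas plant off the margin; all capacities and costs are invented for illustration.

```python
# Toy merit-order dispatch. Plants, costs (EUR/MWh) and demand are invented.

def clearing_price(supply, demand_mw):
    """Dispatch the cheapest offers first; the last offer needed sets the price."""
    dispatched = 0.0
    for cost_eur_mwh, capacity_mw in sorted(supply):
        dispatched += capacity_mw
        if dispatched >= demand_mw:
            return cost_eur_mwh
    raise ValueError("demand exceeds available supply")

conventional = [(5, 10_000), (30, 15_000), (60, 10_000), (90, 10_000)]
demand = 33_000  # MW around midday

print(clearing_price(conventional, demand))                  # 60: gas sets the price
# Add 10 GW of zero-marginal-cost PV: the expensive margin is displaced.
print(clearing_price([(0, 10_000)] + conventional, demand))  # 30: cheaper plant at the margin
```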
The PV Act (2013) came into force retrospectively on 1 April 2012. The tariff cuts were up to 30%, with the tariff cuts scheduled in the EEG (2012) for 1 July 2012 brought forward and tightened from their original 15%. The system size categories were changed, now up to 10, 40, 1000, and kWp. A new category of 10–40 kWp was introduced, while free-standing systems were limited to 10 MWp. The regular standard degression was set to 1% per month, equal to 11.4% per year, replacing the previous six-monthly adjustment. The flexible cap for the deployment corridor remained unchanged at 2,500 to 3,500 MWp per year. If new additions exceed this corridor, the degression rises by 1.0% up to 2.8%. A hard cap on the total photovoltaic capacity was introduced, set at 52 GWp. The self-consumption privilege was removed for new installations, as grid parity had already been reached: the feed-in tariff for roof systems, at 19.5¢/kWh, was now lower than the average electricity price for households, at 23¢/kWh. Changes to the market integration model reduced the eligibility for remuneration of systems between 10 and 1,000 kWp to 90% of their electricity production from 2014 onwards. The residual electricity could either be self-consumed or sold on the electricity market.
Renewable Energy Sources Act (2014)
The EEG (2014) is sometimes known as the EEG 2.0 due to its marked departure from earlier legislation. This revision took effect from 1 August 2014. The act is available in English. The act requires operators of new plants to market their electricity themselves. In turn they receive a market premium from the grid operator to compensate for the difference between the fixed EEG payment and the average spot price for electricity. The act also paved the way for a switch from specified feed-in tariffs to a system of tendering.
Purpose and aim
The purpose of the EEG (2014) is stated in the legislation:
The EEG (2014) also contains statutory targets for the share of renewable energy in gross final electricity consumption (the targets are additional to those set out in the government's 2010 Energy Concept statement):
Deployment corridors
The EEG (2014) specifies binding trajectories for the following individual technologies:
Details
The level of remuneration is still prescribed under the EEG until 2017. However, the way that new installations receive their remuneration has changed. Most plant operators must now directly market their output, for which they get an additional market premium payment instead of an explicit feed-in tariff. This premium is the difference between the average monthly wholesale price at the EEX energy exchange and the fixed remuneration stated in the EEG. Installations under 100 kW are exempt from these provisions and existing installations will continue to operate under the rules under which they were established. From 2017 onwards, defined remuneration rates will be replaced by competitive bidding, also known as auctions or tenders. Those investors offering the lowest prices will then receive support. The new act does not specify the auction model in detail, but potential designs were piloted in 2015 using ground-mounted photovoltaic systems.
The flexible cap mechanism for expansion corridors was replaced with set annual targets for the addition of wind, photovoltaic, and biogas capacity. The government hopes these new corridors will lead to a better coordination between renewables and the use and expansion of the transmission network, as well as improving planning security for conventional generators.
The target corridor for photovoltaics is set at 2.4 to 2.6 GWp per year and the hard cap of 52 GWp (introduced in 2013) remains in place. Photovoltaic installations beyond this upper bound will not receive funding under the EEG. The remuneration for photovoltaic installations is reduced by 0.50 percent every month, unless the installed capacity in the preceding months is below or above the installed capacity target. The degression rate can increase or decrease according to the deviation from the 2,500 MWp goal during the twelve months prior to the beginning of each quarter. The corresponding degression rate is then used during the three months of the quarter, in the following way (a code sketch of this schedule follows the list):
If installed capacity exceeds the target by more than 4,900 MWp, the feed-in tariff decreases by 2.80 percent;
by more than 3,900 MWp, by 2.50 percent;
by more than 2,900 MWp, by 2.20 percent;
by more than 1,900 MWp, by 1.80 percent;
by more than 900 MWp, by 1.40 percent;
by up to 900 MWp, by 1.00 percent.
If installed capacity falls between 2,400 and 2,600 MWp, the feed-in tariff decreases by 0.50 percent.
If installed capacity is below the target by less than 900 MWp, the feed-in tariff decreases by 0.25 percent;
by more than 900 MWp, the feed-in tariff remains the same;
by more than 1,400 MWp, the degression drops to zero and the feed-in tariff may rise on a one-off basis by 1.50 percent on the first calendar day of the respective quarter.
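As flagged above, the schedule can be read as a step function of the deviation from the corridor. The sketch below encodes it under the assumption that deviations are measured from the 2,400 and 2,600 MWp corridor bounds; the statute defines the exact reference points, rounding, and timing.

```python
# Step-function reading of the schedule above. The corridor-bound reference
# points are an assumption made for this sketch.

def monthly_degression_pct(installed_12m_mwp):
    over = installed_12m_mwp - 2_600   # excess above the corridor
    under = 2_400 - installed_12m_mwp  # shortfall below the corridor
    if over > 4_900:
        return 2.80
    if over > 3_900:
        return 2.50
    if over > 2_900:
        return 2.20
    if over > 1_900:
        return 1.80
    if over > 900:
        return 1.40
    if over > 0:
        return 1.00
    if under <= 0:
        return 0.50                    # inside the corridor
    if under < 900:
        return 0.25
    if under <= 1_400:
        return 0.00                    # feed-in tariff frozen
    return -1.50                       # one-off rise at the start of the quarter

print(monthly_degression_pct(2_500))   # 0.5 (inside the corridor)
print(monthly_degression_pct(6_000))   # 2.2 (3,400 MWp above the upper bound)
```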
Onshore wind retained its annual target of 2.4 to 2.6 GW. However, the target now excludes repowering, effectively extending the growth cap. The management premium and the bonus paid to wind farms providing stabilizing features (Systemdienstleistungen) are now being phased out. From 2016 onwards, the onshore wind tariff is reduced quarterly, depending on whether new capacity tracks the prescribed target. For offshore wind, the new act defines a target of 6.5 GW by 2020 and 15 GW by 2030. Offshore wind farms that enter service before 2020 can choose between a fixed payment for 8 years or a reduced payment for 12 years. After this period, the basic reward is reduced still further, depending on the distance from shore and the depth of the sea. The biomass target is set at 0.1 GW per year. Only biogas plants that use biowaste and liquid manure will receive more than the standard remuneration, depending on their capacity. Tariffs are to be reduced by 0.5% on a three-monthly basis for new installations.
On 16 April 2014 the European Commission found that EEG (2014) support for 20 offshore wind farms totalling almost 7 GW was not state aid. On 23 July 2014 the European Commission approved the EEG (2014), having assessed it to be in line with EU rules on state aid. Indeed, the EEG (2014) was the first revision of the Renewable Energy Sources Act to be "materially shaped by the Commission's view on state aid".
In July 2015 the Economics and Energy Ministry (BMWi) released a design document covering renewables auctions. In early 2016 the BMWi reported that the ground-mounted photovoltaics tender pilot, comprising three auctions in 2015, was successful. The BMWi also stated that competition was high and that prices fell from round to round. It added that small bidders were able to win tenders. These results will be used to develop auctions for other renewable electricity generation technologies.
The sixth and last round of PV auctions under this particular legislation produced 27 successful bids totaling . The average successful price was and the lowest awarded price was . These figures confirm a falling trend from auction to auction.
The 2014 legislation was overseen by a CDU/CSU/SPD grand coalition government.
Renewable Energy Sources Act (2017)
The government then began updating the EEG again, with the revision first dubbed the EEG (2016) and now the EEG (2017). The revised act is slated to take effect from 1 January 2017.
The following explains some of the process prior to the final legislation. On 8 December 2015 the government released its proposals for reform. On 8 June 2016 the Federal Cabinet (Bundeskabinett) cleared the draft EEG (2016) bill. That bill will now go to the Bundestag and Bundesrat for consideration.
The reform is being driven by three guiding principles, namely the need:
"to keep within agreed deployment corridors for the development of renewable energy"
"to keep to a minimum the overall cost arising from the Renewable Energy Sources Act"
"to use auctions to create a level playing field for all of the players involved"
The government believes that the new auction system will control costs. The new system also accords with the desire of the European Commission for renewables support to be market-based. With regard to wind energy, the new rules are intended to encourage installations in sites with strong winds and across Germany. To this end, a suite of complex calculations (Referenzertragsmodell) are being developed to ensure that bids are comparable and payments are fair.
The proposed EEG (2016) is a continuation of the EEG (2014). It replaces prescribed feed-in tariffs with an auction system for the majority of renewable technologies. It repeats the deployment corridors specified in the EEG (2014) to control the uptake of renewable electricity over the next decade and to ensure that future renewable energy targets are honored. This corridor will be maintained by auctioning only a defined capacity each year. Only those renewables projects that bid successfully will receive EEG support for the electricity they supply over the following 20 years. Each technology – photovoltaics, onshore wind, offshore wind, and biomass – will get an auction design tailored to its needs. Small renewables installations of under 750 kW capacity (or under 150 kW for biomass) will not be required to tender and will continue to receive conventional feed-in tariffs. Bidders from other European countries will be able to compete in the auctions for up to 5% of the annual capacity, under certain conditions. The new auction system should cover more than 80% of the new renewable electricity capacity.
As indicated above, the auction system was piloted in 2015 for ground-mounted photovoltaic facilities. As a result of this trial, the Economics and Energy Ministry (BMWi) abandoned 'uniform pricing' in favor of 'pay-as-bid'. The Federal Network Agency (Bundesnetzagentur) will call for tenders for renewable projects and set the capacity to correspond to the trajectory needed for a 40–45% share in 2025. Starting in 2017, there will be between three and four auctions per year for photovoltaics and onshore wind. Participants will submit single sealed bids and will have to provide a substantial security deposit to ensure good faith. Bids are tied to projects and locations and cannot normally be transferred. The lowest bids will win until the capacity under auction is met. A ceiling price is to be notified in advance. Successful projects will receive the funding rate with which they won for a period of 20 years. Special rules apply for citizen energy projects: small projects are exempt from the auction system altogether and larger projects will receive the highest offer accepted in their round rather than their own possibly lower bid.
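A minimal pay-as-bid clearing sketch, with invented bids, illustrating the design chosen after the pilot: cheapest bids are awarded first up to the auctioned volume, and each winner is paid its own bid rather than a uniform clearing price.

```python
# Minimal pay-as-bid clearing sketch. Bids are hypothetical
# (price in ct/kWh, size in MW).

def clear_pay_as_bid(bids, auctioned_mw, ceiling_ct):
    """Award cheapest bids first; each winner is paid its own bid price."""
    awarded, remaining = [], auctioned_mw
    for price, size in sorted(bids):
        if remaining <= 0 or price > ceiling_ct:
            break
        take = min(size, remaining)   # last winner may be partially awarded
        awarded.append((price, take))
        remaining -= take
    return awarded

bids = [(6.2, 50), (5.4, 80), (7.9, 40), (5.9, 60)]
print(clear_pay_as_bid(bids, auctioned_mw=150, ceiling_ct=7.0))
# [(5.4, 80), (5.9, 60), (6.2, 10)] -- unlike uniform pricing, the three
# winners are paid 5.4, 5.9 and 6.2 ct/kWh respectively.
```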
Onshore wind investors will also have to get prior approval for their projects under the Federal Immission Control Act (Bundes-Immissionsschutzgesetz or BImSchG), the federal law regulating the harmful effects of air pollution, noise, vibration and similar phenomena. Citizens cooperatives (Genossenschaft) participating in wind energy tenders have special dispensations. Wind energy auctions will be held more often in the beginning, with three in 2017 and four in 2018, in order to quickly establish a price level. The annual capacity for onshore wind farms will be set at 2.8 GW per year for 2017 to 2019 and at 2.9 GW thereafter. In order to better synchronise the development of the grid with renewables growth, the addition of onshore wind will be restricted in specified 'grid congestion zones' where high inputs of renewable electricity cannot be accepted because of network congestion. These areas are to be identified by the Federal Network Agency.

The new rules on funding offshore wind farms will apply to those projects that commence operation in 2021 or later. From 2025, the government will specify the sites for future wind farms and investors will then compete for the right to build at those locations. This centralised (Danish) model is designed to ensure competition and to make project approvals, site planning, and network connections more cost effective and better integrated. Between 2021 and 2024 a transitional auction model will be used and wind farms that have been planned and approved but not built will compete in two rounds of tenders for a restricted amount of capacity. Offshore wind will remain capped at 15 GW by 2030 and the capacity auctioned each year will be consistent with this target. In 2021, only wind farms in the Baltic Sea will be considered, due to a shortage of network connections at the North Sea.

Biomass projects will also participate in the new auction system. Biomass capacity is to be expanded by 150 MW annually in the next three years and by 200 MW annually for the following three years. Installations with a capacity greater than 150 kW will also be able to tender. Biomass facilities will only receive remuneration for half their runtime in order to incentivize their use during times of high electricity prices. Hydroelectricity, geothermal, and mine, landfill, and sewage gas are excluded from the auction system because of the prospect of insufficient competition.
On 20 December 2016, the European Commission found that the EEG amendments are in line with EU rules governing state aid, thereby allowing the planned introduction on 1 January 2017 to be honored.
This round of legislation is being overseen by a CDU/CSU/SPD grand coalition government.
Reactions
In January 2016, in response to the official proposals, Greenpeace Germany cautioned that a complete overhaul of the successful EEG would endanger climate protection targets. The German Wind Energy Association (BWE) and others are calling for a 2.5 GW net capacity addition for onshore wind energy per annum that is not dependent on the increase of offshore wind. They also say that the 40–45% renewables target by 2025 should not be treated as a fixed ceiling. The German Engineering Federation (VDMA) said that "the EEG amendment gives rise to growing uncertainty in the industry" and that "it is however not right to regulate the expansion of renewable energy production by controlling the tendering volume for onshore wind energy and inflexibly clinging on to a 45% target in the electricity sector".
Estimates for 2012 suggest that almost half the renewable energy capacity in Germany is owned by citizens through energy cooperatives (Genossenschaft) and private installations. Critics worry that the new rules will preclude citizen participation, despite the special provisions for cooperatives and individuals. Preparing tenders is expensive (costing perhaps €50,000–100,000) and that expenditure is sunk if the bid fails. In January 2016 Greenpeace Energy said that renewables auctions would make the Energiewende less fair and that citizen cooperatives and small investors would be at a disadvantage. Germanwatch, WWF-Germany, and Deutsche Umwelthilfe (DUH), three German NGOs, said the proposed reforms do not properly account for small, citizen-owned renewables projects. Citizen participation is seen as a key reason for the widespread public acceptance of renewable technologies in Germany. That support may lag if the EEG reforms favor large companies over cooperatives and individuals.
Political positions ahead of 2017 elections
In November 2016, the CDU revealed that it is considering scrapping the EEG, although it remains undecided as to whether it will make this an election issue for 2017.
2019 European Court of Justice state aid ruling
In March 2019, the European Court of Justice ruled that feed-in tariffs do not class as state aid, admissible or otherwise. This landmark decision annuls an earlier Commission decision that the German renewable energy law of 2012 involved state aid. More specifically, the ECJ found that the Commission had failed to establish that the advantages provided by feed-in tariffs involved state resources and therefore constituted state aid.
Feed-in tariffs
The structure and development of feed-in tariffs over the course of the EEG is a complex topic. This section is simply intended to give an indication. The feed-in tariffs for all technologies applicable are listed here. The following table summarizes onshore wind energy remunerations from April 2000 to October 2016.
The table below summarizes photovoltaics remunerations from August 2004 to January 2012. Under the EEG mandate, the Federal Network Agency (Bundesnetzagentur) publishes the currently installed PV capacity with adjusted feed-in tariffs monthly as a downloadable spreadsheet. Otherwise, for data beyond January 2012, please see: feed-in tariffs in Germany.
Politics
The development of the EEG has been the subject of political science analysis. A 2006 study finds that "the regulatory framework is formed in a 'battle over institutions' where the German parliament, informed and supported by an advocacy coalition of growing strength, backed support policies for renewables sourced electricity against often reluctant governments and the opposition from nuclear and coal interests".
A 2016 thesis finds that two broad coalitions of actors faced off over the development of the EEG legislation: an 'economic coalition' that opposed support for renewables and sought to protect nuclear power and fossil fuel interests, and an 'environmental coalition' that took the opposite stance. The economic coalition wanted unassisted market competition to prevail and preferred large-scale facilities. The environmental coalition comprised environmental organizations, the renewables industry, farmers, the trade unions (IG BCE and IG Metall), a German engineering association (VDMA), partly the German Confederation of Skilled Crafts (ZDH), and some industrial corporations with renewables interests. When the EEG was proposed in the late 1990s, the incumbent energy companies markedly underestimated the technological potential of renewables, believing them to be suitable only for niche roles. They were not alone; almost all politicians and scientists of the time did so too. The opposition to the EEG was therefore muted. Concurrent lobbying over the nuclear phase-out (Atomausstieg) also diverted industry attention away from the EEG negotiations. Notwithstanding, the success of the EEG can be traced to a small, dedicated group of parliamentarians who forged an alliance between various business groups, unions, environmental NGOs, and other idealistic interest groups. Yet despite expectations, renewable generation came to account for 27.4% of gross electricity consumption in 2014 and seriously threatened the business model of the incumbents. As history shows, the environmental coalition prevailed until at least 2014, underpinning the development of the EEG legislation, the nuclear phase-out, and the German Energiewende more generally.
Greenpeace Germany believes that the ongoing EU/US TTIP trade agreement negotiations have influenced the EEG (2014) onwards. It argues that earlier versions of the EEG could be interpreted as inhibiting free trade and that granting renewable energy preferential dispatch may still be illegal under the proposed treaty.
Effectiveness
Between 2015 and 2017, the fixed feed-in tariff scheme, introduced in 1991, is being phased out for around 80% of installations in favor of an auction system. This change is defined under the EEG (2014) and subsequent legislation.
Feed-in tariff scheme (pre-2015–2017)
Various studies have found that a fixed feed-in tariff scheme provides financial certainty and is more cost effective and less bureaucratic than other forms of support, including investment or production tax credits, quota-based renewable portfolio standards (RPS), and auction mechanisms. In 2008 the European Commission concluded that (although in 2014 it reversed its position to favor market-based instruments):
When the avoided external costs are compared to the compensation that renewable energy operators were paid for electricity from renewable energy, a 2003 study finds that the reduced environmental impacts and related economic benefits far outweigh the additional costs required to compensate the producers of electricity from renewable sources. Accounting for the external costs of fossil fuel use and thus "level[ing] the playing field" had been one of the key purposes when constructing the original EEG. A feed-in tariff scheme generates more competition, more jobs, and more rapid deployment for manufacturing and does not require the picking of technological winners, such as between wind power and photovoltaics. Denmark and Germany have been at the forefront of FIT scheme development.
A 2008 economics study by RWI Essen was hugely critical of the high levels of feed-in support afforded photovoltaics. The study argues that the 2005 European Union Emission Trading Scheme (EU ETS) was sufficient to drive the transition towards a low-carbon economy, that the EEG does nothing intrinsic to reduce greenhouse gas emissions, and that the electricity produced represents one of the most expensive greenhouse gas abatement options on offer.
Auction system (post-2015–2017)
In June 2016 economist Claudia Kemfert from DIW Berlin contended that the new auction system, introduced with the EEG (2014) and being refined under the proposed EEG (2016), will not reduce costs, but will rather undermine planning security and increase the risk premium applied by investors. In addition, the auction system will lead to deployment corridors being missed as companies that win tenders delay construction for whatever reason.
General
The positive impact on the environment globally is less clear. Hans-Werner Sinn, a German economist and chair of the Ifo Institut für Wirtschaftsforschung argues that Germany's renewable energy support reduces world market prices for fossil energy. Thus, countries like China or the US have an incentive to produce more, and the net effect on the climate is zero. This effect is known as the green paradox.
Outlook
Grid reinforcement
One challenge that lies ahead is integrating the electricity generated by decentralized renewable energy into the existing electricity grid structure. The grid was built to suit the centralized energy system of the then four main energy companies, namely, E.ON, EnBW, RWE, and Vattenfall.
The need for grid reinforcement from north to south is commonly recognized. In response, the four TSOs proposed 92 expansion projects covering 7,300 km of lines, but not all will be required or approved. In 2015 the Federal Network Agency (Bundesnetzagentur) released its report on grid expansion plans covering the next decade. Rapid development of the grid is being driven by the uptake of renewables and the phase-out of nuclear power.
But not all experts agree that a substantial build-out of the grid is necessary. Claudia Kemfert believes the large amount of coal-fired generation on the system is part of the problem. Kemfert said "our studies and models show that grid extension does no harm, but it's not strictly necessary; decentralised, intelligent grids with demand management and, in the medium term, storage, would be much more important." Analysis for Greenpeace Germany in 2016 also suggests that it is inflexible coal and nuclear plants that are clogging the grid and driving up wholesale electricity prices.
Deployment corridors
The EEG (2014) specifies technology-specific deployment corridors (see table) which will be tracked by the new auction system. Environmental NGOs and renewable energy advocates argue that these corridors are insufficient to meet Germany's climate protection goals. Greenpeace Germany observes "to reduce renewables to 45% in 2025 means expanding the fossil [fuel] share to 55%, with the aim of mitigating the impact on large utilities". Patrick Graichen from the Berlin energy policy institute Agora Energiewende agrees that the deployment corridors are set too low to reach renewables targets beyond 2025.
A 2016 report by Volker Quaschning of HTW Berlin concludes that Germany will need to accelerate its renewables uptake by a factor of four or five to reach the lower 2015 Paris Agreement global warming target of 1.5°C. Moreover, this target will require the energy sector to be carbon free by 2040. Given the likely electrification of the transport and heating sectors, the deployment corridors laid out in the EEG (2014) are wholly inadequate. Onshore wind generation should instead grow by 6.3 GW net per year (2.8 GW is specified) and photovoltaics by 15 GWp (2.5 GWp is specified).
Economic aspects
A 2011 paper from DIW Berlin modeled the deployment of various renewable energy technologies until 2030 and quantified the associated economic effects. The uptake of renewable energy simultaneously creates business opportunities and imposes social costs for promotion. The study reveals that the continued expansion of renewable energy in Germany should benefit both economic growth and employment in the mid-term.
The Berlin energy policy institute Agora Energiewende predicts that the EEG surcharge will peak around 2023 and then decline. The reasons are that expensive projects committed at the beginning of the EEG in 2000 will begin to expire after their 20 years of support, that new projects are now much cheaper, and that the trend of falling generation costs will continue.
Energy sector transformation
In November 2016, Agora Energiewende reported on the new legislation and several other related new laws. It concludes that this new legislation will bring "fundamental changes" for large sections of the energy industry, but have limited impact on the economy and on consumers.
See also
Electric power transmission
Electricity sector in Germany
Energiewende in Germany
Energy in Germany
Energy law
Energy policy
Feed-in tariffs in Germany
Financial incentives for photovoltaics
German Climate Action Plan 2050
Germany National Renewable Energy Action Plan
Green paradox
Power-to-X
Renewable energy in Germany
Renewable energy law
Solar power in Germany
Vehicle-to-grid
Wind power in Germany
World energy consumption
Notes
References
Further reading
Renewable Energy Sources Act (2000) text (in English)
Renewable Energy Sources Act (2014) text (in English)
2016 Revision amending the Renewable Energy Sources Act — Key points
External links
Clean Energy Wire (CLEW), a news service covering the energy transition in Germany
Energy Topics, hosted by the Federal Ministry for Economic Affairs and Energy (BMWi)
German Energy Blog, a legal blog covering the Energiewende
German Energy Transition, a comprehensive website maintained by the Heinrich Böll Foundation
REN21, the global renewable energy policy multi-stakeholder network
Energy economics
Energy law
Feed-in tariffs
Law of Germany
Renewable energy certification
Renewable energy economics
Renewable energy in Germany
Renewable energy law
Renewable energy policy
Resource economics | German Renewable Energy Sources Act | [
"Environmental_science"
] | 11,161 | [
"Energy economics",
"Environmental social science"
] |
33,352,238 | https://en.wikipedia.org/wiki/Plate%20rolling%20machine | A plate rolling machine is a machine that will roll different kinds of sheet metal into a round or conical shape.
It can also be called a “roll bending machine”, “plate bending machine” or “rolling machine”.
There are different kinds of technology to roll the metal plate:
Four-roller machines have a top roll, the pinching roll, and two side rolls.
The flat metal plate is placed in the machine on either side and "pre-bent" on the same side.
The side rolls do the work of bending. The pinching roll holds the plate.
Three-roller machines (variable pitch, also known as variable geometry) have one pressing top roll and two pressing side rolls.
The three-roll variable-pitch machine works by having all three rolls able to move and tilt. The top roll moves in the vertical plane and the side rolls move in the horizontal plane.
When rolling, the top roll presses the metal plate between the two side rolls. The advantage of the variable three-roll design is the ability to roll many thicknesses and diameters of cylinders.
For example:
The side rolls are what produce the mechanical advantage. With the side rolls all the way open, the machine has the maximum mechanical advantage; with the side rolls all the way in, it has the least.
So, a machine may be capable of rolling 2-inch-thick material at maximum mechanical advantage while a given job is only 1/2 inch thick. By reducing the mechanical advantage, one has a machine that can roll anywhere from 1/2 to 2 inches thick. The sketch below illustrates the idea.
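The mechanical-advantage point can be illustrated with a rough three-point-bending analogy, treating the side-roll span as the lever arm. This is a simplified textbook model with invented numbers, not a manufacturer's rating formula.

```python
# Rough three-point-bending analogy for the mechanical-advantage discussion
# above. All numbers are hypothetical.

def pinch_force_estimate(yield_psi, width_in, thickness_in, span_in):
    """Center load to reach yield in a simply supported plate strip.

    M = F*L/4 at the center and M_yield = yield * w * t^2 / 6, so
    F = (2/3) * yield * w * t^2 / L.
    """
    return (2 / 3) * yield_psi * width_in * thickness_in**2 / span_in

# Opening the side rolls (a larger span) cuts the force needed for thick plate:
print(pinch_force_estimate(36_000, 120, 2.0, 30))   # widest opening, 2 in plate
print(pinch_force_estimate(36_000, 120, 0.5, 10))   # rolls brought in, 1/2 in plate
```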
Plate rollers can be powered and controlled in multiple ways. Older plate mills are driven by electric motors, while newer ones are directed by programs loaded into a CNC controller. When considering a plate roll acquisition, industrial machinery companies such as Provetco Technology will ask about the working length of the roller, the maximum and minimum thickness of the material, and the top roll diameter. The material yield strength is another critical parameter to disclose to machinery companies when looking for a plate roller.
References
Provetco Technology: www.provetco.com
Industrial machinery | Plate rolling machine | [
"Engineering"
] | 431 | [
"Industrial machinery"
] |
33,352,815 | https://en.wikipedia.org/wiki/Interactions%20of%20actors%20theory | In information theory, interactions of actors theory is a theory developed by Gordon Pask and Gerard de Zeeuw. It is a generalisation of Pask's earlier conversation theory: the chief distinction being that conversation theory focuses on analysing the specific features that allow a conversation to emerge between two participants, whereas interactions of actors theory focuses on the broader domain of conversation in which conversations may appear, disappear, and reappear over time.
Overview
Interactions of actors theory was developed late in Pask's career. It is reminiscent of Freud's psychodynamics and Bateson's panpsychism (see Mind and Nature: A Necessary Unity, 1980). Pask's nexus of analogy, dependence and mechanical spin produces the differences that are central to cybernetics.
While working with clients in the last years of his life, Pask produced an axiomatic scheme for his interactions of actors theory, less well-known than his conversation theory. Interactions of Actors, Theory and Some Applications, as the manuscript is entitled, is essentially a concurrent spin calculus applied to the living environment with strict topological constraints. One of the most notable associates of Gordon Pask, Gerard de Zeeuw, was a key contributor to the development of interactions of actors theory.
Interactions of actors theory is a process theory. As a means to describe the interdisciplinary nature of his work, Pask would make analogies to physical theories in the classic positivist enterprises of the social sciences. Pask sought to apply the axiomatic properties of agreement or epistemological dependence to produce a "sharp-valued" social science with precision comparable to the results of the hard sciences. It was out of this inclination that he would develop his interactions of actors theory. Pask's concepts produce relations in all media and he regarded IA as a process theory. In his complementarity principle he stated "Processes produce products and all products (finite, bounded coherences) are produced by processes".
Most importantly Pask also had his exclusion principle. He proved that no two concepts or products could be the same because of their different histories. He called this the "No Doppelgangers" clause or edict. Later he reflected "Time is incommensurable for Actors". He saw these properties as necessary to produce differentiation and innovation or new coherences in physical nature and, indeed, minds.
In 1995, Pask stated what he called his Last Theorem: "Like concepts repel and unlike concepts attract". For ease of application, Pask stated that the differences and similarities of descriptions (the products of processes) were context and perspective dependent. In the last three years of his life Pask presented models based on knots from knot theory which described minimal persisting concepts. He interpreted these as acting as computing elements which exert repulsive forces in order to interact and persist in filling the space. The knots, links and braids of his entailment mesh models of concepts, which could include tangle-like processes seeking "tail-eating" closure, Pask called "tapestries".
His analysis proceeded with like-seeming concepts repelling or unfolding, but after a sufficient duration of interaction (he called this duration "faith") a pair of similar or like-seeming concepts will always produce a difference and thus an attraction. Amity (availability for interaction), respectability (observability), responsibility (able to respond to stimulus), and unity (not uniformity) were necessary properties to produce agreement (or dependence) and agreement-to-disagree (or relative independence) when Actors interact. Concepts could be applied imperatively or permissively when a Petri condition (see Petri net) for synchronous transfer of meaningful information occurred. Extending his physical analogy, Pask associated the interactions of thought generation with radiation: "operations generating thoughts and penetrating conceptual boundaries within participants, excite the concepts bounded as oscillators, which, in ridding themselves of this surplus excitation, produce radiation"
In sum, IA supports the earlier kinematic conversation theory work where minimally two concurrent concepts were required to produce a non-trivial third. One distinction separated the similarity and difference of any pair in the minimum triple. However, his formal methods denied the competence of mathematics or digital serial and parallel processes to produce applicable descriptions because of their innate pathologies in locating the infinitesimals of dynamic equilibria (Stafford Beer's "Point of Calm"). He dismissed the digital computer as a kind of kinematic "magic lantern". He saw mechanical models as the future for the concurrent kinetic computers required to describe natural processes. He believed that this implied the need to extend quantum computing to emulate true field concurrency rather than the current von Neumann architecture.
Reviewing IA he said:
Interaction of actors has no specific beginning or end. It goes on forever. Since it does so it has very peculiar properties. Whereas a conversation is mapped (due to a possibility of obtaining a vague kinematic, perhaps picture-frame, image of it) onto Newtonian time, precisely because it has a beginning and end, an interaction, in general, cannot be treated in this manner. Kinematics are inadequate to deal with life: we need kinetics. Even so, as in the minimal case of a strict conversation, we cannot construct the truth value, metaphor or analogy of A and B. The A, B differences are generalizations about a coalescence of concepts on the part of A and B; their commonality and coherence is the similarity. The difference (reiterated) is the differentiation of A and B (their agreements to disagree, their incoherences). Truth value in this case meaning the coherence between all of the interacting actors.
He added:
It is essential to postulate vectorial times (where components of the vectors are incommensurate) and furthermore times which interact with each other in the manner of Louis Kaufmann's knots and tangles.
In experimental epistemology, Pask, the "philosopher mechanic", produced a tool kit to analyse the basis for knowledge and to criticise the teaching and application of knowledge from all fields: the law, the social and system sciences, mathematics, physics and biology. In establishing the vacuity of invariance, Pask was challenged with the invariance of atomic number. "Ah", he said, "the atomic hypothesis". He rejected this, preferring instead the infinite nature of the productions of waves.
Pask held that concurrence is a necessary condition for modelling brain functions, and he remarked that IA was meant to stand AI, artificial intelligence, on its head. Pask believed it was the job of cybernetics to compare and contrast; his IA theory showed how to do this. Heinz von Foerster called him a genius, "Mr. Cybernetics", the "cybernetician's cybernetician".
Hewitt's actor model
The Hewitt, Bishop and Steiger approach concerns sequential processing and inter-process communication in digital, serial, kinematic computers. It is a parallel or pseudo-concurrent theory, as is the theory of concurrency (see Concurrency). In Pask's true field-concurrent theory, kinetic processes can interrupt (or, indeed, interact with) each other, simply reproducing or producing a new resultant force within a coherence (of concepts), but without buffering delays or priority.
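For contrast, the following is a minimal, illustrative Python sketch, not Hewitt's formal model, of the buffered, serialized message passing that characterizes the digital actor approach Pask is contrasting with his field concurrency; all names and behaviours are invented for illustration:

```python
import queue
import threading
import time

class Actor:
    """A minimal Hewitt-style actor: a mailbox drained serially by one thread.
    Messages are buffered and handled one at a time, which is exactly the
    buffered, serialized behaviour Pask contrasts with field concurrency."""

    def __init__(self, name, behaviour):
        self.name = name
        self.behaviour = behaviour
        self.mailbox = queue.Queue()          # the buffering delay lives here
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):                  # asynchronous, non-blocking send
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()      # strictly one message at a time
            self.behaviour(self, message)

def printer(actor, message):
    sender, text = message
    print(f"{actor.name} received {text!r} from {sender.name}")

a = Actor("A", printer)
b = Actor("B", lambda actor, msg: msg[0].send((actor, "pong")))
b.send((a, "ping"))                           # B replies to A's address
time.sleep(0.2)                               # let the daemon threads run
```

Each actor handles exactly one message at a time while unread messages wait in a queue, which is precisely the buffering and priority absent from Pask's kinetic picture.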
No Doppelgangers
"There are no Doppelgangers" is a fundamental theorem, edict or clause of cybernetics due to Pask in support of his theories of learning and interaction in all media: conversation theory and interactions of actors theory. It accounts for physical differentiation and is Pask's exclusion principle. It states no two products of concurrent interaction can be the same because of their different dynamic contexts and perspectives. No Doppelgangers is necessary to account for the production by interaction and intermodulation (c.f. beats) of different, evolving, persisting and coherent forms. Two proofs are presented both due to Pask.
Duration proof
Consider a pair of moving, dynamic participants A and B producing an interaction T. Their separation will vary during T. The duration of T observed from A will be different from the duration of T observed from B.
Let t_0 and t_1 be the start and finish times for the transfer of meaningful information; the duration t_1 − t_0 of T as measured from A is not the duration as measured from B, so we can write:
T_A ≠ T_B
Thus
A ≠ B
Q.E.D.
Pask remarked:
Conversation is defined as having a beginning and an end and time is vectorial. The components of the vector are commensurable (in duration). On the other hand actor interaction time is vectorial with components that are incommensurable. In the general case there is no well-defined beginning and interaction goes on indefinitely. As a result the time vector has incommensurable components. Both the quantity and quality differ.
No Doppelgangers applies in both the conversation theory's kinematic domain (bounded by beginnings and ends) where times are commensurable and in the eternal kinetic interactions of actors domain where times are incommensurable.
Reproduction proof
The second proof is more reminiscent of R. D. Laing: your concept of your concept is not my concept of your concept; a reproduced concept is not the same as the original concept. Pask defined concepts as persisting, countably infinite, recursively packed spin processes (like many-cored cable, or the skins of an onion) in any medium (stars, liquids, gases, solids, machines and, of course, brains) that produce relations.
Here we prove A(T) ≠ B(T).
D means "description of", and <Con A(T), D A(T)> reads "A's concept of T produces A's description of T", evoking Dirac notation (required for the production of the quanta of thought: the transfer of "set-theoretic tokens", as Pask puts it in 1996).
TA = A(T) = <Con A(T), D A(T)>, A's Concept of T,
TB = B(T) = <Con B(T), D B(T)>, B's Concept of T,
or, in general
TZ = Z(T) = <Con Z (T), D Z(T)>,
also, in general
AA = A(A) = <Con A(A), D A(A)>, A's Concept of A,
AB = A(B) = <Con A(B), D A(B)>, A's Concept of B.
and vice versa, or, in general terms
ZZ = Z(Z) = <Con Z(Z), D Z(Z)>,
given that for all Z and all T, the concepts
TA = A(T) is not equal to TB = B(T)
and that
AA = A(A) is not equal to BA = B(A) and vice versa, hence, there are no Doppelgangers.
Q.E.D.
A mechanical model
Pask attached a piece of string to a bar, with three knots in it. Then he attached a piece of elastic to the bar, also with three knots in it. One observing actor, A, on the string would see the knotted intervals on the other actor as varying as the elastic was stretched and relaxed, corresponding to the relative motion of B as seen from A. The knots correspond to the beginning of the experiment and then the start and finish of the A/B interaction. Referring to the three intervals, where x, y, z are the separation distances of the knots from the bar and from each other, he noted that x > y > z on the string for participant A does not imply x > y > z for participant B on the elastic. A change of separation between A and B producing Doppler shifts during interaction, recoil, or the differences in relativistic proper time for A and B would account for this, for example. On occasion a second knotted string was tied to the bar, representing coordinate time.
Further context
To set this in further context, Pask won a prize from Old Dominion University for his complementarity principle: "All processes produce products and all products are produced by processes". This can be written:
Ap(Con Z(T)) => D Z(T), where "=>" means produces, "Ap" means the "application of", "D" means "description of", and Z is the concept mesh or coherence of which T is part. This can also be written
<Ap(Con Z(T)), D Z(T)>.
Pask distinguishes Imperative (written &Ap or IM) from Permissive Application (written Ap) where information is transferred in the Petri net manner, the token appearing as a hole in a torus producing a Klein bottle containing recursively packed concepts.
Pask's "hard" or "repulsive" carapace was a condition he required for the persistence of concepts. He endorsed Nicholas Rescher's coherence theory of truth approach where a set membership criterion of similarity also permitted differences amongst set or coherence members, but he insisted repulsive force was exerted at set and members' coherence boundaries. He said of G. Spencer Brown's Laws of Form that distinctions must exert repulsive forces. This is not yet accepted by Spencer Brown and others. Without a repulsion, or Newtonian reaction at the boundary, sets, their members or interacting participants would diffuse away forming a "smudge"; Hilbertian marks on paper would not be preserved. Pask, the mechanical philosopher, wanted to apply these ideas to bring a new kind of rigour to cybernetic models.
Some followers of Pask emphasise his late work, done in the closing chapter of his life, which is neither as clear nor as grounded as the prior decades of research and machine- and theory-building. This tends to skew the impression gleaned by researchers as to Pask's contribution or even his lucidity.
References
Information theory | Interactions of actors theory | [
"Mathematics",
"Technology",
"Engineering"
] | 2,884 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
33,353,907 | https://en.wikipedia.org/wiki/Digital%20storage%20oscilloscope | A digital storage oscilloscope (DSO) is an oscilloscope which stores and analyses the input signal digitally rather than using analog techniques. It is now the most common type of oscilloscope in use because of the advanced trigger, storage, display and measurement features which it typically provides.
The input analog signal is sampled and then converted into a digital record of the amplitude of the signal at each sample time. The sampling frequency must be at least the Nyquist rate (twice the highest frequency component present in the signal) to avoid aliasing. These digital values are then turned back into an analog signal for display on a cathode ray tube (CRT), or transformed as needed for the various possible types of output: liquid crystal display, chart recorder, plotter or network interface.
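As a rough illustration of the sampling requirement (a sketch with made-up numbers, not drawn from any particular instrument), the following Python snippet shows how a tone above half the sampling rate folds down to a false, aliased frequency:

```python
import numpy as np

f_sig = 3_000.0                         # a 3 kHz input tone, Hz

def apparent_frequency(f_signal, f_sample):
    """Frequency the sampled record appears to contain, folded into
    the band [0, f_sample / 2] (the classic aliasing formula)."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

for fs in (48_000.0, 4_000.0):          # adequate vs. inadequate sample rates
    t = np.arange(0.0, 0.01, 1.0 / fs)  # 10 ms of sample instants
    record = np.sin(2 * np.pi * f_sig * t)   # the digitized amplitude record
    print(f"fs = {fs / 1e3:4.0f} kS/s, {len(record)} samples, tone appears at "
          f"{apparent_frequency(f_sig, fs) / 1e3:.1f} kHz")

# The Nyquist rate for a 3 kHz tone is 6 kHz: sampling at 4 kS/s folds the
# tone down to an alias at 1 kHz, while 48 kS/s reproduces it faithfully.
```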
Digital storage oscilloscope costs vary widely: small, pocket-size models, limited in function, may retail for as little as US$50, while bench-top self-contained instruments (complete with displays) span the range from modestly priced general-purpose units to high-performance models selling for tens of thousands of dollars.
Comparison with analog storage
The principal advantage over analog storage is that the stored traces are as bright, as sharply defined, and written as quickly as non-stored traces. Traces can be stored indefinitely or written out to some external data storage device and reloaded. This allows, for example, comparison of an acquired trace from a system under test with a standard trace acquired from a known-good system. Many models can display the waveform prior to the trigger signal.
Digital oscilloscopes usually analyze waveforms and provide numerical values as well as visual displays. These values typically include averages, maxima and minima, root mean square (RMS) and frequencies. They may be used to capture transient signals when operated in a single sweep mode, without the brightness and writing speed limitations of an analog storage oscilloscope.
The displayed trace can be manipulated after acquisition; a portion of the display can be magnified to make fine detail more visible, or a long trace can be examined in a single display to identify areas of interest. Many instruments allow a stored trace to be annotated by the user.
Most digital oscilloscopes use flat-panel displays similar to those made in high volumes for computers and television displays.
Digital storage oscilloscopes may include interfaces such as a parallel printer port, RS-232 serial port, IEEE-488 bus, USB port, or Ethernet, allowing remote or automatic control and transfer of captured waveforms to external display or storage.
PC based
A personal computer-based digital oscilloscope relies on a PC for user interface and display. The "front end" circuits, consisting of input amplifiers and analog to digital converters, are packaged separately and communicate with the PC over USB, Ethernet, or other interfaces. In one format, the "front end" is assembled on a plug-in expansion card that plugs into the computer backplane. PC based oscilloscopes may be less costly than an equivalent self-contained instrument as they can use the memory, display and keyboard of the attached PC. Displays may be larger, and acquired data can be easily transferred to PC hosted application software such as spread sheets. However, the interface to the host PC may limit the maximum data rate for acquisition, and the host PC may produce sufficient electromagnetic noise to interfere with measurements.
References
External links
Digital Storage Oscilloscope Measurement Basics
The Effective Number of Bits (ENOB)
The Impact of Digital Oscilloscope Blind Time on Your Measurements
Benefits of a Digital Trigger System
Electronic test equipment
Laboratory equipment
Signal processing | Digital storage oscilloscope | [
"Technology",
"Engineering"
] | 731 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Electronic test equipment",
"Measuring instruments"
] |
33,357,388 | https://en.wikipedia.org/wiki/Quantum%20non-equilibrium | Quantum non-equilibrium is a concept within stochastic formulations of the De Broglie–Bohm theory of quantum physics.
Overview
In quantum mechanics, the Born rule states that the probability density of finding a system in a given state, when measured, is proportional to the square of the amplitude of the system's wavefunction at that state, and it constitutes one of the fundamental axioms of the theory.
This is not the case for the De Broglie–Bohm theory, where the Born rule is not a basic law. Rather, in this theory the link between the probability density ρ and the wave function ψ has the status of a hypothesis, called the quantum equilibrium hypothesis, which is additional to the basic principles governing the wave function, the dynamics of the quantum particles and the Schrödinger equation. (For mathematical details, refer to the derivation by Peter R. Holland.)
Accordingly, quantum non-equilibrium describes a state of affairs where the Born rule is not fulfilled; that is, the probability ρ(x, t) d^3x to find the particle in the differential volume d^3x at time t is unequal to |ψ(x, t)|^2 d^3x.
Recent advances in investigations into properties of quantum non-equilibrium states have been performed mainly by theoretical physicist Antony Valentini, and earlier steps in this direction were undertaken by David Bohm, Jean-Pierre Vigier, Basil Hiley and Peter R. Holland. The existence of quantum non-equilibrium states has not been verified experimentally; quantum non-equilibrium is so far a theoretical construct. The relevance of quantum non-equilibrium states to physics lies in the fact that they can lead to different predictions for results of experiments, depending on whether the De Broglie–Bohm theory in its stochastic form or the Copenhagen interpretation is assumed to describe reality. (The Copenhagen interpretation, which stipulates the Born rule a priori, does not foresee the existence of quantum non-equilibrium states at all.) That is, properties of quantum non-equilibrium can make certain classes of Bohmian theories falsifiable according to the criterion of Karl Popper.
In practice, when performing Bohmian mechanics computations in quantum chemistry, the quantum equilibrium hypothesis is simply considered to be fulfilled, in order to predict system behaviour and the outcome of measurements.
Relaxation to equilibrium
The causal interpretation of quantum mechanics has been set up by de Broglie and Bohm as a causal, deterministic model, and it was extended later by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties.
Bohm and other physicists, including Valentini, view the Born rule linking the probability density ρ to the wave function ψ as representing not a basic law, but rather as constituting a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of ψ. However, it is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place.
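The preservation of equilibrium can be made explicit. In the standard pilot-wave notation (polar decomposition of the wave function; this is a textbook computation, not tied to any single reference), both densities obey the same continuity equation:

```latex
% Write \psi = |\psi| e^{iS/\hbar}; particles move along \dot{q} = \nabla S / m.
% The Schrodinger equation yields a continuity equation for |\psi|^2, and the
% guidance equation yields the same continuity equation for the ensemble
% density \rho, so the property \rho = |\psi|^2 is preserved (equivariance):
\begin{align*}
  \frac{\partial |\psi|^2}{\partial t}
    + \nabla \cdot \Big( |\psi|^2 \, \frac{\nabla S}{m} \Big) &= 0, \\
  \frac{\partial \rho}{\partial t}
    + \nabla \cdot \Big( \rho \, \frac{\nabla S}{m} \Big) &= 0.
\end{align*}
```

Since ρ and |ψ|^2 are transported by the same velocity field ∇S/m, equality at one time implies equality at all times; the open question is relaxation toward equality, addressed next.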
In 1991, Valentini provided indications for deriving the quantum equilibrium hypothesis, which states that ρ(q, t) = |ψ(q, t)|^2 in the framework of the pilot wave theory. (Here, q stands for the collective coordinates of the system in configuration space). Valentini showed that the relaxation ρ → |ψ|^2 may be accounted for by an H-theorem constructed in analogy to the Boltzmann H-theorem of statistical mechanics.
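A sketch of the coarse-grained quantity involved (the form below is the standard subquantum H-function of this literature; the cell-averaging notation is an assumption of the sketch):

```latex
% The subquantum H-function compares \rho with |\psi|^2 (a relative entropy);
% bars denote coarse-graining over small configuration-space cells:
\begin{equation*}
  \bar{H}(t) = \int dq \; \bar{\rho} \,
    \ln\!\big( \bar{\rho} / \overline{|\psi|^2} \big),
  \qquad \bar{H}(t) \le \bar{H}(0),
\end{equation*}
% with \bar{H} = 0 exactly when \bar{\rho} = \overline{|\psi|^2}, i.e. in
% coarse-grained quantum equilibrium.
```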
Valentini's derivation of the quantum equilibrium hypothesis was criticized by Detlef Dürr and co-workers in 1992, and the derivation of the quantum equilibrium hypothesis has remained a topic of active investigation.
Numerical simulations demonstrate a tendency for Born rule distributions to arise spontaneously at short time scales.
Predicted properties of quantum non-equilibrium
Valentini showed that his expansion of the De Broglie–Bohm theory would allow “signal nonlocality” for non-equilibrium cases in which ρ ≠ |ψ|^2, thereby violating the assumption that signals cannot travel faster than the speed of light.
Valentini furthermore showed that an ensemble of particles with known wave function and known nonequilibrium distribution could be used to perform, on another system, measurements that violate the uncertainty principle.
These predictions differ from predictions that would result from approaching the same physical situation by means of the standard axioms of quantum mechanics and therefore would in principle make the predictions of this theory accessible to experimental study. As it is unknown whether or how quantum non-equilibrium states can be produced, it is difficult or impossible to perform such experiments.
However, the hypothesis of a quantum non-equilibrium Big Bang also gives rise to quantitative predictions for nonequilibrium deviations from quantum theory which appear to be more easily accessible to observation.
Notes
References
Antony Valentini: Signal-locality, uncertainty, and the sub-quantum H-theorem, II, Physics Letters A, vol. 158, no. 1, 1991, p. 1–8
Antony Valentini: Signal-locality, uncertainty, and the sub-quantum H-theorem, I, Physics Letters A, vol. 156, no. 5, 1991
Craig Callender: The emergence and interpretation of probability in Bohmian mechanics (slightly longer and uncorrected version of the paper published in Studies in History and Philosophy of modern Physics 38 (2007), 351–370)
Detlef Dürr et al.: Quantum equilibrium and the origin of absolute uncertainty, arXiv:quant-ph/0308039v1 6 August 2003
Samuel Colin: Quantum non-equilibrium and relaxation to equilibrium for a class of de Broglie–Bohm-type theories, 2010 New Journal of Physics 12 043008 (abstract, fulltext)
Concepts in physics
Quantum mechanics | Quantum non-equilibrium | [
"Physics"
] | 1,162 | [
"Theoretical physics",
"Quantum mechanics",
"nan"
] |
33,357,837 | https://en.wikipedia.org/wiki/Milacemide | Milacemide (INN) is an MAO-B inhibitor and glycine prodrug. It has been studied for its effects on human memory and as a potential treatment for the symptoms of Alzheimer's disease. However, early clinical trials did not show positive results, and the drug has been abandoned; it is sold as a nonprescription drug or supplement. While milacemide is not an amino acid, it acts similarly to glycine in the brain.
References
Abandoned drugs
Amino acid derivatives
Monoamine oxidase inhibitors
Glycine receptor agonists
NMDA receptor agonists
Prodrugs
Pentyl compounds | Milacemide | [
"Chemistry"
] | 131 | [
"Chemicals in medicine",
"Drug safety",
"Prodrugs",
"Abandoned drugs"
] |
30,839,887 | https://en.wikipedia.org/wiki/Indium%20arsenide%20antimonide%20phosphide | Indium arsenide antimonide phosphide (InAsSbP) is a semiconductor material.
InAsSbP has been used as blocking layers for semiconductor laser structures, as well as for mid-infrared light-emitting diodes and lasers, photodetectors and thermophotovoltaic cells.
InAsSbP layers can be grown by heteroepitaxy on indium arsenide, gallium antimonide and other materials.
See also
Aluminium gallium indium phosphide
Gallium indium arsenide antimonide phosphide
References
III-V semiconductors
Indium compounds
Arsenides
Antimonides
Phosphides
III-V compounds | Indium arsenide antimonide phosphide | [
"Physics",
"Chemistry",
"Materials_science"
] | 145 | [
"Materials science stubs",
"Inorganic compounds",
"Semiconductor materials",
"Condensed matter physics",
"III-V semiconductors",
"Condensed matter stubs",
"III-V compounds"
] |
30,841,427 | https://en.wikipedia.org/wiki/Polarization%20rotator | A polarization rotator is an optical device that rotates the polarization axis of a linearly polarized light beam by an angle of choice. Such devices can be based on the Faraday effect, on birefringence, or on total internal reflection. Rotators of linearly polarized light have found widespread applications in modern optics since laser beams tend to be linearly polarized and it is often necessary to rotate the original polarization to its orthogonal alternative.
Faraday rotators
A Faraday rotator consists of an optical material in a magnetic field. When light propagates in the material, interaction with the magnetic field causes left- and right-handed circularly polarized waves to propagate with slightly different phase velocities. Since a linearly-polarized wave can be described as a superposition of left- and right-handed circularly polarized waves, the difference in phase velocity causes the polarization direction of a linearly-polarized wave to rotate as it propagates through the material. The direction of the rotation depends on whether the light is propagating with or against the direction of the magnetic field: a rotation induced by passing through the material is not undone by passing through it in the opposite direction. This can be used to make an optical isolator.
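In the lossless approximation a Faraday rotator acts on the Jones vector of the beam as a plane rotation by an angle θ = VBL, with V the Verdet constant, B the axial field and L the path length. The following Python sketch (illustrative numbers only, not a specific device) also demonstrates the non-reciprocity described above:

```python
import numpy as np

def faraday_rotation(jones_in, verdet, B, length):
    """Rotate a Jones vector by theta = V * B * L (lossless Faraday effect).
    The sign of theta follows the field direction, so a return pass through
    the same field doubles the rotation instead of undoing it."""
    theta = verdet * B * length
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ jones_in

x_pol = np.array([1.0, 0.0])   # horizontally polarized input
# Illustrative values only: verdet in rad/(T*m), B in tesla, length in metres,
# chosen so that one pass gives exactly 45 degrees of rotation.
L45 = np.pi / 4 / 40.0
once = faraday_rotation(x_pol, verdet=40.0, B=1.0, length=L45)
twice = faraday_rotation(once, verdet=40.0, B=1.0, length=L45)  # forward + return
print(np.round(once, 3))   # ~ [0.707, 0.707]: rotated 45 degrees
print(np.round(twice, 3))  # ~ [0.0, 1.0]: 90 degrees total (non-reciprocal)
```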
Birefringent rotators
Half-wave plates and quarter-wave plates alter the polarization of light due to the principle of birefringence. Their performance is wavelength-specific, a fact that may be a limitation. Switchable wave plates can also be manufactured out of liquid crystals, ferro-electric liquid crystals, or magneto-optic crystals. These devices can be used to rapidly change the angle of polarization in response to an electric signal, and can be used for rapid polarization state generation (PSG) or polarization state analysis (PSA) with high accuracy. In particular, the PSG and PSA made with magneto-optic (MO) switches have been successfully used to analyze polarization mode dispersion (PMD) and polarization dependent loss (PDL) with accuracies not obtainable with rotating-waveplate methods, thanks to the binary nature of the MO switches. Furthermore, MO switches have also been successfully adopted to generate differential group delay for PMD compensation and PMD emulation applications.
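In Jones-calculus terms (a standard idealization: lossless plate, overall phase ignored), a half-wave plate reflects the polarization about its fast axis, so a plate at angle θ sends a linear polarization at angle α to 2θ − α. A minimal sketch:

```python
import numpy as np

def half_wave_plate(theta):
    """Jones matrix (up to an overall phase) of a half-wave plate whose fast
    axis makes angle theta with the x axis: it reflects the polarization
    about the fast axis, sending polarization angle a to 2*theta - a."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

x_pol = np.array([1.0, 0.0])               # horizontal input (angle 0)
out = half_wave_plate(np.pi / 8) @ x_pol   # fast axis at 22.5 degrees
print(np.round(out, 3))                    # ~ [0.707, 0.707]: now at 45 degrees
```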
Prism rotators
Prism rotators use multiple internal reflections to produce beams with rotated polarization. Because they are based on total internal reflection, they are broadband—they work over a broad range of wavelengths.
Double Fresnel rhomb: A double Fresnel rhomb rotates the linear polarization axis by 90° using four internal reflections. A disadvantage may be a low ratio of useful optical aperture to length.
Broadband prismatic rotator: A broadband prismatic rotator rotates the linear polarization by 90° using seven internal reflections to induce collinear rotation. The polarization is rotated in the second reflection, but that leaves the beam in a different plane and at a right angle relative to the incident beam. The other reflections are necessary to yield a beam with its polarization rotated and collinear with the input beam. These rotators are reported to have transmission efficiencies better than 94%.
See also
Optical rotation
References
Optical devices
Polarization (waves) | Polarization rotator | [
"Physics",
"Materials_science",
"Engineering"
] | 683 | [
"Glass engineering and science",
"Polarization (waves)",
"Optical devices",
"Astrophysics"
] |
30,845,073 | https://en.wikipedia.org/wiki/Cholesterol%20total%20synthesis | Cholesterol total synthesis in chemistry describes the total synthesis of the complex biomolecule cholesterol and is considered a great scientific achievement. The research group of Robert Robinson with John Cornforth (Oxford University) published their synthesis in 1951, and that of Robert Burns Woodward with Franz Sondheimer (Harvard University) in 1952. The two groups had been racing for the first publication since 1950, Robinson having started in 1932 and Woodward in 1949. According to historian Greg Mulheirn, the Robinson effort was hampered by his micromanagement style of leadership, while the Woodward effort was greatly facilitated by his good relationships with the chemical industry. Around 1949 steroids like cortisone could be produced from natural resources, but only expensively. Chemical companies Merck & Co. and Monsanto saw commercial opportunities for steroid synthesis and not only funded Woodward but also provided him with large quantities of certain chemical intermediates from pilot plants. Hard work also helped the Woodward effort: one of the intermediate compounds was named Christmasterone, as it was synthesized on Christmas Day 1950 by Sondheimer.
Other cholesterol schemes have also been developed: racemic cholesterol was synthesized in 1966 by W.S. Johnson, the enantiomer of natural cholesterol was reported in 1996 by Rychnovsky and Mickus, in 2002 by Jiang & Covey and again in 2008 by Rychnovsky and Belani.
The molecule
Cholesterol is a tetracyclic alcohol and a type of sterol. Added to the sterol frame with the alcohol group at position 3 are 2 methyl groups at carbon positions 10 and 13 and a 2-isooctyl group at position 17. The molecule is unsaturated at position 5,6 with an alkene group. The total number of stereocenters is 8. The unnatural cholesterol molecule that has also been synthesized is called ent-cholesterol.
Robinson synthesis
The Robinson synthesis is an example of a so-called relay synthesis. As many of the chemical intermediates (all steroids) were already known and available from natural resources all that was needed for a formal synthesis was proof that these intermediates could be linked to each other via chemical synthesis. Starting point for the Robinson synthesis was 1,6-dihydroxynaphthalene 1 that was converted in about 20 steps into the then already known androsterone 4. Ruzicka had already demonstrated in 1938 that androsterone could be converted into androstenedione 5 and Robinson demonstrated its conversion to dehydroepiandrosterone 6 (note the epimerized hydroxyl group) also already a known compound. Conversion of 6 to pregnenolone 7 and then to allopregnanolone 8 allowed the addition of the tail group as the acetate in 9 and then conversion to cholestanol 10.
The conversion of cholestanol to cholesterol was already demonstrated by oxidation of the ketone, bromination to the bromoketone and elimination to the enone.
The conversion of cholestenone into cholesterol by the method of Dauben and Eastham (1950) consisted of reduction of the enol acetate (lithium aluminium hydride) and fractionation with digitonin for the isolation of the correct isomer.
Woodward synthesis
Starting point for the Woodward synthesis was the hydroquinone 1 that was converted to cis-bicycle 2 in a Diels-Alder reaction with butadiene. Conversion to the desired trans isomer 5 was accomplished by synthesis of the sodium enolate salt 4 (benzene, sodium hydride) followed by acidification. Reduction (lithium aluminium hydride) then gave diol 6, a dehydration (HCl/water) gave ketol 7, deoxygenation of its acetate by elemental zinc gave enone 8, formylation (ethyl formate) gave enol 9, Michael ethyl vinyl ketone addition (potassium t-butoxide/t-butanol) gave dione 11 which on reaction with KOH in dioxane gave tricycle 12 in an aldol condensation with elimination of the formyl group. In the next series of steps oxidation (osmium tetroxide) gave diol 13, protection (acetone/copper sulfate) gave acetonide 14, hydrogenation (palladium-strontium carbonate) gave 15, formylation (ethyl formate) gave enol 16 which protected as the enamine 17 (N-methylaniline/methanol) gave via the potassium anion 18, carboxylic acid 19 by reaction with cyanoethylene using triton B as the base.
Acid 19 was converted to lactone 20 (acetic anhydride, sodium acetate) and reaction with methylmagnesium chloride gave tetracyclic ketone 21. Treatment with periodic acid (dioxane) and piperidine acetate (benzene) gave aldehyde 24 through diol 22 (oxidation) and dialdehyde 23 (aldol condensation). Sodium dichromate oxidation gave carboxylic acid 25, diazomethane treatment gave methyl ester 26, and sodium borohydride reduction gave the allyl alcohol 27. Chiral resolution of this racemic compound with digitonin produced chiral 28 and on Oppenauer oxidation chiral 29. Hydrogenation (Adams' catalyst) gave alcohol 30, chromic acid oxidation gave ketone 31, sodium borohydride reduction stereoselectively gave alcohol 32, hydrolysis followed by acylation gave acetate 33, thionyl chloride treatment gave acyl chloride 34, and methyl cadmium gave the ketone 35.
In the final stages, reaction of 35 with isohexylmagnesium bromide 36 gave diol 37; acetic acid treatment effected dehydration, and hydrogenation then gave acetate 38. Hydrolysis of this ester gave cholestanol 39. The route from cholestanol to cholesterol was already known (see: Robinson synthesis).
External links
Woodward Cholesterol Synthesis @ SynArchive.com
References
Total synthesis | Cholesterol total synthesis | [
"Chemistry"
] | 1,272 | [
"Total synthesis",
"Chemical synthesis"
] |
30,851,817 | https://en.wikipedia.org/wiki/Alpha%20chain | The term alpha chain is normally used to indicate one of the subunits of a multi-subunit protein. The term "chain" is a general term for any peptide sequence; "alpha chain" can often refer more specifically to:
a part of the T-cell receptor,
the fibrinogen alpha chain,
the integrin alpha chain,
Hemoglobin, alpha 1
It should be distinguished from the term alpha helix, which refers to one of the common secondary structures found in proteins, along with beta sheet.
See also
Fibrinogen
Peptide
References
Protein structure | Alpha chain | [
"Chemistry"
] | 116 | [
"Molecular biology stubs",
"Protein structure",
"Structural biology",
"Molecular biology"
] |
26,127,170 | https://en.wikipedia.org/wiki/Monobloc%20engine | A monobloc or en bloc engine is an internal-combustion piston engine some of whose major components (such as cylinder head, cylinder block, or crankcase) are formed, usually by casting, as a single integral unit, rather than being assembled later. This has the advantages of improving mechanical stiffness, and improving the reliability of the sealing between them.
Monobloc techniques date back to the beginnings of the internal combustion engine. Use of this term has changed over time, usually to address the most pressing mechanical problem affecting the engines of its day. There have been three distinct uses of the technique:
Cylinder head and cylinder
Cylinder block
Cylinder block and crankcase
In most cases, any use of the term describes single-unit construction that is opposed to the more common contemporary practice. Where the monobloc technique has later become the norm, the specific term fell from favour. It is now usual practice to use monobloc cylinders and crankcases, but a monobloc head (for a water-cooled inline engine at least) would be regarded as peculiar and obsolescent.
Cylinder head
The head gasket is the most highly stressed static seal in an engine, and was a source of considerable trouble in early years. The monobloc cylinder head forms both cylinder and head in one unit, thus averting the need for a seal.
Along with head gasket failure, one of the least reliable parts of the early petrol engine was the exhaust valve, which tended to fail by overheating. A monobloc head could provide good water cooling, thus reduced valve wear, as it could extend the water jacket uninterrupted around both head and cylinder. Engines with gaskets required a metal-to-metal contact face here, disrupting water flow.
The drawback to the monobloc head is that access to the inside of the combustion chamber (the upper volume of the cylinder) is difficult. Access through the cylinder bore is restricted for machining the valve seats, or for inserting angled valves. An even more serious restriction is de-coking and re-grinding valve seats, a regular task on older engines. Rather than removing the cylinder head from above, the mechanic must remove pistons, connecting rods and the crankshaft from beneath.
One solution to this for side-valve engines was to place a screwed plug directly above each valve, and to access the valves through this. The tapered threads of the screwed plug provided a reliable seal. For low-powered engines this was a popular solution for some years, but it was difficult to cool the plug, as the water jacket did not extend into it. As performance increased, it also became important to have better combustion chamber designs with less "dead space". One solution was to place the spark plug in the centre of this plug, which at least made use of the space. This placed the spark plug further from the combustion chamber, leading to long flame paths and slower ignition.
During World War I, development of the internal combustion engine greatly progressed. After the war, as civilian car production resumed, the monobloc cylinder head was required less frequently. Only high-performance cars such as the Leyland Eight of 1920 persisted with it. Bentley and Bugatti were other racing marques who notably adhered to them, through the 1920s and into the 1930s, most famously being used in the purpose-built American Offenhauser straight-four racing engines, first designed and built in the 1930s.
Aircraft engines at this time were beginning to use high supercharging pressures, increasing the stress on their head gaskets. Engines such as the Rolls-Royce Buzzard used monobloc heads for reliability.
The last engines to make widespread use of monobloc cylinder heads were large air-cooled aircraft radial engines, such as the Wasp Major. These have individual cylinder barrels, so access is less restricted than on an inline engine with a monobloc crankcase and cylinders, as most modern engines are. As they have high specific power and require great reliability, the advantages of the monobloc remained attractive.
General aviation engines such as Franklin, Continental, and Lycoming are still manufactured new and continue to use monobloc individual cylinders, although Franklin uses a removable sleeve. A combination of materials are used in their construction, such as steel for the cylinder barrels and aluminum alloys for the cylinder heads to save weight. Common rebuilding techniques include chrome plating the inside of the cylinder barrels in a "cracked" finish that mimics the "cross-hatched" finish normally created by typical cylinder honing. Older engines operated on unleaded automotive gasoline as allowed by supplemental type certificates approved by the FAA may require more frequent machining replacement of valves and seats. Special tools are used to maintain valve seats in these cylinders. Non-destructive testing should be performed to look for flaws that may have arisen during extreme use, engine damage from sudden propeller stoppage or extended engine operation at every overhaul or rebuild.
Historically, the difficulties of machining and maintaining a monobloc cylinder head were, and continue to be, a severe drawback. As head gaskets became able to handle greater heat and pressure, the technique went out of use. It is almost unknown today, but has found a few niche uses: the technique of monobloc cylinder heads was adopted by the Japanese model engine manufacturer Saito Seisakusho for their glow-fueled and spark-ignition model four-stroke engines for RC aircraft propulsion.
Monobloc cylinders also continue to be used on small two-stroke engines for power equipment used to maintain lawns and gardens, such as string trimmers, tillers and leaf blowers.
Cylinder block
Casting technology at the dawn of the internal combustion engine could reliably cast either large castings, or castings with complex internal cores to allow for water jackets, but not both simultaneously. Most early engines, particularly those with more than four cylinders, had their cylinders cast as pairs or triplets of cylinders, then bolted to a single crankcase.
As casting techniques improved, the entire cylinder block of four, six or even eight cylinders could be cast as one. This was a simpler construction, thus less expensive to manufacture, and the communal water jacket permitted closer spacing between cylinders. This also improved the mechanical stiffness of the engine, against bending and the increasingly important torsional twist, as cylinder numbers and engine lengths increased. In the context of aircraft engines, the non-monobloc precursor to monobloc cylinders was a construction where the cylinders (or at least their liners) were cast as individuals, and the outer water jacket was applied later from copper or steel sheet. This complex construction was expensive, but lightweight, and so it was only widely used for aircraft.
V engines remained with a separate block casting for each bank. The complex ducting required for inlet manifolds between the banks were too complicated to cast otherwise. For economy, a few engines, such as the V12 Pierce-Arrow, were designed to use identical castings for each bank, left and right. Some rare engines, such as the Lancia 22½° narrow-angle V12 of 1919, did use a single block casting for both banks.
A monobloc engine was used in Cadillac's Series 60 of 1936. It was designed to be the company's next-generation powerplant at reduced cost compared with the 353 and the Cadillac V16. The monobloc's cylinders and crankcase were cast as a single unit, and it used hydraulic valve lifters for durability. This design allowed the creation of the mid-priced Series 60 line.
Modern cylinders, except for air-cooled engines and some V engines, are now universally cast as a single cylinder block, and modern heads are nearly always separate components.
Crankcase
As casting improved and cylinder blocks became a monobloc, it also became possible to cast both cylinders and crankcase as one unit. The main reason for this was to improve stiffness of the engine construction, reducing vibration and permitting higher speeds.
Most engines, except some V engines, are now a monobloc of crankcase and cylinder block.
Modern engines - Combined block, head and crankcase
Light-duty consumer-grade Honda GC-family small engines use a headless monobloc design where the cylinder head, block, and half the crankcase share the same casting, termed 'uniblock' by Honda. One reason for this, apart from cost, is to produce an overall lower engine height. Being an air-cooled OHC design, this is possible thanks to current aluminum casting techniques and lack of complex hollow spaces for liquid cooling. The valves are vertical, so as to permit assembly in this confined space. On the other hand, performing basic repairs becomes so time-consuming that the engine can be considered disposable. Commercial-duty Honda GX-family engines (and their many popular knock-offs) have a more conventional design of a single crankcase and cylinder casting, with a separate cylinder head.
Honda produces many other head-block-crankcase monoblocs under a variety of different names, such as the GXV-series. They may all be externally identified by a gasket which bisects the crankcase on an approximately 45° angle.
References
Engine technology
Piston engine configurations | Monobloc engine | [
"Technology"
] | 1,896 | [
"Engine technology",
"Engines"
] |
26,130,054 | https://en.wikipedia.org/wiki/Mladen%20Bestvina | Mladen Bestvina (born 1959) is a Croatian-American mathematician working in the area of geometric group theory. He is a Distinguished Professor in the Department of Mathematics at the University of Utah.
Life and career
Mladen Bestvina is a three-time medalist at the International Mathematical Olympiad (two silver medals in 1976 and 1978 and a bronze medal in 1977). He received a B.Sc. in 1982 from the University of Zagreb. He obtained a PhD in Mathematics in 1984 at the University of Tennessee under the direction of John Walsh. He was a visiting scholar at the Institute for Advanced Study in 1987–88 and again in 1990–91. Bestvina had been a faculty member at UCLA, and joined the faculty in the Department of Mathematics at the University of Utah in 1993. He was appointed a Distinguished Professor at the University of Utah in 2008.
Bestvina received the Alfred P. Sloan Fellowship in 1988–89 and a Presidential Young Investigator Award in 1988–91.
Bestvina gave an invited address at the International Congress of Mathematicians in Beijing in 2002, and gave a plenary lecture at the virtual ICM in 2022.
He also gave a Unni Namboodiri Lecture in Geometry and Topology at the University of Chicago.
Bestvina served as an Editorial Board member for the Transactions of the American Mathematical Society and as an associate editor of the Annals of Mathematics. Currently he is an editorial board member for Duke Mathematical Journal, Geometric and Functional Analysis, Geometry and Topology, the Journal of Topology and Analysis, Groups, Geometry and Dynamics, Michigan Mathematical Journal, Rocky Mountain Journal of Mathematics, and Glasnik Matematicki.
In 2012 he became a fellow of the American Mathematical Society. Since 2012, he has been a correspondent member of the HAZU (Croatian Academy of Science and Art).
Mathematical contributions
A 1988 monograph of Bestvina gave an abstract topological characterization of universal Menger compacta in all dimensions; previously only the cases of dimension 0 and 1 were well understood. John Walsh wrote in a review of Bestvina's monograph: 'This work, which formed the author's Ph.D. thesis at the University of Tennessee, represents a monumental step forward, having moved the status of the topological structure of higher-dimensional Menger compacta from one of "close to total ignorance" to one of "complete understanding".'
In a 1992 paper Bestvina and Feighn obtained a Combination Theorem for word-hyperbolic groups. The theorem provides a set of sufficient conditions for amalgamated free products and HNN extensions of word-hyperbolic groups to again be word-hyperbolic. The Bestvina–Feighn Combination Theorem became a standard tool in geometric group theory and has had many applications and generalizations (e.g.).
Bestvina and Feighn also gave the first published treatment of Rips' theory of stable group actions on R-trees (the Rips machine) In particular their paper gives a proof of the Morgan–Shalen conjecture that a finitely generated group G admits a free isometric action on an R-tree if and only if G is a free product of surface groups, free groups and free abelian groups.
A 1992 paper of Bestvina and Handel introduced the notion of a train track map for representing elements of Out(Fn). In the same paper they introduced the notion of a relative train track and applied train track methods to solve the Scott conjecture, which says that for every automorphism α of a finitely generated free group Fn the fixed subgroup of α is free of rank at most n. Since then train tracks became a standard tool in the study of algebraic, geometric and dynamical properties of automorphisms of free groups and of subgroups of Out(Fn). Examples of applications of train tracks include: a theorem of Brinkmann proving that for an automorphism α of Fn the mapping torus group of α is word-hyperbolic if and only if α has no periodic conjugacy classes; a theorem of Bridson and Groves that for every automorphism α of Fn the mapping torus group of α satisfies a quadratic isoperimetric inequality; a proof of algorithmic solvability of the conjugacy problem for free-by-cyclic groups; and others.
Bestvina, Feighn and Handel later proved that the group Out(Fn) satisfies the Tits alternative, settling a long-standing open problem.
In a 1997 paper Bestvina and Brady developed a version of discrete Morse theory for cubical complexes and applied it to study homological finiteness properties of subgroups of right-angled Artin groups. In particular, they constructed an example of a group which provides a counter-example to either the Whitehead asphericity conjecture or to the Eilenberg−Ganea conjecture, thus showing that at least one of these conjectures must be false. Brady subsequently used their Morse theory technique to construct the first example of a finitely presented subgroup of a word-hyperbolic group that is not itself word-hyperbolic.
Selected publications
Bestvina, Mladen, Characterizing k-dimensional universal Menger compacta. Memoirs of the American Mathematical Society, vol. 71 (1988), no. 380
Bestvina, Mladen; Feighn, Mark, Bounding the complexity of simplicial group actions on trees. Inventiones Mathematicae, vol. 103 (1991), no. 3, pp. 449–469
Bestvina, Mladen; Mess, Geoffrey, The boundary of negatively curved groups. Journal of the American Mathematical Society, vol. 4 (1991), no. 3, pp. 469–481
Mladen Bestvina, and Michael Handel, Train tracks and automorphisms of free groups. Annals of Mathematics (2), vol. 135 (1992), no. 1, pp. 1–51
M. Bestvina and M. Feighn, A combination theorem for negatively curved groups. Journal of Differential Geometry, Volume 35 (1992), pp. 85–101
M. Bestvina and M. Feighn. Stable actions of groups on real trees. Inventiones Mathematicae, vol. 121 (1995), no. 2, pp. 287–321
Bestvina, Mladen and Brady, Noel, Morse theory and finiteness properties of groups. Inventiones Mathematicae, vol. 129 (1997), no. 3, pp. 445–470
Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(Fn). I. Dynamics of exponentially-growing automorphisms. Annals of Mathematics (2), vol. 151 (2000), no. 2, pp. 517–623
Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(Fn). II. A Kolchin type theorem. Annals of Mathematics (2), vol. 161 (2005), no. 1, pp. 1–59
Bestvina, Mladen; Bux, Kai-Uwe; Margalit, Dan, The dimension of the Torelli group. Journal of the American Mathematical Society, vol. 23 (2010), no. 1, pp. 61–105
See also
Real tree
Artin group
Out(Fn)
Train track map
Pseudo-Anosov map
Word-hyperbolic group
Mapping class group
Whitehead conjecture
References
External links
Mladen Bestvina, personal webpage, Department of Mathematics, University of Utah
Living people
1959 births
Group theorists
Topologists
20th-century American mathematicians
21st-century American mathematicians
University of Utah faculty
Yugoslav emigrants to the United States
Croatian mathematicians
University of Tennessee alumni
Institute for Advanced Study visiting scholars
Faculty of Science, University of Zagreb alumni
Fellows of the American Mathematical Society
People from Osijek
International Mathematical Olympiad participants
Sloan Research Fellows | Mladen Bestvina | [
"Mathematics"
] | 1,598 | [
"Topologists",
"Topology"
] |
26,130,615 | https://en.wikipedia.org/wiki/Unibranch%20local%20ring | In algebraic geometry, a local ring A is said to be unibranch if the reduced ring Ared (obtained by quotienting A by its nilradical) is an integral domain, and the integral closure B of Ared is also a local ring. A unibranch local ring is said to be geometrically unibranch if the residue field of B is a purely inseparable extension of the residue field of Ared. A complex variety X is called topologically unibranch at a point x if for all complements Y of closed algebraic subsets of X there is a fundamental system of neighborhoods (in the classical topology) of x whose intersection with Y is connected.
In particular, a normal ring is unibranch. The notions of unibranch and geometrically unibranch points are used in some theorems in algebraic geometry. For example, there is the following result:
Theorem Let X and Y be two integral locally noetherian schemes and f : X → Y a proper dominant morphism. Denote their function fields by K(X) and K(Y), respectively. Suppose that the algebraic closure of K(Y) in K(X) has separable degree n and that the point y ∈ Y is unibranch. Then the fiber f^{−1}(y) has at most n connected components. In particular, if f is birational, then the fibers of unibranch points are connected.
In EGA, the theorem is obtained as a corollary of Zariski's main theorem.
References
Algebraic geometry
Commutative algebra | Unibranch local ring | [
"Mathematics"
] | 319 | [
"Fields of abstract algebra",
"Commutative algebra",
"Algebraic geometry"
] |
26,131,119 | https://en.wikipedia.org/wiki/Constructible%20sheaf | In mathematics, a constructible sheaf is a sheaf of abelian groups over some topological space X, such that X is the union of a finite number of locally closed subsets on each of which the sheaf is a locally constant sheaf. It has its origins in algebraic geometry, where in étale cohomology constructible sheaves are defined in a similar way . For the derived category of constructible sheaves, see a section in ℓ-adic sheaf.
The finiteness theorem in étale cohomology states that the higher direct images of a constructible sheaf are constructible.
Definition of étale constructible sheaves on a scheme X
Here we use the definition of constructible étale sheaves from the book by Freitag and Kiehl referenced below. In what follows in this subsection, all sheaves on schemes are étale sheaves unless otherwise noted.
A sheaf F on a scheme X is called constructible if X can be written as a finite union of locally closed subschemes Y such that, for each subscheme Y of the covering, the restricted sheaf F|_Y is a finite locally constant sheaf. In particular, this means that for each subscheme Y appearing in the finite covering, there is an étale covering of Y such that, for all étale subschemes U in the cover of Y, the restricted sheaf F|_U is constant and represented by a finite set.
This definition, together with Noetherian induction and the fact that an étale sheaf on a scheme X is constant if and only if its restriction from X to its reduction X_red is constant as well, allows one to deduce that a representable étale sheaf is itself constructible.
Of particular interest to the theory of constructible étale sheaves is the case in which one works with constructible étale sheaves of Abelian groups. The remarkable result is that constructible étale sheaves of Abelian groups are precisely the Noetherian objects in the category of all torsion étale sheaves (cf. Proposition I.4.8 of Freitag-Kiehl).
Examples in algebraic topology
Most examples of constructible sheaves come from intersection cohomology sheaves or from the derived pushforward of a local system on a family of topological spaces parameterized by a base space.
Derived pushforward on P1
One nice set of examples of constructible sheaves comes from the derived pushforward (with or without compact support) of a local system on the projective line punctured at 0 and ∞. Since any loop around ∞ is homotopic to a loop around 0, we only have to describe the monodromy around 0 and ∞, which may be prescribed as invertible linear operators on the stalk of the local system. Then, if we take the derived pushforward (with or without compact support) of the local system, we get a constructible sheaf whose stalks at the points 0, ∞ compute the cohomology of the local system restricted to a punctured neighborhood of them.
Weierstrass family of elliptic curves
For example, consider the family of degenerating elliptic curves
y^2 = x(x − 1)(x − t)
over the affine t-line. At t = 0 this family of curves degenerates into a nodal curve. If we denote the family by π, then the derived pushforward sheaves R^k π_*(Q) are constructible: away from the degenerate fiber they restrict to local systems whose stalks are the cohomology groups of a smooth elliptic curve. The local monodromy of this local system around t = 0 can be computed using the Picard–Lefschetz formula.
References
Seminar notes
References
Algebraic geometry
Sheaf theory | Constructible sheaf | [
"Mathematics"
] | 685 | [
"Mathematical structures",
"Fields of abstract algebra",
"Topology",
"Sheaf theory",
"Category theory",
"Algebraic geometry"
] |
44,817,867 | https://en.wikipedia.org/wiki/Kirkwood%E2%80%93Buff%20solution%20theory | The Kirkwood–Buff (KB) solution theory, due to John G. Kirkwood and Frank P. Buff, links macroscopic (bulk) properties to microscopic (molecular) details. Using statistical mechanics, the KB theory derives thermodynamic quantities from pair correlation functions between all molecules in a multi-component solution. The KB theory proves to be a valuable tool for validation of molecular simulations, as well as for the molecular-resolution elucidation of the mechanisms underlying various physical processes. For example, it has numerous applications in biologically relevant systems.
The reverse process is also possible; the so-called reverse Kirkwood–Buff (reverse-KB) theory, due to Arieh Ben-Naim, derives molecular details from thermodynamic (bulk) measurements. This advancement allows the use of the KB formalism to formulate predictions regarding microscopic properties on the basis of macroscopic information.
The radial distribution function
The radial distribution function (RDF), also termed the pair distribution function or the pair correlation function, is a measure of local structuring in a mixture. The RDF between components i and j positioned at r_1 and r_2, respectively, is defined as:
g_ij(r_1, r_2) = ρ_ij(r_1, r_2) / ρ_j
where ρ_ij(r_1, r_2) is the local density of component j relative to component i, the quantity ρ_j is the density of component j in the bulk, and r = r_2 − r_1 is the inter-particle radius vector. Necessarily, it also follows that:
g_ij(r) → 1 as |r| → ∞
Assuming spherical symmetry, the RDF reduces to a function of a single scalar variable:
g_ij(r), with r = |r_2 − r_1|
where r is the inter-particle distance.
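For concreteness, the following is a bare-bones Python sketch of how g(r) is estimated in practice from particle coordinates (a pair-distance histogram normalized by the ideal-gas expectation; the function name and parameters are illustrative, not from any particular package):

```python
import numpy as np

def radial_distribution(positions, box, n_bins=100, r_max=None):
    """Histogram estimate of g(r) for a single component in a cubic box with
    periodic boundaries. positions is an (N, 3) array, box the edge length.
    A bare-bones sketch; production codes bin per frame and average."""
    n = len(positions)
    if r_max is None:
        r_max = box / 2.0                      # minimum-image validity limit
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):                     # distinct pairs i < j
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)           # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = shells * (n / box**3) * (n - 1) / 2.0   # pairs if uncorrelated
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, counts / ideal

# Uncorrelated (ideal-gas) coordinates should give g(r) close to 1:
rng = np.random.default_rng(0)
r, g = radial_distribution(rng.uniform(0.0, 10.0, size=(500, 3)), box=10.0)
```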
In certain cases, it is useful to quantify the intermolecular correlations in terms of free energy. Specifically, the RDF is related to the potential of mean force (PMF) between the two components by:
w_ij(r) = −k_B T ln[g_ij(r)]
where the PMF, w_ij(r), is essentially a measure of the effective interactions between the two components in the solution.
The Kirkwood–Buff integrals
The Kirkwood–Buff integral (KBI) between components i and j is defined as the spatial integral over the pair correlation function:
G_ij = ∫_V [g_ij(r) − 1] dr
which in the case of spherical symmetry reduces to:
G_ij = 4π ∫_0^∞ [g_ij(r) − 1] r^2 dr
The KBI, having units of volume per molecule, quantifies the excess (or deficiency) of particle j around particle i.
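Given a tabulated RDF, the KBI is a straightforward quadrature. A minimal Python sketch, using a toy hard-core RDF whose integral is known in closed form (all numbers illustrative):

```python
import numpy as np

def kirkwood_buff_integral(r, g):
    """G_ij = 4*pi * Int_0^inf [g_ij(r) - 1] r^2 dr by trapezoidal quadrature
    on a tabulated RDF. Real simulation RDFs must be truncated or corrected
    at large r, where finite-size noise dominates the integrand."""
    return 4.0 * np.pi * np.trapz((g - 1.0) * r**2, r)

# Toy RDF: hard exclusion below sigma, ideal (g = 1) beyond, for which the
# integral is exactly -(4/3) * pi * sigma^3.
r = np.linspace(0.0, 3.0, 3001)        # nm
sigma = 0.3                            # nm
g = np.where(r < sigma, 0.0, 1.0)
print(kirkwood_buff_integral(r, g), -4.0 / 3.0 * np.pi * sigma**3)
```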
Derivation of thermodynamic quantities
Two-component system
It is possible to derive various thermodynamic relations for a two-component mixture in terms of the relevant KBIs (G_11, G_22, and G_12).
The partial molar volume of component 1 is:
V_1 = [1 + c_2 (G_22 − G_12)] / [c_1 + c_2 + c_1 c_2 (G_11 + G_22 − 2 G_12)]
where c_i is the molar concentration of component i, and naturally
c_1 V_1 + c_2 V_2 = 1
The compressibility, κ_T, satisfies:
k_B T κ_T = [1 + c_1 G_11 + c_2 G_22 + c_1 c_2 (G_11 G_22 − G_12^2)] / [c_1 + c_2 + c_1 c_2 (G_11 + G_22 − 2 G_12)]
where k_B is the Boltzmann constant and T is the temperature.
The derivative of the osmotic pressure, Π, with respect to the concentration of component 2:
(∂Π/∂c_2)_{T, μ_1} = k_B T / (1 + c_2 G_22)
where μ_1 is the chemical potential of component 1.
The derivatives of chemical potentials with respect to concentrations, at constant temperature (T) and pressure (P), are:
(∂μ_1/∂c_1)_{T,P} = k_B T / [c_1 (1 + c_1 (G_11 − G_12))]
(∂μ_2/∂c_2)_{T,P} = k_B T / [c_2 (1 + c_2 (G_22 − G_12))]
or alternatively, with respect to mole fraction:
(∂μ_2/∂x_2)_{T,P} = k_B T / [x_2 (1 + ρ x_1 x_2 (G_11 + G_22 − 2 G_12))]
where ρ = c_1 + c_2 is the total molar density.
The preferential interaction coefficient
The relative preference of a molecular species to solvate (interact) with another molecular species is quantified using the preferential interaction coefficient, ν. Let us consider a solution that consists of the solvent (water), solute, and cosolute. The relative (effective) interaction of water with the solute is related to the preferential hydration coefficient, ν_W, which is positive if the solute is "preferentially hydrated". In the Kirkwood–Buff theory framework, and in the low concentration regime of cosolutes, the preferential hydration coefficient is:
ν_W = M_W (G_SW − G_SC)
where M_W is the molarity of water, and W, S, and C correspond to water, solute, and cosolute, respectively.
In the most general case, the preferential hydration is a function of the KBIs of the solute with both the solvent and the cosolute. However, under very simple assumptions and in many practical examples, it reduces to:
ν_W = −M_W G_SC
So the only function of relevance is G_SC.
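A one-line consequence, sketched in Python using the low-cosolute-concentration expression quoted above (numbers are illustrative, not data):

```python
def preferential_hydration(m_w, g_sw, g_sc):
    """nu_W = M_W * (G_SW - G_SC): the low-cosolute-concentration expression
    quoted above. Positive values mean the solute is preferentially hydrated.
    Consistent units assumed, e.g. M_W in mol/L and the KBIs in L/mol."""
    return m_w * (g_sw - g_sc)

# Illustrative numbers: water is ~55.5 mol/L; a cosolute excluded from the
# solute's surface has G_SC < 0, which drives nu_W positive.
print(preferential_hydration(m_w=55.5, g_sw=0.0, g_sc=-0.10))  # 5.55 > 0
```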
References
External links
Thermodynamic equations
Statistical mechanics | Kirkwood–Buff solution theory | [
"Physics",
"Chemistry"
] | 789 | [
"Thermodynamic equations",
"Statistical mechanics",
"Equations of physics",
"Thermodynamics"
] |
44,818,019 | https://en.wikipedia.org/wiki/Green%20engineering | Green engineering approaches the design of products and processes by applying financially and technologically feasible principles to achieve one or more of the following goals: (1) decrease in the amount of pollution that is generated by a construction or operation of a facility, (2) minimization of human population exposure to potential hazards (including reducing toxicity), (3) improved uses of matter and energy throughout the life cycle of the product and processes, and (4) maintaining economic efficiency and viability. Green engineering can be an overarching framework for all design disciplines.
History
The concept of green engineering originated between 1966 and 1970 within the Organization for Economic Cooperation and Development under the name "The Ten Ecological Commandments for Earth Citizens". The idea was expressed visually as the following cycle, starting with the first commandment and ending with the tenth:
Respect the laws of nature
Learn as responsible earth citizens from the wisdom of nature
Do not reduce the plurality, richness, and abundance of living species
Do not pollute
Face earth-responsibility every day for our children and our children's children
Follow the principle of nature precaution/sustainability in all economic activities!
Act as you speak!
Prefer small clever and intelligent problem solutions, including rational and emotional intelligence factors
Information about environmental damage belongs to mankind - not (only) to privileged big business
Listen carefully [to] what your own body tells you about [the] impact of your very personal social and natural environment upon your wellbeing
The idea was then presented by Peter Menke-Glückert at a United Nations Educational, Scientific and Cultural Organization conference in Paris in 1968. These principles are similar to the Principles of Green Engineering in that each individual has an intrinsic responsibility to uphold these values. The Ten Ecological Commandments for Earth Citizens is thought by Dr. Płotka-Wasylka to have influenced the Principles of Green Engineering, which has been said to imply that all engineers have a duty to uphold sustainable values and practices when creating new processes.
Green engineering is a part of a larger push for sustainable practices in the creation of products such as chemical compounds. This movement is more widely known as green chemistry, and has been headed since 1991 by Paul Anastas and John C. Warner. Green chemistry, being older than green engineering, is a more researched field of study and began in 1991 with the creation of the 12 Principles of Green Chemistry.
12 Principles of Green Engineering
On May 19, 2003, Paul Anastas, along with his future wife Julie Zimmerman, created the 12 Principles of Green Engineering. These expanded upon the 12 Principles of Green Chemistry to include not only guidelines for what an environmentally conscious chemical should be in theory, but also the steps that should be followed to create an environmentally conscious alternative to the chemical. Environmentally conscious thought can be applied in engineering disciplines such as civil and mechanical engineering when considering practices with negative environmental impacts, such as concrete hydration. The principles were still centered on chemical processes, with about half pertaining to engineers. There are many ways that the 12 Principles of Green Chemistry and the 12 Principles of Green Engineering interact, referred to by Tse-Lun Chen et al. as "cross connections". Each Principle of Green Engineering has one or more corresponding "cross connections" to the Principles of Green Chemistry. For example, principle 1 of green engineering is "Inherent Rather than Circumstantial", which has cross connections to principles 1, 3, and 8 of green chemistry.
9 Principles of Green Engineering
On May 19, 2003, during a conference at the Sandestin Resort in Florida, a group of about 65 chemists, engineers, and government officials met to create a narrowed-down set of green principles relating to engineers and engineering. After four days of debate and proposals, the Sandestin Declaration was created. This declaration established the 9 Principles of Green Engineering, which narrowed the focus to processes engineers can abide by, emphasizing the design of processes and products with the future in mind. The resulting 9 Principles were later supported and recognized by the U.S. Environmental Protection Agency, the National Science Foundation, the Department of Energy (Los Alamos National Laboratory), and the ACS Green Chemistry Institute®.
Sustainable engineering
"Sustainable engineering" and "green engineering" are terms that are often used interchangeably. The main difference between the two being that green engineering is "optimized to minimize negative impacts without exhausting resources available in the natural environment" and sustainable engineering is "more directed toward building a better future for the next generations". The idea of sustainable development became intertwined with engineering and chemistry early in the 21st century. One often cited book that brought the idea of sustainable development to engineers was the publishing of: "Sustainable Infrastructure: Principles into Practice", written by Charles Ainger and Richard Fenner.
Principles
Green engineering follows nine guiding principles:
Engineer processes and products holistically, use systems analysis and integrate environmental impact assessment tools.
Conserve and improve natural ecosystems while protecting human health and well-being.
Use life-cycle thinking in all engineering activities.
Ensure that all material and energy inputs and outputs are as inherently safe and benign as possible.
Minimize the depletion of natural resources.
Prevent waste.
Develop and apply engineering solutions while being cognizant of local geography, aspirations, and cultures.
Create engineering solutions beyond current or dominant technologies; improve, innovate, and invent (technologies) to achieve sustainability.
Actively engage communities and stakeholders in development of engineering solutions.
In 2003, the American Chemical Society introduced a new list of twelve principles:
Inherent Rather Than Circumstantial – Designers need to strive to ensure that all materials and energy inputs and outputs are as inherently nonhazardous as possible.
Prevention Instead of Treatment – It is better to prevent waste than to treat or clean up waste after it is formed.
Design for Separation – Separation and purification operations should be designed to minimize energy consumption and materials use.
Maximize Efficiency – Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
Output-Pulled Versus Input-Pushed – Products, processes, and systems should be "output pulled" rather than "input pushed" through the use of energy and materials.
Conserve Complexity – Embedded entropy and complexity must be viewed as an investment when making design choices on recycling, reuse, or beneficial disposition.
Durability Rather Than Immortality – Targeted durability, not immortality, should be a design goal.
Meet Need, Minimize Excess – Design for unnecessary capacity or capability (e.g., "one size fits all") solutions should be considered a design flaw.
Minimize Material Diversity – Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
Integrate Material and Energy Flows – Design of products, processes, and systems must include integration and interconnectivity with available energy and materials flows.
Design for Commercial "Afterlife" – Products, processes, and systems should be designed for performance in a commercial "afterlife."
Renewable Rather Than Depleting – Material and energy inputs should be renewable rather than depleting.
Systems approach
Many engineering disciplines engage in green engineering. Its approaches include sustainable design, life cycle analysis (LCA), pollution prevention, design for the environment (DfE), design for disassembly (DfD), and design for recycling (DfR). As such, green engineering is a subset of sustainable engineering.
Green engineering involves four basic approaches to improve processes and products to make them more efficient from an environmental standpoint.
Waste reduction;
Materials management;
Pollution prevention; and,
Product enhancement.
Green engineering approaches design from a systematic perspective which integrates numerous professional disciplines. In addition to all engineering disciplines, green engineering includes land use planning, architecture, landscape architecture, and other design fields, as well as the social sciences (e.g., to determine how various groups of people use products and services). Green engineers are concerned with space, the sense of place, viewing the site map as a set of fluxes across the boundary, and considering the combinations of these systems over larger regions, e.g. urban areas.
Life cycle analysis is an important green engineering tool, which provides a holistic view of the entirety of a product, process or activity, encompassing raw materials, manufacturing, transportation, distribution, use, maintenance, recycling, and final disposal. Assessing its life cycle should yield a complete picture of the product. The first step in a life cycle assessment is to gather data on the flow of a material through an identifiable society. Once the quantities of various components of such a flow are known, the important functions and impacts of each step in the production, manufacture, use, and recovery/disposal are estimated. In sustainable design, engineers must optimize for variables that give the best performance over the relevant time frames.
The systems approach employed in green engineering is similar to value engineering (VE). Daniel A. Vallero has described green engineering as a form of VE, because both require that all elements and linkages within the overall project be considered to enhance the value of the project. Every component and step of the system must be challenged. Overall value is determined not only by a project's cost-effectiveness, but also by other values, including environmental and public health factors. Thus, the broader sense of VE is compatible with and can be identical to green engineering, since VE is aimed at effectiveness, not just efficiency, i.e. a project is designed to achieve multiple objectives without sacrificing any important values. Efficiency is an engineering and thermodynamic term for the ratio of useful output to input of energy and mass within a system. As the ratio approaches 100%, the system becomes more efficient. Effectiveness requires that efficiencies be met for each component, but also that the integration of components lead to an effective, multiple value-based design.
Green engineering is also a type of concurrent engineering, since tasks must be parallelized to achieve multiple design objectives.
Implementation
Ionic liquids
An ionic liquid can be described simply as a salt in a liquid state, exhibiting tribological properties which allow it to be used as a lubricant. Traditional solvents are composed of oils or synthetic compounds, like fluorocarbons, which, when airborne, can act as greenhouse gases. Ionic liquids are nonvolatile and have high thermal stability and, as Lei states, "present a 'greener' alternative to standard solvents". Ionic liquids can also be used for carbon dioxide capture or as a component in bioethanol production in the gasification process.
Ceramic tiles
Ceramic tile production is typically an energy- and water-intensive process. Ceramic tile milling is similar to cement milling for concrete, in that there are both dry and wet milling processes. Wet milling typically produces a higher-quality tile at a higher cost in energy and water, while dry milling produces a lower-quality material at a lower cost.
See also
Civil engineering
Ecotechnology
Environmental engineering science
Environmental engineering
Environmental technology
Exposure assessment
Green building
Greening
Hazard (risk)
Life cycle assessment
Process engineering
Risk assessment
Sustainable engineering
Systems engineering
References
External links
U.S. EPA (2014). "Green Engineering". http://www.epa.gov/oppt/greenengineering/pubs/basic_info.html
Vanegas, Jorge (2004). "Sustainable Engineering Practice – An introduction". ASCE publishing.
"XI World Forestry Congress" (Volume 3, topic 2), Antalya, Turkey (1997), retrieved from http://www.fao.org/forestry/docrep/wfcxi/publi/v3/T12E/2-3.HTM
http://www.sustainableengineeringdesign.com
https://engineering.purdue.edu/EEE/Research/Areas/sustainable.html
https://archive.today/20030526060813/http://www7.caret.cam.ac.uk/sustainability.htm
https://web.archive.org/web/20130926012810/http://www.aaas.org/programs/international/caip/events/fall97/sanio.html
Environmental engineering
Sustainable technologies | Green engineering | [
"Chemistry",
"Engineering"
] | 2,502 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
41,609,211 | https://en.wikipedia.org/wiki/Chromium%28II%29%20bromide | Chromium(II) bromide is the inorganic compound with the chemical formula CrBr2. Like many metal dihalides, CrBr2 adopts the "cadmium iodide structure" motif, i.e., it features sheets of octahedral Cr(II) centers interconnected by bridging bromide ligands. It is a white solid that dissolves in water to give blue solutions that are readily oxidized by air.
Synthesis and reactions
It can be prepared by reduction of chromium(III) bromide with hydrogen gas for 6–10 hours at 350–400 °C, cogenerating hydrogen bromide:
2CrBr3 + H2 → 2CrBr2 + 2HBr
Treatment of chromium powder with concentrated hydrobromic acid gives a blue hydrated chromium(II) bromide, which can be converted to a related acetonitrile complex.
Cr + nH2O + 2HBr → CrBr2(H2O)n + H2
References
Chromium(II) compounds
Bromides
Metal halides | Chromium(II) bromide | [
"Chemistry"
] | 229 | [
"Inorganic compounds",
"Inorganic compound stubs",
"Salts",
"Bromides",
"Metal halides"
] |
41,610,115 | https://en.wikipedia.org/wiki/Ayrton%20shunt | The Ayrton shunt or universal shunt is a high-resistance shunt used in galvanometers to increase their range without changing the damping.
The circuit is named after its inventor William E. Ayrton. Multirange ammeters that use this technique are more accurate than those using a make-before-break switch. It also eliminates the possibility of the meter being in the circuit without any shunt, a hazard that make-before-break switches only mitigate.
The selector switch changes the amount of resistance in parallel with Rm (the meter resistance). The voltage drop across parallel branches is always equal. When all of the shunt resistance is placed in parallel with Rm, the maximum sensitivity of the ammeter is reached.
The Ayrton shunt is rarely used for currents above 10 amperes.
The multiplying power of each range is the ratio of the full-scale range current to the meter current: m1 = I1/Im, m2 = I2/Im, m3 = I3/Im.
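As an illustration of the design arithmetic (a sketch based on the standard analysis, not taken from this article; the function and component values are hypothetical): requiring full-scale meter deflection on each range gives a tapped shunt portion S_k = (Rm + Ra)/m_k, where Ra is the total shunt resistance permanently connected across the meter.

```python
def ayrton_taps(R_m, R_a, multipliers):
    """Shunt portion S_k = (R_m + R_a) / m_k that carries the bypass
    current on each range, for a meter of resistance R_m and an
    Ayrton shunt of total resistance R_a."""
    return [(R_m + R_a) / m for m in multipliers]

# Hypothetical 50 uA, 1 kOhm movement with 1 mA / 10 mA / 100 mA ranges
R_m, I_m = 1000.0, 50e-6
multipliers = [i / I_m for i in (1e-3, 1e-2, 1e-1)]  # 20, 200, 2000
R_a = R_m / (multipliers[0] - 1)  # lowest range uses the whole shunt
taps = ayrton_taps(R_m, R_a, multipliers)
# Individual series sections of the shunt, from the lowest tap upward
sections = [hi - lo for hi, lo in zip(taps, taps[1:])] + [taps[-1]]
print(R_a, taps, sections)
```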
References
Sources
Electrical circuits
Electrical meters | Ayrton shunt | [
"Technology",
"Engineering"
] | 186 | [
"Measuring instruments",
"Electronic engineering",
"Electrical engineering",
"Electrical meters",
"Electrical circuits"
] |
41,611,688 | https://en.wikipedia.org/wiki/Haematopoietic%20system | The haematopoietic system (spelled hematopoietic system in American English) is the system in the body involved in the creation of the cells of blood.
Structure
Stem cells
Haematopoietic stem cells (HSCs) reside in the medulla of the bone (bone marrow) and have the unique ability to give rise to all of the different mature blood cell types and tissues. HSCs are self-renewing cells: when they differentiate, at least some of their daughter cells remain as HSCs, so the pool of stem cells is not depleted. This phenomenon is called asymmetric division. The other daughters of HSCs (myeloid and lymphoid progenitor cells) can follow any of the other differentiation pathways that lead to the production of one or more specific types of blood cell, but cannot renew themselves. The pool of progenitors is heterogeneous and can be divided into two groups: long-term self-renewing HSCs and only transiently self-renewing HSCs, also called short-term HSCs. This is one of the main vital processes in the body.
Development
In developing embryos, blood formation occurs in aggregates of blood cells in the yolk sac, called blood islands. As development progresses, blood formation occurs in the spleen, liver and lymph nodes. When bone marrow develops, it eventually assumes the task of forming most of the blood cells for the entire organism. However, maturation, activation, and some proliferation of lymphoid cells occurs in the spleen, thymus, and lymph nodes. In children, haematopoiesis occurs in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum.
Function
Haematopoiesis (from Greek αἷμα, "blood" and ποιεῖν "to make"; also hematopoiesis in American English; sometimes also haemopoiesis or hemopoiesis) is the formation of blood cellular components. All cellular blood components are derived from haematopoietic stem cells. In a healthy adult person, approximately 10^11–10^12 new blood cells are produced daily in order to maintain steady state levels in the peripheral circulation.
All blood cells are divided into three lineages.
Red blood cells, also called erythrocytes, are the oxygen-carrying cells. Erythrocytes are functional and are released into the blood. The number of reticulocytes, immature red blood cells, gives an estimate of the rate of erythropoiesis.
Lymphocytes are the cornerstone of the adaptive immune system. They are derived from common lymphoid progenitors. The lymphoid lineage is composed of T-cells, B-cells and natural killer cells. This is lymphopoiesis.
Cells of the myeloid lineage, which include granulocytes, megakaryocytes and macrophages, are derived from common myeloid progenitors, and are involved in such diverse roles as innate immunity and blood clotting. This is myelopoiesis.
Clinical significance
Stem cell transplant
A stem cell transplant is a transplant intended to replace the progenitor haematopoietic stem cells.
Haematopoietic stem cell transplantation (HSCT) is the transplantation of multipotent haematopoietic stem cells, usually derived from bone marrow, peripheral blood, or umbilical cord blood. It may be autologous (the patient's own stem cells are used), allogeneic (the stem cells come from a donor) or syngeneic (from an identical twin).
It is most often performed for patients with certain cancers of the blood or bone marrow, such as multiple myeloma or leukemia. In these cases, the recipient's immune system is usually destroyed with radiation or chemotherapy before the transplantation. Infection and graft-versus-host disease are major complications of allogeneic HSCT.
Haematopoietic stem cell transplantation remains a dangerous procedure with many possible complications; it is reserved for patients with life-threatening diseases. As survival following the procedure has increased, its use has expanded beyond cancer to autoimmune diseases and hereditary skeletal dysplasias; notably malignant infantile osteopetrosis and mucopolysaccharidosis.
References
Blood
Hematopoietic stem cells
Bone marrow
Immune system
Respiration | Haematopoietic system | [
"Biology"
] | 937 | [
"Immune system",
"Organ systems"
] |
41,615,741 | https://en.wikipedia.org/wiki/Borromean%20nucleus | In nuclear physics, a Borromean nucleus is an atomic nucleus comprising three bound components in which any subsystem of two components is unbound. This has the consequence that if one component is removed, the remaining two comprise an unbound resonance, so that the original nucleus is split into three parts.
The name is derived from the Borromean rings, a system of three linked rings in which no pair of rings is linked.
Examples of Borromean nuclei
Many Borromean nuclei are light nuclei near the nuclear drip lines that have a nuclear halo and low nuclear binding energy. For example, the nuclei 6He, 11Li, and 14Be each possess a two-neutron halo surrounding a core containing the remaining nucleons. These are Borromean nuclei because the removal of either neutron from the halo will result in a resonance unbound to one-neutron emission, whereas the dineutron (the particles in the halo) is itself an unbound system. Similarly, 17Ne is a Borromean nucleus with a two-proton halo; both the diproton and 16F are unbound.
Additionally, 9Be is a Borromean nucleus comprising two alpha particles and a neutron; the removal of any one component would produce one of the unbound resonances 8Be or 5He.
Several Borromean nuclei such as 9Be and the Hoyle state (an excited resonance in 12C) play an important role in nuclear astrophysics. Namely, these are three-body systems whose unbound components (formed from 4He) are intermediate steps in the triple-alpha process; this limits the rate of production of heavier elements, for three bodies must react nearly simultaneously.
Borromean nuclei consisting of more than three components can also exist. These also lie along the drip lines; for instance, 8He and 19B are five-body Borromean systems with a four-neutron halo. It is also possible that nuclides produced in the alpha process (such as 12C and 16O) may be clusters of alpha particles, having a similar structure to Borromean nuclei.
For a number of years, the heaviest known Borromean nucleus was 22C. Heavier species along the neutron drip line have since been observed; these and undiscovered heavier nuclei along the drip line are also likely to be Borromean nuclei with varying numbers (3, 5, 7, or more) of bodies.
See also
Efimov state
Three-body force
Halo nucleus
References
Nuclear physics | Borromean nucleus | [
"Physics"
] | 485 | [
"Nuclear physics"
] |
24,723,477 | https://en.wikipedia.org/wiki/Valienamine | Valienamine is a C-7 aminocyclitol found as a substructure of pseudooligosaccharides such as the antidiabetic drug acarbose and the antibiotic validamycin. It can be found in Actinoplanes species.
It is an intermediate formed by microbial degradation of validamycins.
References
External links
Valienamine on chemblink.com
Cyclitols
Cyclohexenes
Amines | Valienamine | [
"Chemistry"
] | 98 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
24,724,562 | https://en.wikipedia.org/wiki/Draper%20point | In physics, the Draper point is the approximate temperature above which almost all solid materials visibly glow as a result of black-body radiation. It was established at by John William Draper in 1847.
Bodies at temperatures just below the Draper point radiate primarily in the infrared range and emit negligible visible light. The value of the Draper point can be calculated using Wien's displacement law: the peak frequency (in hertz) emitted by a blackbody relates to temperature as follows:

$\nu_{\text{peak}} = \frac{\alpha k T}{h}$

where
$k$ is the Boltzmann constant,
$h$ is the Planck constant,
$T$ is temperature (in kelvins),
and $\alpha \approx 4.965$ is the dimensionless Wien displacement constant (from the wavelength form of the law, so $\nu_{\text{peak}}$ here is the frequency corresponding to the peak wavelength).
Substituting the Draper point into this equation produces a frequency of 83 THz, or a wavelength of 3.6 μm, which is well into the infrared and completely invisible to the human eye. However, the leading edge of the blackbody radiation curve extends, at a small fraction of peak intensity, to the near-infrared and far-red (approximately the range 0.7–1 μm), which are weakly visible as a dull red.
According to the Stefan–Boltzmann law, a black body at the Draper point emits 23 kW of radiation per square meter, almost exclusively infrared.
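A short numerical check of the figures quoted above, assuming standard CODATA constants and the wavelength-form Wien constant used in the equation:

```python
# Quick check of the quoted values for the Draper point
k = 1.380649e-23        # Boltzmann constant, J/K
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
alpha = 4.965114        # Wien displacement constant (wavelength form)

T = 798.0                     # Draper point, K
nu_peak = alpha * k * T / h   # ~8.3e13 Hz (83 THz)
lam_peak = c / nu_peak        # ~3.6e-6 m (3.6 um)
radiated = sigma * T ** 4     # ~2.3e4 W/m^2 (23 kW per square meter)
print(nu_peak, lam_peak, radiated)
```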
See also
Incandescence
References
Heat transfer
Thermodynamics
Electromagnetic radiation | Draper point | [
"Physics",
"Chemistry",
"Mathematics"
] | 260 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Electromagnetic radiation",
"Radiation",
"Thermodynamics",
"Dynamical systems"
] |
24,728,109 | https://en.wikipedia.org/wiki/Superflare | Superflares are very strong explosions observed on stars with energies up to ten thousand times that of typical solar flares. The stars in this class satisfy conditions which should make them solar analogues, and would be expected to be stable over very long time scales.
The original nine candidates were detected by a variety of methods. No systematic study was possible until the launch of the Kepler space telescope, which monitored a very large number of solar-type stars with very high accuracy for an extended period. This showed that a small proportion of stars had violent outbursts. In many cases there were multiple events on the same star. Younger stars were more likely to flare than old ones, but strong events were seen on stars as old as the Sun.
The flares were initially explained by postulating giant planets in very close orbits, such that the magnetic fields of the star and planet were linked. The orbit of the planet would warp the field lines until the instability released magnetic field energy as a flare. However, no such planet has shown up as a Kepler transit and this theory has been abandoned.
All superflare stars show quasi-periodic brightness variations interpreted as very large starspots carried round by rotation. Spectroscopic studies found spectral lines that were clear indicators of chromospheric activity associated with strong and extensive magnetic fields. This suggests that superflares only differ in scale from solar flares.
Attempts have been made to detect past solar superflares from nitrate concentrations in polar ice, from historical observations of auroras, and from those radioactive isotopes that can be produced by solar energetic particles. Although three events and a few candidates have been found in the carbon-14 records in tree rings, it is not possible to associate them definitely with superflare events.
Solar superflares would have drastic effects, especially if they occurred as multiple events. Since they can occur on stars of the same age, mass and composition as the Sun this cannot be ruled out, but no indication of solar superflares have been found for the past ten millennia. However, solar-type superflare stars are very rare and are magnetically much more active than the Sun; if solar superflares do occur, it may be in well-defined episodes that occupy a small fraction of its time.
Superflare stars
A superflare star is not the same as a flare star, which usually refers to a very late spectral type red dwarf. The term is restricted to large transient events on stars that satisfy the following conditions:
The star is in spectral class F8 to G8
It is on or near the main sequence
It is single or part of a very wide binary
It is not a rapid rotator
It is not exceedingly young
Essentially such stars may be regarded as solar analogues.
Originally nine superflare stars were found, some of them similar to the Sun.
Original superflare candidates
The original paper identified nine candidate objects from a literature search:
Type gives the spectral classification including spectral type and luminosity class.
V (mag) means the normal apparent visual magnitude of the star.
EW(He) is the equivalent width of the 5875.6Å He I D3 line seen in emission.
The observations vary for each object. Some are X-ray measurements, others are visual, photographic, spectroscopic or photometric. The energies for the events vary from 2 × 10^33 to 2 × 10^38 ergs.
Kepler discoveries
The Kepler spacecraft is a space observatory designed to find planets by the method of transits. A photometer continually monitors the brightness of 150,000 stars in a fixed area of the sky (in the constellations of Cygnus, Lyra and Draco) to detect changes in brightness caused by planets passing in front of the stellar disc. More than 90,000 are G-type stars (similar to the Sun) on or near the main sequence. The observed area corresponds to about 0.25% of the entire sky. The photometer is sensitive to wavelengths of 400–865 nm: the entire visible spectrum and part of the infrared. The photometric accuracy achieved by Kepler is typically 0.01% (0.1 mmag) for 30 minute integration times of 12th magnitude stars.
G-type stars
The high accuracy, the large number of stars observed and the long period of observation make Kepler ideal for detecting superflares. Studies published in 2012 and 2013 involved 83,000 stars over a period of 500 days (much of the data analysis was carried out with the help of five first-year undergraduates). The stars were selected from the Kepler Input Catalog to have Teff, the effective temperature, between 5100 and 6000 K (the solar value is 5750 K), in order to find stars of similar spectral class to the Sun, and a surface gravity log g > 4.0 to eliminate sub-giants and giants. The spectral classes range from F8 to G8. The integration time was 30 minutes in the original study. The studies found 1,547 superflares on 279 solar-type stars. The most intense events increased the brightness of the stars by 30% and had an energy of 10^36 ergs. White-light flares on the Sun change the brightness by about 0.01%, and the strongest flares have a visible-light energy of about 10^32 ergs. (All energies quoted are in the optical bandpass and so are lower limits, since some energy is emitted at other wavelengths.) Most events were much less energetic than this: flare amplitudes below 0.1% of the stellar value and energies of 2 × 10^33 ergs were detectable with the 30-minute integration. The flares had a rapid rise followed by an exponential decay on a time scale of 1–3 hours. The most powerful events corresponded to energies ten thousand times greater than the largest flares observed on the Sun. Some stars flared very frequently: one star showed 57 events in 500 days, a rate of one every nine days. For the statistics of flares, the number of flares decreased with energy E roughly as E^−2, a similar behaviour to solar flares. The duration of the flare increased with its energy, again in accordance with the solar behaviour.
Some Kepler data are taken at one-minute sampling, though inevitably with lower accuracy. Using these data on a smaller sample of stars reveals flares that are too brief for reliable detection with 30-min integrations, allowing detection of events as low as 10^32 ergs, comparable with the brightest flares on the Sun. The occurrence frequency as a function of energy remains a power law E^−n when extended to lower energies, with n around 1.5. At this time resolution some superflares show multiple peaks with separations of 100 to 1000 seconds, again comparable to the pulsations in solar flares. The star KIC 9655129 showed two periods, of 78 and 32 minutes, suggesting magnetohydrodynamic oscillations in the flaring region. These observations suggest that superflares differ only in scale, and not in type, from solar flares.
Superflare stars show a quasi-periodic brightness variation, which is interpreted as evidence of starspots carried around by stellar rotation. This allows an estimate of the rotation period of the star; values range from less than one day up to tens of days (the value for the Sun is 25 days). On the Sun, radiometer monitoring from satellites shows that large sunspots can reduce the brightness by up to 0.2%. In superflare stars the most common brightness variations are 1–2%, though they can be as great as 7–8%, suggesting that the area of the starspots can be very much larger than anything found on the Sun. In some cases the brightness variations can be modelled by only one or two large starspots, though not all cases are so simple. The starspots could be groups of smaller spots or single giant spots.
Flares are more common in stars with short periods. However, the energy of the largest flares is not related to the period of rotation. Stars with larger variations also have much more frequent flares; there is as well a tendency for them to have more energetic flares. Large variations can be found on even the most slowly rotating stars: one star had a rotation period of 22.7 days and variations implying spot coverage of 2.5% of the surface, over ten times greater than the maximum solar value. By estimating the size of the starspots from the amplitude variation, and assuming solar values for the magnetic fields in the spots (1000 G), it is possible to estimate the energy available: in all cases there is enough energy in the field to power even the largest flares observed. This suggests that superflares and solar flares have essentially the same mechanism.
In order to determine whether superflares can occur on the Sun, it is important to narrow the definition of Sun-like stars. When the temperature range is divided into stars with Teff above and below 5600 K (early and late G-type stars), stars of lower temperature are about twice as likely to show superflare activity as those in the solar range and those that do so have more flares: the occurrence frequency of flares (number per star per year) is about five times as great in the late-type stars. It is well known that both the rotation rate and the magnetic activity of a star decrease with age in G-type stars. When flare stars are divided into fast and slow rotators, using the rotation period estimated from brightness variations, there is a general tendency for the fastest-rotating (and presumably youngest) stars to show a greater probability of activity: in particular, stars rotating in less than 10 days are 20–30 times more likely to have activity. Nevertheless, 44 superflares were found on 19 stars with similar temperatures to the Sun and periods greater than 10 days (out of 14,000 such stars examined); four superflares with energies in the range 1–5 × 10^33 ergs were detected on stars rotating more slowly than the Sun (of about 5000 in the sample). The distribution of flares with energy has the same shape for all classes of star: although Sun-like stars are less likely to flare, they have the same proportion of very energetic flares as younger and cooler stars.
K and M type stars
Kepler data have also been used to search for flares on stars of later spectral types than G. A sample of 23,253 stars with effective temperature Teff less than 5150 K and surface gravity log g > 4.2, corresponding to main sequence stars later than K0V, was examined for flares over a time period of 33.5 days. 373 stars were identified as having obvious flares. Some stars had only one flare, while others showed as many as fifteen. The strongest events increased the brightness of the star by 7–8%. This is not radically different from the peak brightness of flares on G-type stars; however, since K and M stars are less luminous than type G, this suggests that flares on these stars are less energetic. Comparing the two classes of stars studied, it seems that M stars flare more frequently than K stars but the duration of each flare tends to be shorter. It is not possible to draw any conclusions about the relative proportion of G and K type stars showing superflares, or about the frequency of flares on those stars that do show such activity, since the flare detection algorithms and criteria in the two studies are quite different.
Most (though not all) of the K and M stars show the same quasi-periodic brightness variations as the G stars. There is a tendency for more energetic flares to occur on more variable stars; however flare frequency is only weakly related to variability.
Hot Jupiters as an explanation
When superflares were originally discovered on solar-type stars it was suggested that these eruptions may be produced by the interaction of the star's magnetic field with the magnetic field of a gas-giant planet orbiting so close to the primary that the magnetic fields were linked. Rotation or orbital motion would wind up the magnetic fields until a reconfiguration of the fields would cause an explosive release of energy. The RS Canum Venaticorum variables are close binaries, with orbital periods between 1 and 14 days, in which the primary is an F- or G-type main sequence star, and with strong chromospheric activity at all orbital phases. These systems have brightness variations attributed to large starspots on the primary; some show large flares thought to be caused by magnetic reconnection. The companion is close enough to spin up the star by tidal interactions.
A gas giant, however, would not be massive enough to spin up a star. But the magnetic field of a nearby exoplanet may close stellar magnetic field lines, resulting in a star rotating at a faster speed for a given age due to less magnetic braking. It is unclear whether these effects would be measurable in a star's properties, such as rotation speed and chromospheric activity. Kepler discovered a number of closely orbiting gas giants, known as hot Jupiters; some studies of two such systems may have indicated periodic variations of the chromospheric activity of the primary synchronized to the orbital period of the companion.
Not all planetary transits can be detected by Kepler, since the planetary orbit may be out of the line of sight to Earth. However, the hot Jupiters orbit so close to the primary that the chance of a transit is about 10%. If superflares were caused by close planets the 279 flare stars discovered should have about 28 transiting companions; none of them actually showed evidence of transits, effectively excluding this explanation. Similarly, a search for superflares at radio wavelengths that may be caused by hot Jupiters interacting magnetically with their stars failed to detect any such flares.
Spectroscopic observations of superflare stars
Spectroscopic studies of superflares allow their properties to be determined in more detail, in the hope of detecting the cause of the flares. The first studies were made using the high dispersion spectrograph on the Subaru Telescope in Hawaii. Some 50 apparently solar-type stars, known from the Kepler observations to show superflare activity, have been examined in detail. Of these, only 16 showed evidence of being visual or spectroscopic binaries; these were excluded since close binaries are frequently active, while in the case of visual binaries there is the chance of activity taking place on the companion. Spectroscopy allows accurate determinations of the effective temperature, the surface gravity and the abundance of elements beyond helium ('metallicity'); most of the 34 single stars proved to be main sequence stars of spectral type G and similar composition to the Sun. Since properties such as temperature and surface gravity change over the lifetime of a star, stellar evolution theory allows an estimate of the age of a star: in most cases the age appeared to be above several hundred million years. This is important since very young stars are known to be much more active. Nine of the stars conformed to the narrower definition of solar-type given above, with temperatures greater than 5600 K and rotation periods longer than 10 days; some had periods above 20 or even 30 days. Only five of the 34 could be described as fast rotators.
Observations from LAMOST have been used to measure chromospheric activity of 5,648 solar-like stars in the Kepler field, including 48 superflare stars. These observations show that superflare stars are generally characterized by larger chromospheric emissions than other stars, including the Sun. However, superflare stars with activity levels lower than, or comparable to, the Sun do exist, suggesting that solar flares and superflares most likely share the same origin. The very large ensemble of solar-like stars included in this study enables detailed and robust estimates of the relation between chromospheric activity and the occurrence of superflares.
All the stars showed the quasi-periodic brightness variations, ranging from 0.1% to nearly 10%, interpreted as the rotation of large starspots. When large spots exist on a star, the activity level of the chromosphere becomes high; in particular, large chromospheric plages form around sunspot groups. The intensities of certain solar and stellar lines generated in the chromosphere, particularly the lines of ionised calcium (Ca II) and the Hα line of hydrogen, are known to be indicators of magnetic activity. Observations of the Ca lines in stars of similar age to the Sun even show cyclic variations reminiscent of the 11-year solar cycle. By observing certain infrared lines of Ca II for the 34 superflare stars it was possible to estimate their chromospheric activity. Measurements of the same lines at points within an active region on the Sun, together with simultaneous measurements of the local magnetic field, show that there is a general relation between field and activity.
Although the stars show a clear correlation between rotational speed and activity, this does not exclude activity on slowly rotating stars: even stars as slow as the Sun can have high activity. All the superflare stars observed had more activity than the Sun, implying larger magnetic fields. There is also a correlation between the activity of a star and its brightness variations (and therefore the starspot coverage): all stars with large amplitude variations showed high activity.
Knowing the approximate area covered by starspots from the size of the variations, and the field strength estimated from the chromospheric activity, allows an estimate of the total energy stored in the magnetic field; in all cases there was enough energy stored in the field to account for even the largest superflares. Both the photometric and the spectroscopic observations are consistent with the theory that superflares are different only in scale from solar flares, and can be accounted for by the release of magnetic energy in active regions very much larger than those on the Sun. Nevertheless, these regions can appear on stars with masses, temperatures, compositions, rotation speeds and ages similar to the Sun.
Detecting past superflares on the Sun
Since stars apparently similar to the Sun can produce superflares it is natural to ask if the Sun itself can do so, and to try to find evidence that it has done in the past. Large flares are invariably accompanied by energetic particles, and these particles produce effects if they reach the Earth. The Carrington Event of 1859, the largest flare of which we have direct observation, produced global auroral displays extending close to the equator. Energetic particles can produce chemical changes in the atmosphere, which can be permanently recorded in the polar ice. Fast protons generate distinctive isotopes, particularly carbon-14, which can be taken up and preserved by living creatures.
Nitrate concentrations in polar ice
When solar energetic particles reach the Earth's atmosphere they cause ionisation that creates nitric oxide (NO) and other reactive nitrogen species, which then precipitate out in the form of nitrates. Since all energetic charged particles are deflected to a greater or lesser extent by the geomagnetic field, they enter preferentially at the polar latitudes; since high latitudes also contain permanent ice, it is natural to look for the nitrate signature of particle events in ice cores. A study of a Greenland ice core extending back to 1561 AD achieved resolutions of 10 or 20 samples a year, allowing in principle the detection of single events. Precise dates (within one or two years) can be achieved by counting annual layers in the cores, checked by identification of deposits associated with known volcanic eruptions. The core contained an annual variation of nitrate concentration, accompanied by a number of 'spikes' of different amplitudes. The strongest of these in the entire record was dated to within a few weeks of the Carrington event of 1859. However, other events can produce nitrate spikes, including biomass burning which also produces enhanced ammonium concentrations. An examination of fourteen ice cores from Antarctic and Arctic regions showed large nitrate spikes: however, none of them were dated to 1859 other than the one already mentioned, and that one seems to be too soon after the Carrington event and too short to be explained by it. All such spikes were associated with ammonium and other chemical indicators of combustion. The conclusion is that nitrate concentrations cannot be used as indicators of historic solar activity.
Single events from cosmogenic isotopes
When energetic protons enter the atmosphere they create isotopes by reactions with the major components; the most important of these is carbon-14 (14C), which is created when secondary neutrons react with nitrogen. 14C, which has a half-life of 5,730 years, reacts with oxygen to form carbon dioxide which is taken up by plants; dating wood by its 14C content was the original basis of radiocarbon dating. If wood of known age is available the process can be reversed. Measuring the 14C content and using the half-life allows estimation of the content when the wood was formed. The growth rings of trees show patterns, caused by various environmental factors: dendrochronology uses these growth rings of trees, compared across overlapping sequences, to establish accurate dates. Applying this method shows that atmospheric 14C does indeed vary with time, due to solar activity. This is the basis of the carbon dating calibration curve. It can also be used to detect any peaks in production caused by solar flares, if those flares create enough energetic particles to produce a measurable increase in 14C.
An examination of the calibration curve, which has a time resolution of five years, showed three intervals in the last 3,000 years in which 14C increased significantly. On the basis of this two Japanese cedar trees were examined with a resolution of a single year, and showed an increase of 1.2% in AD 774, some twenty times larger than anything expected from the normal solar variation. This peak steadily diminished over the next few years. The result was confirmed by studies of German oak, bristlecone pine from California, Siberian larch, and Kauri wood from New Zealand. All determinations agreed on both the time and amplitude of the effect. In addition, measurements of coral skeletons from the South China Sea showed substantial variations in 14C over a few months around the same time; however, the date could only be established to within a period of ±14 years around 783 AD.
Carbon-14 is not the only isotope that can be produced by energetic particles. Beryllium-10 (10Be, half-life 1.4 million years) is also formed from nitrogen and oxygen, and deposited in polar ice. However, 10Be deposition can be strongly related to local weather and shows extreme geographic variability; it is also more difficult to assign dates. Nevertheless, a 10Be increase during the 770s was found in an ice core from the Antarctic, though the signal was less striking because of the lower time resolution (several years); another smaller increase was seen in Greenland. When data from two sites in North Greenland and one in the West Antarctic, all taken with a one-year resolution, were compared they all showed a strong signal: the time profile also matched well with the 14C results (within the uncertainty of dating for the 10Be data). Chlorine-36 (36Cl, half-life 301 thousand years) can be produced from argon and deposited in polar ice; because argon is a minor atmospheric constituent the abundance is low. The same ice cores which showed 10Be also provided increases of 36Cl, though with a resolution of five years a detailed match was impossible.
A second event in AD 993/4 has also been found from 14C in tree rings, but at a lower intensity, and another event was found for 660 BC. This event also produced measurable increases in 10Be and 36Cl in Greenland ice cores.
If these events are presumed to be produced by energetic particles from large flares, it is not easy to estimate the particle energy in the flare or compare it with known events. The Carrington event does not appear in the cosmogenic records, and neither did any other large particle event that has been directly observed. The flux of particles must be estimated by calculating production rates of radiocarbon, and then modelling the behaviour of the CO2 once it has entered the carbon cycle; the fraction of the created radiocarbon taken up by trees depends to some extent on that cycle. The energetic particle spectrum of a solar flare varies considerably between events; one with a 'hard' spectrum, with more high-energy protons, will be more efficient at producing a 14C increase. The most powerful flare which also had a hard spectrum that has been observed instrumentally took place in February 1956 (the beginning of nuclear testing obscures any possible effects in the 14C record); it has been estimated that if a single flare were responsible for the AD 774/5 event it would need to be 25–50 times more powerful than this. One active region on the Sun may produce several flares over its lifetime, and the effects of such a sequence would be aggregated over the one-year period covered by a single 14C measurement; however, the total effect would still be ten times greater than anything observed in a similar period in modern times.
Solar flares are not the only possibility for producing the cosmogenic isotopes. A long or short gamma-ray burst has been initially proposed as a possible cause of the AD 774/5 event. However, this explanation turned out to be very unlikely, and the current paradigm is that these events are caused by extreme solar particle events.
Historical records
A number of attempts have been made to find additional evidence supporting the superflare interpretation of the isotope peak around AD 774/5 by studying historical records. The Carrington event produced auroral displays as far south as the Caribbean and Hawaii, corresponding to a geomagnetic latitude of about 22°; if the event of 774/5 corresponded to an even more energetic flare, there should have been a global auroral event.
Usoskin et al. cited references to aurorae in Chinese chronicles for AD 770 (twice), 773 and 775. They also quote a "red cross" in the sky in AD 773, 774, or 776 from the Anglo-Saxon Chronicle; "inflamed shields" or "shields burning with a red colour" seen in the sky over Germany in AD 776 recorded in the Royal Frankish Annals; "fire in heaven" seen in Ireland in AD 772; and an apparition in Germany in AD 773 interpreted as riders on white horses. The enhanced solar activity around the 14C increase is confirmed by the Chinese auroral record on AD 776 January 12, as detailed by Stephenson et al. The Chinese records describe more than ten bands of white lights "like the spread silk" stretching across eight Chinese constellations; the display lasted for several hours. The observations, made during the Tang dynasty, were made from the capital Chang'an.
Nevertheless, there are a number of difficulties involved when trying to link the 14C results to historical chronicles. Tree ring dates may be in error because there is no discernible ring for a year (unusually cold weather), or two rings (a second growth during a warm autumn). If the cold weather were global, following a large volcanic eruption, it is conceivable that the effects could also be global: the apparent 14C date may not always match the chronicles.
For the isotope peak in AD 993/994, Hayakawa et al. surveyed contemporary historical documents and found clustering of auroral observations in late 992, although the relationship of these observations to the isotope peak is still under discussion.
General solar activity in the past
Superflares seem to be associated with a general high level of magnetic activity. As well as looking for individual events, it is possible to examine the isotope records to find the activity level in the past and identify periods when it may have been much higher than now. Lunar rocks provide a record unaffected by geomagnetic shielding and transport processes. Both non-solar cosmic rays and solar particle events can create isotopes in rocks, and both are affected by solar activity. The cosmic rays are much more energetic and penetrate more deeply, and can be distinguished from the solar particles which affect the outer layers. Several different radioisotopes can be produced with very different half-lives; the concentration of each may be regarded as representing an average of particle flux over its half-life. Since fluxes must be converted into isotope concentrations by simulations there is a certain model-dependence here. The data are consistent with the view that the flux of energetic solar particles with energies above a few tens of MeV has not changed over periods ranging from five thousand to five million years. Of course, a period of intense activity over a time scale short with respect to the half-life would not be detected.
14C measurements, even with low time resolution, can indicate the state of solar activity over the last 11,000 years until about 1900. Although radiocarbon dating has been applied as far back as 50,000 years, during the deglaciations at the start of the Holocene the biosphere and its carbon uptake changed dramatically making estimation before this impractical; after about 1900 the Suess effect and nuclear bomb-tests makes interpretation difficult. 10Be concentrations in stratified polar ice cores provide an independent measure of activity. Both measures agree reasonably with each other and with the Zurich sunspot number of the last two centuries. As an additional check, it is possible to recover the isotope Titanium-44 (44Ti, half-life 60 years) from meteorites; this provides a measurement of activity that is not affected by changes in transport process or the geomagnetic field. Although it is limited to about the last two centuries, it is consistent with all but one of the 14C and 10Be reconstructions and confirms their validity. The energetic flare events discussed above are rare; on long time scales (significantly more than a year), the radiogenic particle flux is dominated by cosmic rays. The inner Solar System is shielded by the general magnetic field of the Sun, which is strongly dependent on the time within a cycle and the strength of the cycle. The result is that times of powerful activity show up as decreases in the concentrations of all these isotopes. Because cosmic rays are also influenced by the geomagnetic field, difficulties in reconstructing this field set a limit to the accuracy of the reconstructions.
The 14C reconstruction of activity over the last 11,000 years shows no period significantly higher than the present; in fact, the general level of activity in the second half of the 20th century was the highest since 9000 BC. In particular, the activity in the period around the AD 774 14C event (averaged over decades) was somewhat lower than the long-term average, while the AD 993 event coincided with a small minimum. A more detailed scrutiny of the period AD 731 to 825, combining several 14C datasets of one- and two-year resolution with auroral and sunspot accounts does show a general increase in solar activity (from a low level) after about AD 733, reaching its highest level after 757 and remaining high in the 760s and 770s; there were several aurorae around this time, and even a low-latitude aurora in China.
Effects of a hypothetical solar superflare
The effect of the sort of superflare apparently found on the original nine candidate stars would be catastrophic for the Earth, causing serious damage to the atmosphere and to life, although it would not be nearly as powerful as a gamma-ray burst. It would also leave traces in the Solar System; the event on S Fornacis, for example, involved an increase in the star's luminosity by a factor of about twenty. Thomas Gold suggested that the glaze on the top surface of certain lunar rocks might be caused by a solar outburst involving a luminosity increase of over a hundred times for 10 to 100 seconds at some time in the last 30,000 years. Apart from the terrestrial effects, this would cause local ice melting followed by refreezing as far out as the moons of Jupiter. There is no evidence of superflares on this scale having occurred in the Solar System.
Superflares have also been suggested as a solution to the faint young Sun paradox.
Probability of a hypothetical solar superflare
An estimate based on the original Kepler photometric studies suggested a frequency on solar-type stars (early G-type and rotation period more than 10 days) of once every 800 years for an energy of 10^34 erg and every 5000 years at 10^35 erg. One-minute sampling provided statistics for less energetic flares and gave a frequency of one flare of energy 10^33 erg every 500 to 600 years for a star rotating as slowly as the Sun; this would be rated as X100 on the solar flare scale. This is based on a straightforward comparison of the number of stars studied with the number of flares observed. An extrapolation of the empirical statistics for solar flares to an energy of 10^35 erg suggests a frequency of one in 10,000 years.
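To make the scaling explicit, here is a minimal sketch of how a cumulative rate follows from a differential power law dN/dE ∝ E^−n; the anchor point is taken from the estimate quoted above, and the exponent n = 2 is assumed purely for illustration:

```python
def mean_wait_years(E, E_ref, rate_ref_per_yr, n=2.0):
    """If the differential flare frequency goes as dN/dE ~ E^-n, the
    cumulative rate above E scales as E^(1-n); anchoring the cumulative
    rate at E_ref gives rate(E) = rate_ref * (E / E_ref)**(1 - n)."""
    rate = rate_ref_per_yr * (E / E_ref) ** (1.0 - n)
    return 1.0 / rate

# Hypothetical anchor from the text: one 1e34 erg superflare per 800 years
for E in (1e34, 1e35, 1e36):
    print(f"E = {E:.0e} erg: ~{mean_wait_years(E, 1e34, 1 / 800.0):,.0f} yr between flares")
```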
However, this does not match the known properties of superflare stars. Such stars are extremely rare in the Kepler data; one study showed only 279 such stars in 31,457 studied, a proportion below 1%; for older stars this fell to 0.25%. Also, about half of the stars which were active showed repeating flares: one had as many as 57 events in 500 days. Concentrating on solar-type stars, the most active averaged one flare every 100 days; the frequency of superflare occurrence in the most active Sun-like stars is 1000 times larger than that of the general average for such stars. This suggests that such behaviour is not present throughout a star's lifetime, but is confined to episodes of extraordinary activity. This is also suggested by the clear relation between the magnetic activity of a star and its superflare activity; in particular, superflare stars are much more active (based on starspot area) than the Sun.
There is no evidence for any flare greater than the one observed by Carrington in 1859 and the November 2003 flare from active region 10486 (both about 4×10³² erg, or 1/2,000 of the largest superflares) in the last 200 years. Although the larger event of ca. AD 775 in the ¹⁴C record is unambiguously identified as a solar event, the flare energy associated with it is unclear, and it is unlikely to have exceeded 10³² erg.
The more energetic superflares seem to be ruled out by energetic considerations for the Sun, which suggest it is not capable of a flare of more than 10³⁴ erg. A calculation of the free energy in magnetic fields in active regions that could be released as flares gives an upper limit of around 3×10³² erg, suggesting that the most energetic superflare possible is about that of the Carrington event.
Some stars have magnetic fields five times that of the Sun and rotate much faster; these could theoretically produce a flare of up to 10³⁴ erg, which could explain some superflares at the lower end of the range. Going higher than this may require an anti-solar rotation curve, one in which the polar regions rotate faster than the equatorial regions.
See also
Flare star
Solar flare
Stellar magnetic field
References
Stellar phenomena | Superflare | [
"Physics"
] | 7,269 | [
"Physical phenomena",
"Stellar phenomena"
] |
24,728,612 | https://en.wikipedia.org/wiki/Carbene%20analog | Carbene analogs in chemistry are carbenes with the carbon atom replaced by another chemical element. Like regular carbenes, they appear in chemical reactions as reactive intermediates, and with special precautions they can be stabilized and isolated as chemical compounds. Carbenes have some practical utility in organic synthesis, but carbene analogs are mostly laboratory curiosities investigated only in academia. Carbene analogs are known for elements of group 13, group 14, group 15 and group 16.
Group 13 carbene analogs
In group 13 elements the boron carbene analog is called a borylene or boranylidene.
Group 14 carbene analogs
The heavier group 14 carbenes are silylenes, R2Si:, germylenes R2Ge: (for example, diphosphagermylene), stannylenes R2Sn: and plumbylenes R2Pb:, collectively known as metallylenes and regarded as monomers for polymetallanes. The oxidation state for these compounds is +2 and stability increases with principal quantum number (moving down a group in the periodic table). This makes dichloroplumbylene PbCl2 and dichlorostannylene SnCl2 stable ionic compounds, although they exist as polymers or ion pairs.
Group 14 carbene analogs do not form hybrid orbitals but instead retain the (ns)2(np)2 electron configuration due to the increasing s–p gap for the larger elements. Two electrons remain in an s-orbital, and therefore their compounds have exclusively singlet ground states and not the triplet ground state which can be observed in carbenes depending on the substituents. The s-orbital (lone pair) is inert and the vacant p-orbital is very reactive. Stable group 14 carbenes require stabilization of this p-orbital, which is usually accomplished by coordination of a Cp* ligand or coordination to nitrogen, oxygen or phosphorus containing ligands, although stabilization can be achieved through steric protection alone.
General methods for the synthesis of carbon-substituted (aryl or alkyl) metallylenes are reduction of M4+ species or substitution reactions at M2+ halides. Stable metallylenes require bulky substituents in order to prevent nucleophilic attack of the metal center at the p-orbital. Examples of these bulky substituents in R2M: are mesityl, Dis (di(trimethylsilyl)methyl) and adamantyl groups. With insufficient steric shielding the metallylene will form a dimer or a polymer. The first isolable dialkylgermylene was synthesised in 1991:
Me5C5GeCl + LiCH(SiMe3)2 → Me5C5GeCH(SiMe3)2
Me5C5GeCH(SiMe3)2 + LiC(SiMe3)3 → (Me3Si)3CGeCH(SiMe3)2
Stable germylenes of this type also require bulky ligands:
Ge[N(SiMe3)2]2 + 2 LiC5H3(C10H7)2 → Ge[C5H3(C10H7)2]2 + 2 LiN(SiMe3)2
The C–M–C bond angle in metallylenes is less than 120°, confirming hybridization other than sp2. The higher p-character of the C–M(II) bond compared to the C–M(IV) bond is reflected in its slightly longer bond length.
N-heterocyclic silylenes are known to be stable for months and have been studied extensively.
Group 15 carbene analogs
In the group 15 elements the neutral nitrogen carbene analog (RN) is called a nitrene. The phosphorus analog is a phosphinidene. There are charged group 15 carbene analogs as well, most notably phosphenium ions (R2P+) which are isolobal with (hetero-)carbenes possessing a singlet ground state.
Group 16 carbene analogs
Carbene analogs of group 16 elements were first reported in 2009. Sulfur, selenium and tellurium dications have been found to be stabilized by the diiminopyridine ligand DIMPY. For example, the reaction of the triflate S(OTf)2 and (2,6-diisopropylphenyl)2DIMPY at −78 °C results in an air-stable dicationic sulfur compound with a naked S2+ atom coordinated to three nitrogen atoms through dative bonds.
References
Carbenes | Carbene analog | [
"Chemistry"
] | 976 | [
"Inorganic compounds",
"Functional groups",
"Octet-deficient functional groups",
"Organic compounds",
"Carbenes"
] |
24,729,735 | https://en.wikipedia.org/wiki/C8H3ClO3 | The molecular formula C8H3ClO3 (molar mass: 182.56 g/mol, exact mass: 181.9771 u) may refer to:
3-Chlorophthalic anhydride
4-Chlorophthalic anhydride
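As a quick arithmetic check, the molar mass quoted above can be reproduced from standard atomic weights. The short sketch below (plain Python, using rounded IUPAC atomic weights) recovers the 182.56 g/mol figure.

```python
# A quick check of the molar mass quoted above, using standard atomic
# weights (values rounded to published IUPAC figures).
atomic_weight = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}

composition = {"C": 8, "H": 3, "Cl": 1, "O": 3}  # C8H3ClO3

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"{molar_mass:.2f} g/mol")  # 182.56 g/mol, matching the value above
```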
Molecular formulas | C8H3ClO3 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
35,990,343 | https://en.wikipedia.org/wiki/Methacrylonitrile | Methacrylonitrile (or 2-Methylprop-2-enenitrile), MeAN in short, is a chemical compound that is an unsaturated aliphatic nitrile, widely used in the preparation of homopolymers, copolymers, elastomers, and plastics and as a chemical intermediate in the preparation of acids, amides, amines, esters, and other nitriles. MeAN is also used as a replacement for acrylonitrile in the manufacture of an acrylonitrile/butadiene/styrene-like polymer. It is a clear and colorless (to slightly yellow) liquid that has a bitter almond smell.
It is toxic by ingestion, inhalation, and skin absorption.
Exposure and regulation
Since MeAN is present in polymeric coating materials as found in many everyday use items, humans are exposed to it by skin absorption. Aside from this there is an occupational exposure, and low levels of MeAN are also present in the smoke of unfiltered cigarettes made from air-cured or flue-cured tobaccos.
Due to the toxicity of MeAN, the U.S. Department of Health & Human Services has limited the concentration of methacrylonitrile-derived polymer in resinous and polymeric coating materials to 41%. Its use in food packaging is further limited to 0.5 mg per square inch of food-contact surface, and only 50 ppm, or 0.005% MeAN is permitted in chloroform-soluble coating components in water containers (21 CFR, § 175.300). A time-weighted average (TWA) threshold limit value of 1 ppm (3 mg/m3) for MeAN exposure was adopted by the American Conference of Governmental Industrial Hygienists.
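The ACGIH figure above can be checked with the standard occupational-hygiene conversion between ppm and mg/m³. The sketch below assumes the usual 24.45 L/mol molar volume of an ideal gas at 25 °C and 1 atm (an assumption about the reference conditions) together with the molecular weight of methacrylonitrile; it reproduces the quoted 3 mg/m³ after rounding.

```python
# Sanity check of the TWA conversion quoted above (1 ppm = 3 mg/m3).
# Uses the standard conversion mg/m3 = ppm * MW / 24.45, where 24.45 L/mol
# is the molar volume of an ideal gas at 25 C and 1 atm (assumed reference
# conditions).
MW_MEAN = 67.09  # g/mol, methacrylonitrile (C4H5N)

def ppm_to_mg_per_m3(ppm: float, molecular_weight: float) -> float:
    return ppm * molecular_weight / 24.45

print(f"{ppm_to_mg_per_m3(1.0, MW_MEAN):.2f} mg/m3")  # ~2.74, rounded to 3
```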
The National Cancer Institute (USA) nominated MeAN for research because of its potential for human exposure, the common features shared with the known carcinogen acrylonitrile and the shortcoming of knowledge in toxicity and carcinogenicity of MeAN.
Structure and reactivity
Methacrylonitrile is an acrylonitrile (AN) with an additional CH3 group on the second carbon. Polymerization does not require a catalyst and happens rapidly in the absence of a stabilizer.
Because of its double bond, additional reactions are possible with biological molecules. The extra methyl group of MeAN lessens the electron-withdrawing effect caused by the nitrile, so that reactions that form negative charge on the alpha carbon are faster with AN as the reactant. Conversely, reactions that form a positive charge on said carbon (e.g., cytochrome P450 oxidation of the double bond) are faster with MeAN as the reactant. As a result, in metabolism, MeAN conjugates less with glutathione (GSH) than AN, and is activated more easily.
Synthesis
Poly(methacrylonitrile) is generally made via emulsion or solution polymerization. The commercial product can be stabilized by the addition of 50 ppm hydroquinone monoethyl ether. The polymerization of MeAN is carried out in tetrahydrofuran (THF) with the disodium salt of polyethylene oxide (PEO). MeAN is also commercially produced by the vapor-phase reaction of isobutylene with ammonia and oxygen in the presence of a catalyst. Acetonitrile, hydrogen cyanide and acrolein are known by-products. It is used in the preparation of homo- and copolymers, elastomers, coatings and plastics. It can be used as a replacement for acrylonitrile in similar reactions. MeAN can also be synthesized by dehydration of methacrylamide or from isopropylene oxide and ammonia.
Reactions
MeAN can undergo electropolymerization if it is submitted to electroreduction at metallic cathodes in an organic anhydrous medium, for example acetonitrile. There are two types of polymers that can be obtained at the end of the synthesis: a physisorbed polymer and a grafted polymer. The mechanism accounting for the non-grafted polymer is well understood: it proceeds via the formation of a radical anion (the product of reduction of the vinylic monomer), which dimerizes in solution through a radical–radical coupling (RRC) mechanism to deliver a di-anion acting as the initiator of a polymerization reaction in solution.
Metabolism
There are different metabolizing pathways for methacrylonitrile, that are elaborated here:
First of all, methacrylonitrile can be directly conjugated with GSH, which leads to the formation of S-(2-cyanopropyl) GSH, which can be metabolized to N-acetyl-S- (2-cyanopropyl) cysteine (NACPC), which can be excreted in the urine.
Due to this, glutathione is depleted to certain degrees after MeAN exposure. After oral exposure to 100 mg/kg MeAN in rats, the maximum depletion was noticed in the liver at 39% of control. This depletion, however, is less than that found after AN administration. This is likely because MeAN exists in part bound to red blood cells, and is therefore unavailable for GSH conjugation. Studies using radiolabeled carbon indicate that the primary route by which methacrylonitrile leaves the body is the urine, at 43% of the dose. An additional 18% is excreted in faeces (15%) and exhaled air (2.5%). This means that about 40% of MeAN does not leave the body immediately and is either bound to macromolecules or forms unexcretable conjugates. The red blood cells retained significant amounts of radioactivity: more than 50% of the radioactivity in erythrocytes was detected as covalently bound to hemoglobin and membrane proteins.
Secondly, methacrylonitrile can be metabolised in the liver by CYP2E1 (a Cytochrome-P450 enzyme). This is the most important enzyme for the oxidative metabolism, but also other cytochrome P-450 enzymes may be involved. The oxidative reaction by Cytochrome-P450 enzymes will lead to the formation of an epoxide intermediate, which shows reactivity. This epoxide intermediate is highly unstable and could lead to the formation of cyanide via different transformations. For example, via epoxide hydratase (EH) or via interactions with a sulfhydryl compound, which leads to the formation of a cyanohydrin that could rearrange to an aldehyde and thereby can possibly result in cyanide release. The epoxide can also be conjugated with GSH.
It has been shown that treatment of mice with carbon tetrachloride, which acts on the mixed function oxygenase system, results in much lower cyanide concentrations than controls and greatly reduced toxicity of MeAN, indicating that cyanide production is indeed the main pathway of toxicity, unlike AN, which is more carcinogenic.
For more information about the toxicity of cyanide, see cyanide poisoning.
Toxicity in humans
Human toxicity has not been well analyzed. Minimum threshold values for odor detection are reported to be at 7 ppm, with the majority of subjects detecting it at higher concentrations of 14 or 24 ppm. At concentrations of 24 ppm, throat, eye and nose irritation occur. No deaths caused by methacrylonitrile poisoning have been reported.
Effects on animals
Inhalation, and oral and dermal administration, of methacrylonitrile can cause acute deaths in animals, often preceded by convulsions and loss of consciousness. Signs of the toxic effects of methacrylonitrile in rats after oral absorption are ataxia, trembling, convulsions, mild diarrhea and irregular breathing. The main cause of toxic effects at lethal (and threshold) levels of MeAN is damage to the central nervous system. This, along with the signs of toxic effects displayed by all tested animals, is consistent with cyanide poisoning. Methacrylonitrile differs herein from acrylonitrile, which does not show cyanide related signs of toxicity.
Cyanide production after exposure to MeAN has been tested, and intravenous injection of MeAN in rabbits results in production of significant levels of cyanide in the blood. In Wistar rats too, toxicity is related to the in vivo liberation of cyanide after exposure to MeAN. The acute toxicity of MeAN can also be antagonized with cyanide antidotes.
A difference in resistance to the lethal effects of MeAN can be noted between species. For inhalation, a 4-hour exposure period gives an LC50 of 328–700 ppm for rats, 88 ppm for guinea pigs, 37 ppm for rabbits and 36 ppm for mice. In dogs, acute lethality by inhalation is also noted, although no LC50 has been determined. Oral administration of MeAN has been tested on rats, mice and gerbils, showing an LD50 of 200 mg/kg for rats, 17 mg/kg for mice and 4 mg/kg for gerbils. Dermal administration to rabbits causes death at an LD50 of 268 mg/kg. The NOAEL and LOAEL values for rats are determined at 50 mg/kg for NOAEL and 100 mg/kg for LOAEL. This is based on another sign of methacrylonitrile poisoning: urine retention, with 58% of rats showing bladder distention at an administered dose of 100 mg/kg.
Reproductive toxicity was tested in rats, but different outcomes have been reported. Willhite et al. suggest a LOAEL for reproductive effects of 50 mg/kg, while a report by the National Research Council claims no significant reproductive effects have been found.
Lastly, carcinogenic, mutagenic and genotoxic effects have been tested but unlike acrylonitrile, methacrylonitrile does not show signs of any such effects.
References
Monomers
Nitriles | Methacrylonitrile | [
"Chemistry",
"Materials_science"
] | 2,152 | [
"Monomers",
"Nitriles",
"Functional groups",
"Polymer chemistry"
] |
35,992,087 | https://en.wikipedia.org/wiki/Text%2C%20Speech%20and%20Dialogue | Text, Speech and Dialogue (TSD) is an annual conference involving topics on natural language processing and computational linguistics. The meeting is held every September alternating in Brno and Plzeň, Czech Republic.
The first Text, Speech and Dialogue conference took place in Brno in 1998.
Overview
The TSD series has evolved into a prime forum for interaction between researchers in both spoken and written language processing from all over the world. The proceedings of TSD form a book published by Springer-Verlag in its Lecture Notes in Artificial Intelligence (LNAI) series.
TSD proceedings are regularly indexed by the Thomson Reuters Conference Proceedings Citation Index. Moreover, the LNAI series is listed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC and COMPENDEX.
The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Plzeň. The conference is supported by the International Speech Communication Association.
Conference topics
Conference topics were:
Corpora and language resources (monolingual, multilingual, text and spoken corpora, large web corpora, disambiguation, specialized lexicons, dictionaries)
Speech recognition (multilingual, continuous, emotional speech, handicapped speaker, out-of-vocabulary words, alternative way of feature extraction, new models for acoustic and language modelling)
Tagging, classification and parsing of text and speech (morphological and syntactic analysis, synthesis and disambiguation, multilingual processing, sentiment analysis, credibility analysis, automatic text labeling, summarization, authorship attribution)
Speech and spoken language generation (multilingual, high fidelity speech synthesis, computer singing)
Semantic processing of text and speech (information extraction, information retrieval, data mining, semantic web, knowledge representation, inference, ontologies, sense disambiguation, plagiarism detection)
Integrating applications of text and speech processing (natural language understanding, question-answering strategies, assistive technologies)
Machine translation (statistical, rule-based, example-based, hybrid, text and speech translation)
Automatic dialogue systems (self-learning, multilingual, question-answering systems, dialogue strategies, prosody in dialogues)
Multimodal Techniques and Modelling (video processing, facial animation, visual speech synthesis, user modeling, emotions and personality modeling)
Past keynote speakers
See also
The list of computer science conferences contains other academic conferences.
References
External links
ACL Member Portal
TSD official website
Perfil-CC rank
Text To Speech Dialogue
Computer science conferences
Academic conferences
Linguistics
Recurring events established in 1998
University of West Bohemia
Masaryk University | Text, Speech and Dialogue | [
"Technology"
] | 536 | [
"Computer science",
"Computer science conferences"
] |
38,858,191 | https://en.wikipedia.org/wiki/Relativistic%20system%20%28mathematics%29 | In mathematics, a non-autonomous system of ordinary differential equations is defined to be a dynamic equation on a smooth fiber bundle over ℝ. For instance, this is the case of non-relativistic non-autonomous mechanics, but not relativistic mechanics. To describe relativistic mechanics, one should consider a system of ordinary differential equations on a smooth manifold Q whose fibration over ℝ is not fixed. Such a system admits transformations of a coordinate t on ℝ depending on other coordinates on Q. Therefore, it is called a relativistic system. In particular, Special Relativity on Minkowski space is of this type.
Since the configuration space Q of a relativistic system has no preferable fibration over ℝ, the velocity space of a relativistic system is a first order jet manifold of one-dimensional submanifolds of Q. The notion of jets of submanifolds generalizes that of jets of sections of fiber bundles, which are utilized in covariant classical field theory and non-autonomous mechanics. A first order jet bundle is projective and, following the terminology of Special Relativity, one can think of its fibers as being spaces of the absolute velocities of a relativistic system. Given coordinates on Q, a first order jet manifold is provided with adapted coordinates possessing appropriate transition functions between them.
The relativistic velocities of a relativistic system are represented by elements of a fibre bundle built from the tangent bundle TQ of Q. A generic equation of motion of a relativistic system is then formulated in terms of these relativistic velocities. For instance, if Q is Minkowski space with a Minkowski metric, this yields the equation of motion of a relativistic charge in the presence of an electromagnetic field.
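For comparison only, and not as part of the jet-manifold formulation above, the familiar coordinate form of the equation of motion of a relativistic charge, written in the proper-time parametrization of Special Relativity, is the covariant Lorentz force law (sign and unit conventions vary by source):

```latex
m \frac{d^{2}x^{\mu}}{d\tau^{2}} = q\, F^{\mu}{}_{\nu} \frac{dx^{\nu}}{d\tau},
\qquad
F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu},
```

where τ is the proper time, m and q are the mass and charge of the particle, and F is the electromagnetic field strength tensor.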
See also
Non-autonomous system (mathematics)
Non-autonomous mechanics
Relativistic mechanics
Special relativity
References
Krasil'shchik, I. S., Vinogradov, A. M., [et al.], "Symmetries and conservation laws for differential equations of mathematical physics", Amer. Math. Soc., Providence, RI, 1999, .
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010) ().
Differential equations
Classical mechanics
Theory of relativity | Relativistic system (mathematics) | [
"Physics",
"Mathematics"
] | 487 | [
"Mathematical objects",
"Classical mechanics",
"Equations",
"Differential equations",
"Mechanics",
"Theory of relativity"
] |
38,866,057 | https://en.wikipedia.org/wiki/Embedded%20lens | An embedded lens is a gravitational lens that consists of a concentration of mass enclosed by (embedded in) a relative void in the surrounding distribution of matter: both the mass and the presence of a void surrounding it will affect the path of light passing through the vicinity. This is in contrast with the simpler, more familiar gravitational lens effect, in which there is no surrounding void. While any shape and arrangement of increased and decreased mass densities will cause gravitational lensing, an ideal embedded lens would be spherical and have an internal mass density matching that of the surrounding region of space. The gravitational influence of an embedded lens differs from that of a simple gravitational lens: light rays will be bent by different angles and embedded lenses of a cosmologically significant scale would affect the spatial evolution (expansion) of the universe.
In a region of homogeneous density, a spherical embedded lens would correspond to the symmetric concentration of a spherical locality's mass into a smaller sphere (or a point) at its center. For a cosmological lens, if the universe has a non-vanishing cosmological constant Λ, then Λ is required to be the same inside and outside of the void.
The metric describing the geometry within the void can be Schwarzschild or Kottler depending on whether there is a non-zero cosmological constant.
Embedding a lens effectively reduces the gravitational potential's range, i.e., partially shields the lensing potential produced by the lens mass condensation.
For example, a light ray grazing the boundary of a Kottler/Schwarzschild void will not be bent by the lens mass condensation (i.e., does not feel the gravitational potential of the embedded lens) and travels along a straight line path in a flat background universe.
Properties
In order to be an analytical solution of the Einstein's field equation, the embedded lens has to satisfy the following conditions:
The mass of the embedded lens (point mass or distributed), should be the same as that from the removed sphere.
The mass distribution within the void should be spherically symmetric.
The cosmological constant should be the same inside and outside of the embedded lens.
History
A universe with inhomogeneities (galaxies, clusters of galaxies, large voids, etc.) represented by spherical voids containing mass condensations described as above is called a Swiss Cheese Universe.
The concept of the Swiss Cheese Universe was first introduced by Einstein and Straus in 1945.
Swiss Cheese model has been used extensively to model inhomogeneities in the Universe.
For an example, effects of large scale inhomogeneities (such as superclusters) on the observed anisotropy of the temperatures of cosmic microwave background radiation (CMB) was investigated by Rees and Sciama in 1968 using Swiss cheese model (the so-called Rees-Sciama effect).
The distance–redshift relation in a Swiss cheese universe was investigated by Ronald Kantowski in 1969, and by Dyer and Roeder in the 1970s.
The gravitational lensing theory for a single embedded point mass lens in a flat pressureless Friedmann–Lemaître–Robertson–Walker (FLRW) background universe with non-zero cosmological constant has been built by Ronald Kantowski, Bin Chen, and Xinyu Dai in a series of papers.
Embedded Lens vs. Classical Gravitational Lens
The key difference between an embedded lens and a traditional lens is that the mass of a standard lens contributes to the mean of the cosmological density, whereas that of an embedded lens does not. Consequently, the gravitational potential of an embedded lens has a finite range, i.e., there is no lensing effect outside of the void. This is different from a standard lens where the gravitational potential of the lens has an infinite range.
As a consequence of embedding, the bending angle, lens equation, image amplification, image shear, and time delay between multiple images of an embedded lens are all different from those of a standard linearized lens. For example, the potential part of the time delay between image pairs, and the weak lensing shear of an embedded lens, can differ from the standard gravitational lensing theory by more than a few percent.
For an embedded point mass lens, the lens equation to the lowest order takes the standard point-mass form plus correction terms set by the angular size of the void, where θE is the Einstein ring angle of the standard point mass lens. This can be compared with the standard Schwarzschild lens equation, β = θ − θE²/θ, where β is the angular position of the source and θ that of the image.
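The standard equation just quoted is easy to solve explicitly, since it is quadratic in θ. The sketch below (plain Python, with arbitrary illustrative values for θE and β rather than parameters of any real lens) recovers the familiar two image positions of an unembedded point-mass lens; the embedded corrections discussed above would then modify this picture.

```python
import math

# A minimal sketch of solving the standard (unembedded) Schwarzschild
# point-mass lens equation beta = theta - theta_E**2 / theta for the two
# image positions. theta_E and beta are arbitrary illustrative values.
theta_E = 1.0   # Einstein ring angle (arbitrary angular units)
beta = 0.3      # source position in the same units

# The lens equation is quadratic in theta:
#   theta**2 - beta*theta - theta_E**2 = 0
disc = math.sqrt(beta**2 + 4 * theta_E**2)
theta_plus = (beta + disc) / 2    # image outside the Einstein ring
theta_minus = (beta - disc) / 2   # image inside, on the opposite side

print(f"images at theta = {theta_plus:.3f} and {theta_minus:.3f}")
```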
References
Effects of gravity
Theory of relativity
Spacetime
Gravitational lensing | Embedded lens | [
"Physics",
"Mathematics"
] | 920 | [
"Spacetime",
"Vector spaces",
"Space (mathematics)",
"Theory of relativity"
] |
37,418,456 | https://en.wikipedia.org/wiki/Wattle%20%28construction%29 | Wattle is made by weaving flexible branches around upright stakes to form a woven lattice. The wattle may be made into an individual panel, commonly called a hurdle, or it may be formed into a continuous fence. Wattles also form the basic structure for wattle and daub wall construction, where wattling is daubed with a plaster-like substance to make a weather-resistant wall.
History
Evidence of wattle construction was found at Woodcutts Settlement from the British Iron Age, and the Roman Vitruvius wrote about wattles in his book on architecture, De architectura, but the technique goes back to Neolithic times.
Technique
The construction of wattles starts with the uprights, whether they are set into a frame or placed into the ground. Starting at the bottom, flexible willow shoots, called withies, are woven in and out of the uprights (staves).
Wattle and daub
Wattle forms the basis of wattle and daub, a composite building material used for making walls, in which wattle is daubed with a sticky material usually made of some combination of wet soil, clay, sand, animal dung and straw. Wattle and daub has been used for at least 6,000 years, and is still an important construction material in many parts of the world. The technique is similar to modern lath and plaster, a common building material for wall and ceiling surfaces, in which a series of nailed wooden strips are covered with plaster smoothed into a flat surface. Many historic buildings include wattle and daub construction, mostly as infill panels in timber frame construction.
See also
Basket weaving
Lath and plaster
References
External links
How to make wattle fencing step by step
37 Amazing Wattle Fences Around The World
Building materials
Materials
Fences
Perimeter security
Wood products | Wattle (construction) | [
"Physics",
"Engineering"
] | 366 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
37,422,114 | https://en.wikipedia.org/wiki/Titanium%20biocompatibility | Titanium was first introduced into surgeries in the 1950s after having been used in dentistry for a decade prior. It is now the metal of choice for prosthetics, internal fixation, inner body devices, and instrumentation. Titanium is used from head to toe in biomedical implants. One can find titanium in neurosurgery, bone conduction hearing aids, false eye implants, spinal fusion cages, pacemakers, toe implants, and shoulder/elbow/hip/knee replacements along with many more. The main reason why titanium is often used in the body is due to titanium's biocompatibility and, with surface modifications, bioactive surface. The surface characteristics that affect biocompatibility are surface texture, steric hindrance, binding sites, and hydrophobicity (wetting). These characteristics are optimized to create an ideal cellular response. Importantly, patient condition can influence the type of modification necessary; for instance, in patients with steatotic liver disease, different titanium surface modifications provide better outcomes than in patients without fatty liver disease. Some medical implants, as well as parts of surgical instruments, are coated with titanium nitride (TiN).
Biocompatibility
Titanium is considered the most biocompatible metal due to its resistance to corrosion from bodily fluids, bio-inertness, capacity for osseointegration, and high fatigue limit. Titanium's ability to withstand the harsh bodily environment is a result of the protective oxide film that forms naturally in the presence of oxygen. The oxide film is strongly adhered, insoluble, and chemically impermeable, preventing reactions between the metal and the surrounding environment.
Osseointegration interaction and proliferation
High energy surfaces induce angiogenesis during osseointegration
It has been suggested that titanium's capacity for osseointegration stems from the high dielectric constant of its surface oxide, which does not denature proteins (like tantalum and cobalt alloys). Its ability to physically bond with bone gives titanium an advantage over other materials that require the use of an adhesive to remain attached. Titanium implants last longer, and much higher forces are required to break the bonds that join them to the body compared to their alternatives.
Surface properties determine osseointegration
The surface properties of a biomaterial play an important role in determining cellular response (cell adhesion and proliferation) to the material. Titanium's microstructure and high surface energy enable it to induce angiogenesis, which assists in the process of osseointegration.
Surface energy
Redox potential
Titanium can have many different standard electrode potentials depending on its oxidation state. Solid titanium has a standard electrode potential of −1.63 V. Materials with a greater standard electrode potential are more easily reduced, making them better oxidizing agents. Solid titanium therefore prefers to undergo oxidation, making it a better reducing agent.
Surface coating
Titanium naturally passivates, forming an oxide film that becomes heterogeneous and polarized as a function of exposure time to bodily environments. This leads to the increased adsorption of hydroxyl groups, lipoproteins, and glycolipids over time. The adsorption of these compounds changes how the material interacts with the body and can improve biocompatibility. In titanium alloys such as Ti-Zr and Ti-Nb, zirconium and niobium ions that are liberated due to corrosion are not released into the patient's body, but rather added to the passivation layer. The alloying elements in the passive layer add a degree of biocompatibility and corrosion resistance depending on the original alloy composition of the bulk metal prior to corrosion.
Protein surface concentration, Γ, is defined by the equation

Γ = QADS·M/(n·F),

where QADS is the surface charge density in C⋅cm⁻², M is the molar mass of the protein in g⋅mol⁻¹, n is the number of electrons transferred (in this case, one electron for each protonated amino group in the protein), and F is the Faraday constant in C⋅mol⁻¹.
The collision frequency of protein molecules with the surface can be estimated from the following parameters: the diffusion coefficient D = 8.83 × 10⁻⁷ cm²⋅s⁻¹ of the BSA molecule at 310 K, the "diameter" d = 7.2 nm of the protein, which is equivalent to twice the Stokes radius, the Avogadro constant NA = 6.023 × 10²³ mol⁻¹, and the critical bulk supersaturation concentration c* = 0.23 g⋅L⁻¹ (3.3 μM).
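To make the surface-concentration relation concrete, the sketch below evaluates Γ = QADS·M/(n·F) numerically. The molar mass is the approximate value for BSA; the charge density QADS and electron count n are hypothetical placeholder values, not measurements.

```python
# A minimal numeric sketch of the surface-concentration relation
# Gamma = Q_ADS * M / (n * F) given above. Q_ADS and n are hypothetical
# placeholder values chosen for illustration.
F = 96485.0          # Faraday constant, C/mol
M_BSA = 66_500.0     # g/mol, approximate molar mass of bovine serum albumin
Q_ADS = 1.0e-5       # C/cm^2, hypothetical adsorption charge density
n = 60               # hypothetical electrons transferred per protein

gamma = Q_ADS * M_BSA / (n * F)   # g/cm^2
print(f"Gamma = {gamma:.3e} g/cm^2")  # ~1.1e-7 g/cm^2, i.e. ~110 ng/cm^2
```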
Wetting and solid surface
Wetting occurs as a function of two parameters: surface roughness and surface fraction. By increasing wetting, implants can decrease the time required for osseointegration by allowing cells to more readily bind to the surface of an implant. Wetting of titanium can be modified by optimizing process parameters such as temperature, time, and pressure. Titanium with stable oxide layers predominantly consisting of TiO2 results in improved wetting of the implant in contact with physiological fluid.
Adsorption
Corrosion
Mechanical abrasion of the titanium oxide film leads to an increased rate of corrosion.
Titanium and its alloys are not immune to corrosion when in the human body. Titanium alloys are susceptible to hydrogen absorption, which can induce precipitation of hydrides and cause embrittlement, leading to material failure. "Hydrogen embrittlement was observed as an in vivo mechanism of degradation under fretting-crevice corrosion conditions resulting in TiH formation, surface reaction and cracking inside Ti/Ti modular body tapers." Studying and testing titanium behavior in the body allows us to avoid practices that would cause a fatal breakdown in the implant, like the usage of dental products with high fluoride concentration or substances capable of lowering the pH of the media around the implant.
Adhesion
The cells at the implant interface are highly sensitive to foreign objects. When implants are installed into the body, the cells initiate an inflammatory response which could lead to encapsulation, impairing the functioning of the implanted device.
The ideal cell response to a bioactive surface is characterized by biomaterial stabilization and integration, as well as the reduction of potential bacterial infection sites on the surface. One example of biomaterial integration is a titanium implant with an engineered biointerface covered with biomimetic motifs. Surfaces with these biomimetic motifs have been shown to enhance integrin binding and signaling and stem cell differentiation. Increasing the density of ligand clustering also increased integrin binding. A coating consisting of trimers and pentamers increased the bone-implant contact area by 75% when compared to the current clinical standard of uncoated titanium. This increase in area allows for increased cellular integration, and reduces rejection of the implanted device. The Langmuir isotherm is

Γ = Γmax·BADS·c/(1 + BADS·c),

where c is the concentration of the adsorbate, Γmax is the maximum amount of adsorbed protein, and BADS is the affinity of the adsorbate molecules toward adsorption sites. The Langmuir isotherm can be linearized by rearranging the equation to

c/Γ = c/Γmax + 1/(BADS·Γmax).
This simulation is a good approximation of adsorption to a surface when compared to experimental values. The Langmuir isotherm for adsorption of elements onto the titanium surface can be determined by plotting the known parameters. An experiment of fibrinogen adsorption on a titanium surface "confirmed the applicability of the Langmuir isotherm in the description of adsorption of fibrinogen onto Ti surface."
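A short numeric sketch of the isotherm and its linearization follows; Γmax and BADS are arbitrary illustrative constants, not the fitted values from the fibrinogen experiment.

```python
# A small sketch of the Langmuir isotherm quoted above and its linearized
# form. GAMMA_MAX and B_ADS are arbitrary illustrative constants.
GAMMA_MAX = 1.0e-7   # g/cm^2, hypothetical saturation coverage
B_ADS = 2.0e6        # L/g, hypothetical affinity constant

def langmuir(c: float, gamma_max: float = GAMMA_MAX, b: float = B_ADS) -> float:
    """Adsorbed amount Gamma for bulk concentration c (Langmuir isotherm)."""
    return gamma_max * b * c / (1.0 + b * c)

for c in (1e-8, 1e-7, 1e-6, 1e-5):   # bulk concentrations in g/L
    # Linearized form: c/Gamma = c/Gamma_max + 1/(B_ADS*Gamma_max),
    # i.e. c/Gamma plotted against c is a straight line.
    g = langmuir(c)
    print(f"c={c:.0e}  Gamma={g:.3e}  c/Gamma={c / g:.3e}")
```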
See also
Biomaterials: mechanical properties
Metals in medicine
Titanium adhesive bonding
References
Titanium
Biomaterials | Titanium biocompatibility | [
"Physics",
"Biology"
] | 1,604 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
37,429,514 | https://en.wikipedia.org/wiki/Demazure%20conjecture | In mathematics, the Demazure conjecture is a conjecture about representations of algebraic groups over the integers made by Michel Demazure. The conjecture implies that many of the results of his paper can be extended from complex algebraic groups to algebraic groups over fields of other characteristics or over the integers. Demazure's conjecture (for classical groups) was shown to follow from work on standard monomial theory, and Peter Littelmann extended this to all reductive algebraic groups.
References
Representation theory
Conjectures | Demazure conjecture | [
"Mathematics"
] | 99 | [
"Unsolved problems in mathematics",
"Fields of abstract algebra",
"Conjectures",
"Representation theory",
"Mathematical problems"
] |
5,377,788 | https://en.wikipedia.org/wiki/Semicarbazone | In organic chemistry, a semicarbazone is a derivative of imines formed by a condensation reaction between a ketone or aldehyde and semicarbazide. They are classified as imine derivatives because they are formed from the reaction of an aldehyde or ketone with the terminal -NH2 group of semicarbazide, which behaves very similarly to primary amines.
Formation
For ketones
H2NNHC(=O)NH2 + RC(=O)R → R2C=NNHC(=O)NH2
For aldehydes
H2NNHC(=O)NH2 + RCHO → RCH=NNHC(=O)NH2
For example, the semicarbazone of acetone would have the structure (CH3)2C=NNHC(=O)NH2.
Properties and uses
Some semicarbazones, such as nitrofurazone, and thiosemicarbazones are known to have anti-viral and anti-cancer activity, usually mediated through binding to copper or iron in cells. Many semicarbazones are crystalline solids, useful for the identification of the parent aldehydes/ketones by melting point analysis.
A thiosemicarbazone is an analog of a semicarbazone which contains a sulfur atom in place of the oxygen atom.
See also
Carbazone
Carbazide
Thiosemicarbazone
References
External links
Compounds Containing a N-CO-N-N or More Complex Group
Functional groups
Semicarbazones | Semicarbazone | [
"Chemistry"
] | 329 | [
"Functional groups",
"Semicarbazones"
] |
5,377,793 | https://en.wikipedia.org/wiki/Air%20flow%20meter | An air flow meter is a device similar to an anemometer that measures air flow, i.e. how much air is flowing through a tube. It does not measure the volume of the air passing through the tube; it measures the mass of air flowing through the device per unit time. Thus air flow meters are simply an application of mass flow meters for the medium of air. Typically, mass air flow measurements are expressed in the units of kilograms per second (kg/s) or feet per minute (fpm), which can be converted to volume measurements of cubic metres per second (cumecs) or cubic feet per minute (cfm).
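The mass-to-volume conversion mentioned above requires assuming an air density, since a mass flow in kg/s only corresponds to a volumetric flow at given conditions. The sketch below uses the approximate density of dry air at 20 °C and 1 atm as that assumption.

```python
# Sketch of the mass-to-volume conversion mentioned above: a mass flow in
# kg/s corresponds to a volumetric flow only once an air density is assumed.
RHO_AIR = 1.204            # kg/m^3, dry air at 20 C and 1 atm (assumed)
M3_PER_S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def mass_flow_to_cfm(mass_flow_kg_s: float, density: float = RHO_AIR) -> float:
    volume_flow_m3_s = mass_flow_kg_s / density
    return volume_flow_m3_s * M3_PER_S_TO_CFM

print(f"{mass_flow_to_cfm(0.1):.0f} cfm")  # 0.1 kg/s of air -> ~176 cfm
```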
In automobiles
In industrial environments
Air flow meters monitor air (compressed, forced, or ambient) in many manufacturing processes.
In many industries, preheated air (called "combustion air") is added to boiler fuel just before fuel ignition to ensure the proper ratio of fuel to air for an efficient flame. Pharmaceutical factories and coal pulverizers use forced air as a means to force particle movement or ensure a dry atmosphere. Air flow is also monitored in mining and nuclear environments to ensure the safety of people.
See also
Anemometer
List of sensors
Mass flow sensor
:Category:Engines
:Category:Engine fuel system technology
Thermal mass flow meter
References
External links
Miata.net, Repair broken Air Flow Meter, by Zach Warner, 2 January, 2009
Clarks garage, AFM shop manual, Air Flow Meter (AFM) Operation and Testing, 1998
Auto shop 101, AFM sensor
Spitzer, David W. (1990), Industrial Flow Measurement,
Flow meters
Engine fuel system technology | Air flow meter | [
"Chemistry",
"Technology",
"Engineering"
] | 344 | [
"Measuring instruments",
"Flow meters",
"Fluid dynamics"
] |
5,378,527 | https://en.wikipedia.org/wiki/L%20pad | An L pad is a network composed of two impedances that typically resemble the capital letter "L" when drawn on a schematic circuit diagram. It is commonly used for attenuation and for impedance matching.
Speaker L pad
A speaker L pad is a special configuration of rheostats used to control volume while maintaining a constant load impedance on the output of the audio amplifier.
It consists of a parallel and a series rheostat connected in an "L" configuration. As one increases in resistance, the other decreases, thus maintaining a constant impedance, at least in one direction. To maintain constant impedance in both directions, a "T" pad must be used. In loudspeaker systems having a crossover network, it is necessary to maintain impedance to the crossover; this avoids shifting the crossover point.
A constant-impedance load is important in the case of vacuum tube power amplifiers, because such amplifiers do not work as efficiently when terminated into an impedance greatly different from their specified output impedance. Maintaining constant impedance is less important in the case of solid-state electronics.
In high frequency horns, the L pad is seen by the crossover, not the amplifier. L pads may not necessarily use continuously variable rheostats, but instead a multi-position rotating selector switch wired to resistors on the back. Tapped transformers are not L pads; they are autoformers. L pads can also be used at line level, mostly in professional applications.
Audio-frequency (AF) operation
The L pad attenuates the signal by having two separate rheostats connected in an "L" configuration (hence the name). One rheostat is connected in series with the loudspeaker and, as the resistance of this rheostat increases, less power is coupled into the loudspeaker and the loudness of sound produced by the loudspeaker decreases. The second rheostat is connected between the input and ground (earth). As the first rheostat increases in resistance, the second rheostat decreases in resistance, keeping the load impedance (presented at the input of the L pad) constant. The second rheostat usually has a special taper (function of resistance versus rotation) to accommodate the need for constant input impedance.
Radio-frequency (RF) operation
In RF (radio frequency) applications, the L network is the basis of many common impedance matching circuits, such as the pi network employed in amplifiers and the T network that is common in transmatches.
The L network relies on a procedure known as series-parallel transformation. For every series combination of resistance, RS, and reactance, XS, there exists a parallel combination of RP and XP that acts identically to the voltage applied across the series combination. In other words, the series components and the parallel components provide the same impedance at their terminals. The transformation ratio is the ratio of the input and output impedances of the impedance matching network.
The series-parallel transformation allows the input impedance to be dropped down to lower impedances while sustaining a voltage across the circuit. This system works in reverse as well. With the quality factor defined as Q = XS/RS = RP/XP, the transformation equations are

RP = RS(1 + Q²) and XP = XS(1 + 1/Q²).

For the resistance RS and reactance XS in series, RP and XP exist as a parallel combination. One simply needs to know the input impedance RP and to choose the output impedance RS, or conversely know RS and choose RP. Keep in mind that RP must be larger than RS. Because reactance is frequency dependent, the L network will only transform the impedances at one frequency.
Inclusion of two L networks back to back creates what is known as a T-network. T-networks work well for matching an even greater range of impedances.
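As an illustration of how the transformation above fixes a unique matching network, the sketch below sizes a lossless L network between two purely resistive terminations. The 50 Ω source, 300 Ω load, 10 MHz frequency, and the series-inductor/shunt-capacitor realization are all arbitrary example choices, not values from the text.

```python
import math

# A minimal sketch of designing a lossless L-network match between two
# resistive terminations, using the series-parallel transformation above
# with Q = Xs/Rs = Rp/Xp. All numeric values are arbitrary examples.
RS, RP = 50.0, 300.0     # RS must be the smaller resistance
F_HZ = 10e6              # design frequency

q = math.sqrt(RP / RS - 1.0)   # loaded Q fixed by the transformation ratio
xs = q * RS                    # required series reactance (low-R side)
xp = RP / q                    # required parallel reactance (high-R side)

# One common realization: series inductor, parallel capacitor.
L = xs / (2 * math.pi * F_HZ)
C = 1.0 / (2 * math.pi * F_HZ * xp)
print(f"Q={q:.2f}  L={L*1e9:.0f} nH  C={C*1e12:.0f} pF")
```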
Impedance matching
If a source and load are both resistive (i.e. Z1 and Z2 have zero or very small imaginary part) then a resistive L pad can be used to match them to each other. Either side of the L pad can be the source or load, but the Z1 side must be the side with the higher impedance.
There is an inherent insertion loss of 10 log10((PL + PR)/PL) dB, where PL is the power dissipated by the load and PR is the power dissipated by the pad resistors. Large positive numbers mean the loss is large.
The loss is a monotonic function of the impedance ratio: higher ratios require higher loss.
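A short numeric sketch of a resistive matching L pad follows, using the textbook minimum-loss design formulas for this topology (series arm on the higher-impedance Z1 side, shunt arm across Z2); the 75 Ω to 50 Ω values are arbitrary examples. For that ratio it reproduces the well-known figure of about 5.7 dB.

```python
import math

# Sketch of a resistive (minimum-loss) L pad matching a higher impedance Z1
# down to a lower impedance Z2: series resistor on the Z1 side, shunt
# resistor across Z2. The 75/50 ohm values are arbitrary examples.
Z1, Z2 = 75.0, 50.0   # Z1 must be the higher impedance

r_series = math.sqrt(Z1 * (Z1 - Z2))          # series arm
r_shunt = Z2 * math.sqrt(Z1 / (Z1 - Z2))      # shunt arm
loss_db = 20 * math.log10(math.sqrt(Z1 / Z2) + math.sqrt(Z1 / Z2 - 1.0))

print(f"R_series={r_series:.1f} ohm  R_shunt={r_shunt:.1f} ohm  "
      f"loss={loss_db:.2f} dB")   # 43.3 ohm, 86.6 ohm, 5.72 dB
```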
Application notes
Speaker L pads are designed to match the impedance of the speaker, so they were commonly available with 4, 8, and 16 Ω impedances.
See also
Π pad
T pad
Notes
References
Silver, H. Ward, Experiment #21: The L-Network (Hands-On Radio), QST, Oct. 2004, pp. 62-63
Basic Car Audio Electronics: "L-Pads" http://www.bcae1.com/lpad.htm
External links
All About Pads
L-Pads
Analog circuits
Resistive components | L pad | [
"Physics",
"Engineering"
] | 1,021 | [
"Physical quantities",
"Analog circuits",
"Resistive components",
"Electronic engineering",
"Electrical resistance and conductance"
] |
5,378,736 | https://en.wikipedia.org/wiki/Bioisostere | In medicinal chemistry, bioisosteres are chemical substituents or groups with similar physical or chemical properties which produce broadly similar biological properties in the same chemical compound. In drug design, the purpose of exchanging one bioisostere for another is to enhance the desired biological or physical properties of a compound without making significant changes in chemical structure. The main use of this term and its techniques are related to pharmaceutical sciences. Bioisosterism is used to reduce toxicity, change bioavailability, or modify the activity of the lead compound, and may alter the metabolism of the lead.
Examples
Classical bioisosteres
Classical bioisosterism was originally formulated by James Moir and refined by Irving Langmuir as a response to the observation that different atoms with the same valence electron structure had similar biological properties.
For example, the replacement of a hydrogen atom with a fluorine atom at a site of metabolic oxidation in a drug candidate may prevent such metabolism from taking place. Because the fluorine atom is similar in size to the hydrogen atom the overall topology of the molecule is not significantly affected, leaving the desired biological activity unaffected. However, with a blocked pathway for metabolism, the drug candidate may have a longer half-life.
Procainamide, an amide, has a longer duration of action than Procaine, an ester, because of the isosteric replacement of the ester oxygen with a nitrogen atom. Procainamide is a classical bioisostere because the valence electron structure of a disubstituted oxygen atom is the same as a trisubstituted nitrogen atom, as Langmuir showed.
Another example is seen in a series of anti-bacterial chalcones. By modifying certain substituents, the pharmacological activity of the chalcone and its toxicity are also modified.
Non-classical bioisosteres
Non-classical bioisosteres may differ in a multitude of ways from classical bioisosteres, but retain the focus on providing similar sterics and electronic profile to the original functional group. Whereas classical bioisosteres commonly conserve much of the same structural properties, nonclassical bioisosteres are much more dependent on the specific binding needs of the ligand in question and may substitute a linear functional group for a cyclic moiety, an alkyl group for a complex heteroatom moiety, or other changes that go far beyond a simple atom-for-atom switch.
For example, a chloride -Cl group may often be replaced by a trifluoromethyl -CF3 group or by a cyano -C≡N group. Depending on the particular molecule used, the substitution may result in little change in activity, or either increased or decreased affinity or efficacy - depending on what factors are important for ligand binding to the target protein. Another example is aromatic rings, where a phenyl -C6H5 ring can often be replaced by a different aromatic ring such as thiophene or naphthalene which may improve efficacy, change specificity of binding or reduce metabolically labile sites on the molecule, resulting in better pharmacokinetic properties.
Alloxanthine is an inhibitor of xanthine oxidase. It is also an isostere of xanthine, the normal substrate for the enzyme. Alloxanthine is considered a non-classical bioisostere because of the scaffold change.
Silafluofen is an organosilicon analogue of pyrethroid insecticide Etofenprox, wherein a carbon center has been replaced by isosteric silicon, and in addition, one hydrogen atom is replaced by isosteric fluorine atom.
Other applications
Bioisosteres of some patented compounds can be discovered automatically and used to circumvent Markush structure patent claims. It has been proposed that key force field features, that is the pharmacophore, be patented instead.
See also
Grimm's hydride displacement law, an early hypothesis to describe bioisosterism
References
Medicinal chemistry | Bioisostere | [
"Chemistry",
"Biology"
] | 831 | [
"Biochemistry",
"Medicinal chemistry",
"nan"
] |