id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
2,016,336 | https://en.wikipedia.org/wiki/National%20CAD%20Standard | The National CAD Standard (NCS) is a collaborative effort in the United States among computer-aided design (CAD) and building information modeling (BIM) users. Its goal is to create a unified approach to the creation of building design data. Development of the NCS is open to all building professionals in a collaborative process led by the buildingSMART Alliance.
The NCS is composed of CAD layer guidelines from the American Institute of Architects, uniform drawing system modules from the Construction Specifications Institute, and BIM implementation and plotting guidelines from the National Institute of Building Sciences. Adoption of the NCS is voluntary; however, adopting companies and agencies can require its use by their associates.
References
External links
Measurement | National CAD Standard | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 140 | [
"Computer-aided design",
"Design engineering",
"Physical quantities",
"Computer standards",
"Quantity",
"Measurement",
"Size",
"Architecture stubs",
"Architecture"
] |
2,017,203 | https://en.wikipedia.org/wiki/Nichols%20plot | The Nichols plot is a plot used in signal processing and control design, named after American engineer Nathaniel B. Nichols. It plots the phase response versus the response magnitude of a transfer function for any given frequency, and as such is useful in characterizing a system's frequency response.
Use in control design
Given a transfer function G(s),
with the closed-loop transfer function defined as
T(s) = G(s) / (1 + G(s)),
the Nichols plot displays 20·log₁₀|G(jω)| versus arg G(jω). Loci of constant 20·log₁₀|T(jω)| and arg T(jω) are overlaid to allow the designer to obtain the closed-loop transfer function directly from the open-loop transfer function. Thus, the frequency ω is the parameter along the curve. This plot may be compared to the Bode plot, in which the two inter-related graphs (20·log₁₀|G(jω)| versus log₁₀(ω), and arg G(jω) versus log₁₀(ω)) are plotted.
In feedback control design, the plot is useful for assessing the stability and robustness of a linear system. This application of the Nichols plot is central to the quantitative feedback theory (QFT) of Horowitz and Sidi, which is a well known method for robust control system design.
In most cases, arg G(jω) refers to the phase of the system's response. Although similar to a Nyquist plot, a Nichols plot is plotted in a Cartesian coordinate system, while a Nyquist plot is plotted in a polar coordinate system.
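As a concrete sketch of the plot's coordinates (stdlib-only; the example loop transfer function G(s) = 1/(s(s+1)) is an arbitrary choice, not one taken from the text):

```python
import cmath
import math

def nichols_points(G, freqs):
    """Coordinates of a Nichols plot: open-loop phase (degrees, x-axis)
    versus open-loop gain (dB, y-axis), parameterized by frequency."""
    pts = []
    for w in freqs:
        g = G(1j * w)
        pts.append((math.degrees(cmath.phase(g)),
                    20 * math.log10(abs(g))))
    return pts

def closed_loop_mag_db(G, w):
    """Closed-loop magnitude |G/(1+G)| in dB -- the quantity the
    constant-magnitude loci overlaid on the chart let one read off
    directly from the open-loop curve."""
    g = G(1j * w)
    return 20 * math.log10(abs(g / (1 + g)))

# Arbitrary example open loop: G(s) = 1/(s(s+1))
G = lambda s: 1 / (s * (s + 1))
pts = nichols_points(G, [0.1, 1.0, 10.0])
```

At ω = 1 this gives the point (−135°, about −3 dB) on the open-loop curve, while the closed-loop magnitude there is 0 dB, which is what the overlaid loci would indicate.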
See also
Hall circles
Bode plot
Nyquist plot
Transfer function
References
External links
Mathematica function for creating the Nichols plot
Plots (graphics)
Signal processing
Classical control theory | Nichols plot | [
"Technology",
"Engineering"
] | 286 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
2,017,311 | https://en.wikipedia.org/wiki/Least%20mean%20squares%20filter | Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is only adapted based on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff, based on their research in single-layer neural networks (ADALINE). Specifically, they used gradient descent to train ADALINE to recognize patterns, and called the algorithm "delta rule". They then applied the rule to filters, resulting in the LMS algorithm.
Problem formulation
The picture shows the various parts of the filter. x(n) is the input signal, which is then transformed by an unknown filter h that we wish to match using ĥ(n). The output from the unknown filter is y(n), which is then interfered with a noise signal ν(n), producing d(n) = y(n) + ν(n). Then the error signal e(n) = d(n) − ŷ(n) is computed, and it is fed back to the adaptive filter, to adjust its parameters in order to minimize the mean square of the error signal e(n).
Relationship to the Wiener filter
The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution for input matrix X and output vector y
is
β̂ = (Xᵀ X)⁻¹ Xᵀ y.
The FIR least mean squares filter is related to the Wiener filter, but minimizing the error criterion of the former does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution.
Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only observable signals x(n), d(n) and e(n); but y(n), ν(n) and h(n) are not directly observable. Its solution is closely related to the Wiener filter.
Definition of symbols
n is the number of the current input sample
p is the number of filter taps
(·)^H (Hermitian transpose or conjugate transpose)
x(n) = [x(n), x(n−1), …, x(n−p+1)]ᵀ
h(n) = [h₀(n), h₁(n), …, h_{p−1}(n)]ᵀ, the unknown filter
y(n) = h^H(n)·x(n)
d(n) = y(n) + ν(n)
ĥ(n): estimated filter; interpret as the estimation of the filter coefficients after n samples
e(n) = d(n) − ŷ(n) = d(n) − ĥ^H(n)·x(n)
Idea
The basic idea behind the LMS filter is to approach the optimum filter weights by updating the filter weights in a manner that converges to the optimum filter weight. This is based on the gradient descent algorithm. The algorithm starts by assuming small weights (zero in most cases) and, at each step, by finding the gradient of the mean square error, the weights are updated. That is, if the MSE-gradient is positive, it implies the error would keep increasing positively if the same weight is used for further iterations, which means we need to reduce the weights. In the same way, if the gradient is negative, we need to increase the weights. The weight update equation is
W_{n+1} = W_n − μ ∇ε[n],
where ε[n] represents the mean-square error and μ is a convergence coefficient.
The negative sign shows that we go down the slope of the error surface to find the filter weights W which minimize the error.
The mean-square error as a function of filter weights is a quadratic function, which means it has only one extremum that minimizes the mean-square error, which is the optimal weight. The LMS thus approaches this optimal weight by descending along the mean-square-error versus filter-weight curve.
Derivation
The idea behind LMS filters is to use steepest descent to find filter weights which minimize a cost function.
We start by defining the cost function as
C(n) = E{|e(n)|²},
where e(n) is the error at the current sample n and E{·} denotes the expected value.
This cost function (C(n)) is the mean square error, and it is minimized by the LMS. This is where the LMS gets its name. Applying steepest descent means to take the partial derivatives with respect to the individual entries of the filter coefficient (weight) vector:
∇C(n) = ∇E{e(n) e*(n)} = −2 E{x(n) e*(n)},
where ∇ is the gradient operator.
Now, ∇C(n) is a vector which points towards the steepest ascent of the cost function. To find the minimum of the cost function we need to take a step in the opposite direction of ∇C(n). To express that in mathematical terms:
ĥ(n+1) = ĥ(n) − (μ/2) ∇C(n) = ĥ(n) + μ E{x(n) e*(n)},
where μ/2 is the step size (adaptation constant). That means we have found a sequential update algorithm which minimizes the cost function. Unfortunately, this algorithm is not realizable until we know E{x(n) e*(n)}.
Generally, the expectation above is not computed. Instead, to run the LMS in an online (updating after each new sample is received) environment, we use an instantaneous estimate of that expectation. See below.
Simplifications
For most systems the expectation function E{x(n) e*(n)} must be approximated. This can be done with the following unbiased estimator
Ê{x(n) e*(n)} = (1/N) Σ_{i=0}^{N−1} x(n−i) e*(n−i),
where N indicates the number of samples we use for that estimate. The simplest case is N = 1:
Ê{x(n) e*(n)} = x(n) e*(n).
For that simple case the update algorithm follows as
ĥ(n+1) = ĥ(n) + μ x(n) e*(n).
Indeed, this constitutes the update algorithm for the LMS filter.
LMS algorithm summary
The LMS algorithm for a p-th order filter can be summarized as
Parameters: p = filter order, μ = step size
Initialisation: ĥ(0) = zeros(p)
Computation: for n = 0, 1, 2, …
  x(n) = [x(n), x(n−1), …, x(n−p+1)]ᵀ
  e(n) = d(n) − ĥ^H(n) x(n)
  ĥ(n+1) = ĥ(n) + μ e*(n) x(n)
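A minimal real-valued sketch of these steps (the filter length, step size, and test signals below are arbitrary illustrative choices, not prescribed by the algorithm):

```python
import random

def lms(x, d, p, mu):
    """Real-valued LMS: error e(n) = d(n) - h_hat^T x(n), followed by
    the update h_hat(n+1) = h_hat(n) + mu * e(n) * x(n).
    Returns the final coefficient estimate and the error sequence."""
    h_hat = [0.0] * p                      # start from zero weights
    errors = []
    for n in range(p - 1, len(x)):
        xn = x[n - p + 1:n + 1][::-1]      # [x(n), x(n-1), ..., x(n-p+1)]
        e = d[n] - sum(h * xi for h, xi in zip(h_hat, xn))
        h_hat = [h + mu * e * xi for h, xi in zip(h_hat, xn)]
        errors.append(e)
    return h_hat, errors

# Identify a made-up unknown 3-tap filter from noisy observations
random.seed(0)
h_true = [0.7, -0.3, 0.1]
x = [random.gauss(0, 1) for _ in range(5000)]
d = [0.0, 0.0] + [sum(h_true[k] * x[n - k] for k in range(3))
                  + random.gauss(0, 0.01) for n in range(2, 5000)]
h_hat, _ = lms(x, d, p=3, mu=0.01)
```

After a few thousand samples the estimate ĥ is close to the true taps, illustrating the convergence in mean discussed next.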
Convergence and stability in the mean
As the LMS algorithm does not use the exact values of the expectations, the weights would never reach the optimal weights in the absolute sense, but convergence in the mean is possible. That is, even though the weights may change by small amounts, they change about the optimal weights. However, if the variance with which the weights change is large, convergence in the mean would be misleading. This problem may occur if the value of the step size μ is not chosen properly.
If μ is chosen to be large, the amount with which the weights change depends heavily on the gradient estimate, and so the weights may change by a large value, so that a gradient which was negative at the first instant may now become positive. And at the second instant, the weight may change in the opposite direction by a large amount because of the negative gradient and would thus keep oscillating with a large variance about the optimal weights. On the other hand, if μ is chosen to be too small, the time to converge to the optimal weights will be too large.
Thus, an upper bound on μ is needed, which is given as
0 < μ < 2/λ_max,
where λ_max is the greatest eigenvalue of the autocorrelation matrix R = E{x(n) x^H(n)}. If this condition is not fulfilled, the algorithm becomes unstable and ĥ(n) diverges.
Maximum convergence speed is achieved when
μ = 2/(λ_max + λ_min),
where λ_min is the smallest eigenvalue of R.
Given that μ is less than or equal to this optimum, the convergence speed is determined by λ_min, with a larger value yielding faster convergence. This means that faster convergence can be achieved when λ_min is close to λ_max, that is, the maximum achievable convergence speed depends on the eigenvalue spread of R.
A white noise signal has autocorrelation matrix R = σ²I, where σ² is the variance of the signal. In this case all eigenvalues are equal, and the eigenvalue spread is the minimum over all possible matrices.
The common interpretation of this result is therefore that the LMS converges quickly for white input signals, and slowly for colored input signals, such as processes with low-pass or high-pass characteristics.
It is important to note that the above upper bound on μ only enforces stability in the mean, but the coefficients of ĥ(n) can still grow infinitely large, i.e. divergence of the coefficients is still possible. A more practical bound is
0 < μ < 2/tr[R],
where tr[R] denotes the trace of R. This bound guarantees that the coefficients of ĥ(n) do not diverge (in practice, the value of μ should not be chosen close to this upper bound, since it is somewhat optimistic due to approximations and assumptions made in the derivation of the bound).
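A small sketch of applying this bound (assuming a stationary input, so tr[R] equals p times the input power E[x²(n)], here estimated from samples; the filter length is an arbitrary choice):

```python
import random

def lms_step_size_bound(x, p):
    """Practical LMS step-size bound mu < 2/tr[R]; for a stationary
    input, tr[R] = p * E[x(n)^2], estimated here by the sample power."""
    power = sum(xi * xi for xi in x) / len(x)   # estimate of E[x^2]
    return 2.0 / (p * power)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(100000)]
bound = lms_step_size_bound(x, p=5)   # near 2/5 for unit-variance input
```

In practice one would pick μ well below this value, as the text cautions.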
Normalized least mean squares filter (NLMS)
The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate μ that guarantees stability of the algorithm (Haykin 2002). The Normalised least mean squares filter (NLMS) is a variant of the LMS algorithm that solves this problem by normalising with the power of the input. The NLMS algorithm can be summarised as:
Parameters: p = filter order, μ = step size
Initialisation: ĥ(0) = zeros(p)
Computation: for n = 0, 1, 2, …
  x(n) = [x(n), x(n−1), …, x(n−p+1)]ᵀ
  e(n) = d(n) − ĥ^H(n) x(n)
  ĥ(n+1) = ĥ(n) + μ e*(n) x(n) / (x^H(n) x(n))
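A minimal real-valued sketch of the normalized update (the regularizer eps is a common implementation convention, not part of the definition, and the test signals are arbitrary):

```python
import random

def nlms(x, d, p, mu=0.5, eps=1e-8):
    """Real-valued NLMS: the LMS update divided by the instantaneous
    input power x^T x. eps avoids division by zero for silent input."""
    h_hat = [0.0] * p
    for n in range(p - 1, len(x)):
        xn = x[n - p + 1:n + 1][::-1]      # [x(n), ..., x(n-p+1)]
        e = d[n] - sum(h * xi for h, xi in zip(h_hat, xn))
        norm = sum(xi * xi for xi in xn) + eps
        h_hat = [h + mu * e * xi / norm for h, xi in zip(h_hat, xn)]
    return h_hat

# Identify a made-up 2-tap filter from noiseless observations
random.seed(2)
x = [random.gauss(0, 1) for _ in range(2000)]
d = [0.0] + [0.5 * x[n] - 0.25 * x[n - 1] for n in range(1, 2000)]
h_hat = nlms(x, d, p=2)
```

Because of the normalization, rescaling x and d by a common factor leaves the trajectory of ĥ essentially unchanged, which is exactly the scaling sensitivity the variant removes.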
Optimal learning rate
It can be shown that if there is no interference (ν(n) = 0), then the optimal learning rate for the NLMS algorithm is
μ_opt = 1,
and is independent of the input x(n) and the real (unknown) impulse response h(n). In the general case with interference (ν(n) ≠ 0), the optimal learning rate is
μ_opt = E[|y(n) − ŷ(n)|²] / E[|e(n)|²].
The results above assume that the signals ν(n) and x(n) are uncorrelated to each other, which is generally the case in practice.
Proof
Let the filter misalignment be defined as Λ(n) = |h − ĥ(n)|²; we can derive the expected misalignment for the next sample as:
E[Λ(n+1)] = E[|ĥ(n) + μ e*(n) x(n)/(x^H(n) x(n)) − h|²].
Let δ(n) = ĥ(n) − h and r(n) = ŷ(n) − y(n).
Assuming independence, we have:
E[Λ(n+1)] = Λ(n) + (μ² E[|e(n)|²] − 2μ E[|r(n)|²]) / E[x^H(n) x(n)].
The optimal learning rate is found at dE[Λ(n+1)]/dμ = 0, which leads to:
μ_opt = E[|r(n)|²] / E[|e(n)|²] = E[|y(n) − ŷ(n)|²] / E[|e(n)|²].
See also
Recursive least squares
For statistical techniques relevant to LMS filter see Least squares.
Similarities between Wiener and LMS
Multidelay block frequency domain adaptive filter
Zero-forcing equalizer
Kernel adaptive filter
Matched filter
Wiener filter
References
Monson H. Hayes: Statistical Digital Signal Processing and Modeling, Wiley, 1996.
Simon Haykin: Adaptive Filter Theory, Prentice Hall, 2002.
Simon S. Haykin, Bernard Widrow (editor): Least-Mean-Square Adaptive Filters, Wiley, 2003.
Bernard Widrow, Samuel D. Stearns: Adaptive Signal Processing, Prentice Hall, 1985.
Weifeng Liu, Jose Principe and Simon Haykin: Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley, 2010.
Paulo S.R. Diniz: Adaptive Filtering: Algorithms and Practical Implementation, Kluwer Academic Publishers, 1997.
External links
LMS Algorithm in Adaptive Antenna Arrays www.antenna-theory.com
LMS Noise cancellation demo www.advsolned.com
Digital signal processing
Filter theory
Statistical algorithms | Least mean squares filter | [
"Engineering"
] | 1,962 | [
"Telecommunications engineering",
"Filter theory"
] |
2,018,430 | https://en.wikipedia.org/wiki/Jack%20Dangermond | Jack Dangermond (born 1945) is an American billionaire businessman and environmental scientist, who in 1969 co-founded, with Laura Dangermond, the Environmental Systems Research Institute (Esri), a privately held geographic information systems (GIS) software company. As of July 2023, his net worth was estimated at US$9.3 billion.
Dangermond, Esri's president, works at its headquarters in Redlands, California. He founded the company to perform land-use analysis; however, its focus evolved into GIS-software development, highlighted by the release of ARC/INFO in the early 1980s. The development and marketing of ARC/INFO positioned Esri with the dominant market share among GIS-software developers. Esri's flagship product, ArcGIS, traces its heritage to Dangermond's initial efforts in developing ARC/INFO.
Career
Dangermond grew up in Redlands, the son of Dutch immigrants. His parents owned a plant nursery in the town. Dangermond attended Redlands High School.
Dangermond completed his undergraduate degree in landscape architecture at California State Polytechnic University, Pomona. He then earned a Master in Urban Planning from the University of Minnesota, and a Master of Landscape Architecture degree from the Harvard University Graduate School of Design in 1969. His early work in the school's Laboratory for Computer Graphics and Spatial Analysis (LCGSA) led directly to the development of Esri's ARC/INFO GIS software. He has been awarded 13 honorary doctoral degrees.
Philanthropy
In December 2017, Jack and Laura Dangermond donated $165 million to establish the Jack and Laura Dangermond Preserve on the Pacific coast—the largest ever gift to The Nature Conservancy.
Jack and Laura Dangermond have signed The Giving Pledge.
In January 2020, Jack and Laura Dangermond donated $3 million to the Museum of Redlands fund.
In 2005, Jack helped Duane Marble establish the American Association of Geographers Marble Fund for Geographic Science. This fund serves to advance GIScience education by providing awards to undergraduate and graduate student research. These awards include the "Marble-Boyle Undergraduate Achievement Award," "William L. Garrison Award for Best Dissertation in Computational Geography," and the "Marble Fund Award for Innovative Master's Research in Quantitative Geography."
Honors
Dangermond has received many awards, including:
Officer of the Order of Orange-Nassau (Officier in de Orde van Oranje-Nassau)
Horwood Distinguished Service Award of the Urban and Regional Information Systems Association in 1988
John Wesley Powell Award of the U.S. Geological Survey in 1996
Anderson Medal of the Association of American Geographers in 1998
Cullum Geographical Medal of the American Geographical Society in 1999
EDUCAUSE Medal of EduCause
Honorary doctorate from the University of West-Hungary in 2003
Carl Mannerfelt Gold Medal of the International Cartographic Association in 2007
Honorary doctorate from the University of Minnesota in 2008
Patron's Medal of the Royal Geographical Society in 2010.
Alexander Graham Bell Medal of the National Geographic Society in 2010, together with Roger Tomlinson.
Fellow of the University Consortium for Geographic Information Science in 2012
Recipient of the Lifetime Achievement Award (Champions of the Earth) in 2013.
Audubon Medal of the National Audubon Society in 2015
See also
References
External links
Jack Dangermond, Esri President – Biographical information on Esri's Web site
1945 births
Living people
American people of Dutch descent
American billionaires
American geographers
Businesspeople in software
California State Polytechnic University, Pomona alumni
American environmental scientists
Humphrey School of Public Affairs alumni
Harvard Graduate School of Design alumni
Recipients of the Cullum Geographical Medal
National Geographic Society medals recipients
American technology company founders
21st-century American philanthropists
Recipients of the Royal Geographical Society Patron's Medal
Geographic data and information professionals | Jack Dangermond | [
"Environmental_science"
] | 748 | [
"American environmental scientists",
"Environmental scientists"
] |
1,381,368 | https://en.wikipedia.org/wiki/Superlattice | A superlattice is a periodic structure of layers of two (or more) materials. Typically, the thickness of one layer is several nanometers. It can also refer to a lower-dimensional structure such as an array of quantum dots or quantum wells.
Discovery
Superlattices were discovered early in 1925 by Johansson and Linde after the studies on gold–copper and palladium–copper systems through their special X-ray diffraction patterns. Further experimental observations and theoretical modifications on the field were done by Bradley and Jay, Gorsky, Borelius, Dehlinger and Graf, Bragg and Williams and Bethe. Theories were based on the transition of arrangement of atoms in crystal lattices from disordered state to an ordered state.
Mechanical properties
J.S. Koehler theoretically predicted that by using alternate (nano-)layers of materials with high and low elastic constants, shearing resistance is improved by up to 100 times as the Frank–Read source of dislocations cannot operate in the nanolayers.
The increased mechanical hardness of such superlattice materials was confirmed firstly by Lehoczky in 1978 on Al-Cu and Al-Ag, and later on by several others, such as Barnett and Sproul on hard PVD coatings.
Semiconductor properties
If the superlattice is made of two semiconductor materials with different band gaps, each quantum well sets up new selection rules that affect the conditions for charges to flow through the structure. The two different semiconductor materials are deposited alternately on each other to form a periodic structure in the growth direction. Since the 1970 proposal of synthetic superlattices by Esaki and Tsu, advances in the physics of such ultra-fine semiconductors, presently called quantum structures, have been made. The concept of quantum confinement has led to the observation of quantum size effects in isolated quantum well heterostructures and is closely related to superlattices through the tunneling phenomena. Therefore, these two ideas are often discussed on the same physical basis, but each has different physics useful for applications in electric and optical devices.
Semiconductor superlattice types
Superlattice miniband structures depend on the heterostructure type: type I, type II or type III. For type I, the bottom of the conduction band and the top of the valence subband are formed in the same semiconductor layer. In type II, the conduction and valence subbands are staggered in both real and reciprocal space, so that electrons and holes are confined in different layers. Type III superlattices involve semimetal material, such as HgTe/CdTe. Although the bottom of the conduction subband and the top of the valence subband are formed in the same semiconductor layer in a type III superlattice, similar to a type I superlattice, the band gap of type III superlattices can be continuously adjusted from semiconductor, to zero-band-gap material, to semimetal with negative band gap.
Another class of quasiperiodic superlattices is named after Fibonacci. A Fibonacci superlattice can be viewed as a one-dimensional quasicrystal, where either electron hopping transfer or on-site energy takes two values arranged in a Fibonacci sequence.
Semiconductor materials
Semiconductor materials, which are used to fabricate the superlattice structures, may be divided by the element groups, IV, III-V and II-VI. While group III-V semiconductors (especially GaAs/AlxGa1−xAs) have been extensively studied, group IV heterostructures such as the SixGe1−x system are much more difficult to realize because of the large lattice mismatch. Nevertheless, the strain modification of the subband structures is interesting in these quantum structures and has attracted much attention.
In the GaAs/AlAs system both the difference in lattice constant between GaAs and AlAs and the difference of their thermal expansion coefficient are small. Thus, the remaining strain at room temperature can be minimized after cooling from epitaxial growth temperatures. The first compositional superlattice was realized using the GaAs/AlxGa1−xAs material system.
A graphene/boron nitride system forms a semiconductor superlattice once the two crystals are aligned. Its charge carriers move perpendicular to the electric field, with little energy dissipation. h-BN has a hexagonal structure similar to graphene's. The superlattice has broken inversion symmetry. Locally, topological currents are comparable in strength to the applied current, indicating large valley-Hall angles.
Production
Superlattices can be produced using various techniques, but the most common are molecular-beam epitaxy (MBE) and sputtering. With these methods, layers can be produced with thicknesses of only a few atomic spacings. An example of specifying a superlattice is [Fe(20 Å)/V(30 Å)]₂₀. It describes a bi-layer of 20 Å of iron (Fe) and 30 Å of vanadium (V) repeated 20 times, thus yielding a total thickness of 1000 Å or 100 nm. The MBE technology as a means of fabricating semiconductor superlattices is of primary importance. In addition to the MBE technology, metal-organic chemical vapor deposition (MO-CVD) has contributed to the development of superconductor superlattices, which are composed of quaternary III-V compound semiconductors like InGaAsP alloys. Newer techniques include a combination of gas-source handling with ultrahigh-vacuum (UHV) technologies, such as the use of metal-organic molecules as source materials and gas-source MBE using hybrid gases such as arsine (AsH₃) and phosphine (PH₃).
Generally speaking MBE is a method of using three temperatures in binary systems, e.g., the substrate temperature, the source material temperature of the group III and the group V elements in the case of III-V compounds.
The structural quality of the produced superlattices can be verified by means of X-ray diffraction or neutron diffraction spectra which contain characteristic satellite peaks. Other effects associated with the alternating layering are: giant magnetoresistance, tunable reflectivity for X-ray and neutron mirrors, neutron spin polarization, and changes in elastic and acoustic properties. Depending on the nature of its components, a superlattice may be called magnetic, optical or semiconducting.
Miniband structure
The schematic structure of a periodic superlattice is shown below, where A and B are two semiconductor materials of respective layer thickness a and b (period: d = a + b). When a and b are not too small compared with the interatomic spacing, an adequate approximation is obtained by replacing these fast-varying potentials by an effective potential derived from the band structure of the original bulk semiconductors. It is straightforward to solve 1D Schrödinger equations in each of the individual layers, whose solutions are linear combinations of real or imaginary exponentials.
For a large barrier thickness, tunneling is a weak perturbation with regard to the uncoupled dispersionless states, which are fully confined as well. In this case the dispersion relation E(q), periodic over q with period 2π/d by virtue of the Bloch theorem, is fully sinusoidal, of tight-binding form:
E(q) = E₀ − 2T cos(qd),
and the effective mass changes sign for qd > π/2.
In the case of minibands, this sinusoidal character is no longer preserved. Only high up in the miniband (for wavevectors well beyond π/(2d)) is the top actually 'sensed' and does the effective mass change sign. The shape of the miniband dispersion influences miniband transport profoundly, and accurate dispersion relation calculations are required for wide minibands. The condition for observing single-miniband transport is the absence of interminiband transfer by any process. The thermal quantum k_B·T should be much smaller than the energy difference between the first and second miniband, even in the presence of the applied electric field.
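A small numerical sketch of this tight-binding picture (the period d, band center E₀ and transfer integral T below are made-up numbers in arbitrary units; the miniband width is 4T):

```python
import math

def miniband_energy(q, d, E0, T):
    """Sinusoidal tight-binding miniband E(q) = E0 - 2T cos(q d) of a
    weakly coupled superlattice; the miniband width is 4T."""
    return E0 - 2 * T * math.cos(q * d)

def miniband_curvature(q, d, E0, T):
    """d^2E/dq^2 = 2 T d^2 cos(q d); the effective mass
    m* = hbar^2 / (d^2E/dq^2) changes sign where this crosses zero,
    i.e. at q = pi/(2 d)."""
    return 2 * T * d * d * math.cos(q * d)

d, E0, T = 1.0, 0.1, 0.005    # made-up values, arbitrary units
```

Evaluating the curvature just below and just above q = π/(2d) shows the sign change of the effective mass described in the text.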
Bloch states
For an ideal superlattice a complete set of eigenstates can be constructed by products of plane waves e^{i(k_x·x + k_y·y)} and a z-dependent function f_ν(z) which satisfies the eigenvalue equation
[−(ħ²/2)(d/dz)(1/m(z))(d/dz) + V(z)] f_ν(z) = E_ν f_ν(z).
As m(z) and V(z) are periodic functions with the superlattice period d, the eigenstates are Bloch states with energy E_ν(k). Within first-order perturbation theory in k², one obtains the energy
E_ν(k) ≈ E_ν(0) + ħ²k²/(2m).
Now, f_ν(z) will exhibit a larger probability in the well, so that it seems reasonable to replace the second term by
ħ²k²/(2m_w),
where m_w is the effective mass of the quantum well.
Wannier functions
By definition the Bloch functions are delocalized over the whole superlattice. This may provide difficulties if electric fields are applied or effects due to the superlattice's finite length are considered. Therefore, it is often helpful to use different sets of basis states that are better localized. A tempting choice would be the use of eigenstates of single quantum wells. Nevertheless, such a choice has a severe shortcoming: the corresponding states are solutions of two different Hamiltonians, each neglecting the presence of the other well. Thus these states are not orthogonal, creating complications. Typically, the coupling is estimated by the transfer Hamiltonian within this approach. For these reasons, it is more convenient to use the set of Wannier functions.
Wannier–Stark ladder
Applying an electric field F to the superlattice structure causes the Hamiltonian to exhibit an additional scalar potential eφ(z) = −eFz that destroys the translational invariance. In this case, given an eigenstate with wavefunction Φ₀(z) and energy E₀, the set of states corresponding to wavefunctions Φ_j(z) = Φ₀(z − jd) are eigenstates of the Hamiltonian with energies Ej = E0 − jeFd. These states are equally spaced both in energy (by eFd) and real space (by d) and form the so-called Wannier–Stark ladder. The potential is not bounded for the infinite crystal, which implies a continuous energy spectrum. Nevertheless, the characteristic energy spectrum of these Wannier–Stark ladders could be resolved experimentally.
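A tiny numerical sketch of the ladder (the reference energy, field, and period below are made-up illustrative values; with energies in eV, the electron charge e cancels when F·d is expressed in volts):

```python
def wannier_stark_levels(E0, F, d, js):
    """Wannier-Stark ladder E_j = E0 - j*e*F*d. With energies in eV,
    the factor e drops out when F*d is expressed in volts."""
    return [E0 - j * F * d for j in js]

# Made-up values: 10 nm period, 10 kV/cm field -> rung spacing eFd = 10 meV
levels = wannier_stark_levels(E0=1.5, F=1e6, d=10e-9, js=range(-2, 3))
```

The computed rungs are equally spaced by eFd, the defining feature of the ladder.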
Transport
The motion of charge carriers in a superlattice is different from that in the individual layers: mobility of charge carriers can be enhanced, which is beneficial for high-frequency devices, and specific optical properties are used in semiconductor lasers.
If an external bias is applied to a conductor, such as a metal or a semiconductor, typically an electric current is generated. The magnitude of this current is determined by the band structure of the material, scattering processes, the applied field strength and the equilibrium carrier distribution of the conductor.
A particular case of superlattices called superstripes are made of superconducting units separated by spacers. In each miniband the superconducting order parameter, called the superconducting gap, takes different values, producing a multi-gap, or two-gap or multiband superconductivity.
Recently, Felix and Pereira investigated the thermal transport by phonons in periodic and quasiperiodic graphene-hBN superlattices arranged according to the Fibonacci sequence. They reported that the contribution of coherent thermal transport (wave-like phonons) was suppressed as quasiperiodicity increased.
Other dimensionalities
Soon after two-dimensional electron gases (2DEG) had become commonly available for experiments, research groups attempted to create structures that could be called 2D artificial crystals. The idea is to subject the electrons confined to an interface between two semiconductors (i.e. along the z-direction) to an additional modulation potential V(x, y). Contrary to the classical superlattices (1D/3D, that is, 1D modulation of electrons in 3D bulk) described above, this is typically achieved by treating the heterostructure surface: depositing a suitably patterned metallic gate or etching. If the amplitude of V(x, y) is large compared to the Fermi level, E_F, the electrons in the superlattice should behave similarly to electrons in an atomic crystal with a square lattice (in the example, these "atoms" would be located at positions (na, ma), where n, m are integers and a is the superlattice period).
The difference is in the length and energy scales. Lattice constants of atomic crystals are of the order of 1 Å while those of superlattices (a) are several hundreds or thousands larger, as dictated by technological limits (e.g. electron-beam lithography used for the patterning of the heterostructure surface). Energies are correspondingly smaller in superlattices. Using the simple quantum-mechanically confined-particle model suggests E ∝ 1/a². This relation is only a rough guide, and actual calculations with currently topical graphene (a natural atomic crystal) and artificial graphene (superlattice) show that characteristic band widths are of the order of 1 eV and 10 meV, respectively. In the regime of weak modulation (V ≪ E_F), phenomena like commensurability oscillations or fractal energy spectra (Hofstadter butterfly) occur.
Artificial two-dimensional crystals can be viewed as a 2D/2D case (2D modulation of a 2D system) and other combinations are experimentally available: an array of quantum wires (1D/2D) or 3D/3D photonic crystals.
Applications
The superlattice of palladium-copper system is used in high performance alloys to enable a higher electrical conductivity, which is favored by the ordered structure. Further alloying elements like silver, rhenium, rhodium and ruthenium are added for better mechanical strength and high temperature stability. This alloy is used for probe needles in probe cards.
See also
Cu-Pt type ordering in III-V semiconductor
Tube-based nanostructures
Wannier function
References
H.T. Grahn, "Semiconductor Superlattices", World Scientific (1995).
Morten Jagd Christensen, "Epitaxy, Thin Films and Superlattices", Risø National Laboratory, (1997).
C. Hamaguchi, "Basic Semiconductor Physics", Springer (2001).
Further reading
Condensed matter physics
Spintronics | Superlattice | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,903 | [
"Spintronics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
1,382,381 | https://en.wikipedia.org/wiki/Orthogonal%20transformation | In linear algebra, an orthogonal transformation is a linear transformation T : V → V on a real inner product space V, that preserves the inner product. That is, for each pair of elements of V, we have
Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases.
Orthogonal transformations are injective: if Tv = 0 then 0 = ⟨Tv, Tv⟩ = ⟨v, v⟩, hence v = 0, so the kernel of T is trivial.
Orthogonal transformations in two- or three-dimensional Euclidean space are stiff rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions.
In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of V. The columns of the matrix form another orthonormal basis of V.
If an orthogonal transformation T is invertible (which is always the case when V is finite-dimensional) then its inverse T⁻¹ is another orthogonal transformation whose matrix representation, with respect to an orthonormal basis, is the transpose of that of T: T⁻¹ = Tᵀ.
Examples
Consider the inner-product space with the standard Euclidean inner product and standard basis. Then, the matrix transformation
is orthogonal. To see this, consider
Then,
The previous example can be extended to construct all orthogonal transformations. For example, the following matrices define orthogonal transformations on :
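The specific matrices of these examples are elided in the text, but the defining properties are easy to check numerically. A minimal sketch, assuming NumPy and using a 2×2 rotation matrix as one illustrative orthogonal transformation (not necessarily the matrix the article intends):

```python
import numpy as np

theta = 0.7  # arbitrary angle; a rotation is a standard example of an orthogonal transformation
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: T^T T = I, i.e. the rows (and columns) form an orthonormal basis
assert np.allclose(T.T @ T, np.eye(2))

# The inner product -- hence lengths and angles -- is preserved
u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
assert np.isclose((T @ u) @ (T @ v), u @ v)

# A proper rotation has determinant +1; a reflection would have determinant -1
assert np.isclose(np.linalg.det(T), 1.0)

# The inverse is the transpose
assert np.allclose(np.linalg.inv(T), T.T)
```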
See also
Geometric transformation
Improper rotation
Linear transformation
Orthogonal matrix
Rigid transformation
Unitary transformation
References
Linear algebra | Orthogonal transformation | [
"Mathematics"
] | 403 | [
"Linear algebra",
"Algebra"
] |
1,383,899 | https://en.wikipedia.org/wiki/Linear%20time-invariant%20system | In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t), where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves.
Overview
The defining properties of any LTI system are linearity and time invariance.
Linearity means that the relationship between the input x(t) and the output y(t), both being regarded as functions, is a linear mapping: if a is a constant then the system output to a x(t) is a y(t); if x′(t) is a further input with system output y′(t) then the output of the system to x(t) + x′(t) is y(t) + y′(t), this applying for all choices of a, x(t), x′(t). The latter condition is often referred to as the superposition principle.
Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input x(t) is y(t), then the output due to input x(t − T) is y(t − T). Hence, the system is time invariant because the output does not depend on the particular time the input is applied.
The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system y(t) is simply the convolution of the input x(t) to the system with the system's impulse response h(t): y(t) = (x ∗ h)(t). This is called a continuous-time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: y[n] = (x ∗ h)[n], where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral.
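The discrete-time statement is easy to sketch numerically. Assuming NumPy, with an illustrative three-tap impulse response (not taken from the article):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])        # illustrative impulse response
x = np.array([1.0, 2.0, 0.0, -1.0])  # arbitrary input sequence

# Discrete convolution: y[n] = sum_k x[k] * h[n - k]
y = np.convolve(x, h)

# A unit impulse as input reproduces the impulse response itself
delta = np.zeros(4); delta[0] = 1.0
assert np.allclose(np.convolve(delta, h)[:len(h)], h)

# Time invariance: delaying the input by one sample just delays the output
x_shift = np.roll(np.append(x, 0.0), 1)   # x delayed by one sample
y_shift = np.convolve(x_shift, h)
assert np.allclose(y_shift[1:1 + len(y)], y)
```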
LTI systems can also be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. That is, if the input to a system is the complex waveform A e^{st} for some complex amplitude A and complex frequency s, the output will be some complex constant times the input, say B e^{st} for some new complex amplitude B. The ratio B/A is the transfer function at frequency s.
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input.
LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.
Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals.
A linear system that is not time-invariant can be solved using other approaches such as the Green function method.
Continuous-time systems
Impulse response and convolution
The behavior of a linear, continuous-time, time-invariant system with input signal x(t) and output signal y(t) is described by the convolution integral:
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ   (using commutativity)
where h(t) is the system's response to an impulse: x(τ) = δ(τ). y(t) is therefore proportional to a weighted average of the input function x(τ). The weighting function is h(−τ), simply shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function. When h(τ) is zero for all negative τ, y(t) depends only on values of x prior to time t, and the system is said to be causal.
To understand why the convolution produces the output of an LTI system, let the notation represent the function with variable and constant . And let the shorter notation represent . Then a continuous-time system transforms an input function, into an output function, . And in general, every value of the output can depend on every value of the input. This concept is represented by:
where is the transformation operator for time . In a typical system, depends most heavily on the values of that occurred near time . Unless the transform itself changes with , the output function is just constant, and the system is uninteresting.
For a linear system, must satisfy :
And the time-invariance requirement is:
In this notation, we can write the impulse response as
Similarly:
Substituting this result into the convolution integral:
which has the form of the right side of for the case and
then allows this continuation:
In summary, the input function, , can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown at . The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral.
The mathematical operations above have a simple graphical simulation.
Exponentials as eigenfunctions
An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is,
H f = λ f,
where f is the eigenfunction and λ is the eigenvalue, a constant.
The exponential functions e^{st}, where s ∈ ℂ, are eigenfunctions of a linear, time-invariant operator. A simple proof illustrates this concept. Suppose the input is x(t) = e^{st}. The output of the system with impulse response h(t) is then
∫_{−∞}^{∞} h(t − τ) e^{sτ} dτ
which, by the commutative property of convolution, is equivalent to
∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ = e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ = e^{st} H(s)
where the scalar
H(s) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ
is dependent only on the parameter s.
So the system's response is a scaled version of the input. In particular, for any s ∈ ℂ, the system output is the product of the input e^{st} and the constant H(s). Hence, e^{st} is an eigenfunction of an LTI system, and the corresponding eigenvalue is H(s).
Direct proof
It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems.
Let us set v(t) = e^{st}, some complex exponential, and v_a(t) = e^{s(t + a)}, a time-shifted version of it.
H[v_a](t) = e^{sa} H[v](t) by linearity with respect to the constant e^{sa}.
H[v_a](t) = H[v](t + a) by time invariance of H.
So H[v](t + a) = e^{sa} H[v](t). Setting t = 0 and renaming, we get:
H[v](a) = e^{sa} H[v](0),
i.e. a complex exponential as input gives a complex exponential of the same frequency as output.
Fourier and Laplace transforms
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform
H(s) = L{h(t)} = ∫_0^∞ h(t) e^{−st} dt
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form e^{jωt} where ω ∈ ℝ and j = √(−1)). The Fourier transform H(jω) = F{h(t)} gives the eigenvalues for pure complex sinusoids. Both of H(s) and H(jω) are called the system function, system response, or transfer function.
The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of t less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the bilateral Laplace transform).
The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist
One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency s = jω, where ω = 2πf, we obtain |H(s)|, which is the system gain for frequency f. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)).
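As a concrete sketch (assuming NumPy; the first-order RC low-pass with H(s) = 1/(1 + sRC) is an illustrative choice, not taken from the article), the gain |H(jω)| and phase arg H(jω) can be evaluated directly:

```python
import numpy as np

R, C = 1e3, 1e-6                       # 1 kOhm, 1 uF -> cutoff near 159 Hz

def H(f):
    """Evaluate H(s) = 1/(1 + sRC) at s = j*2*pi*f."""
    s = 1j * 2 * np.pi * f
    return 1 / (1 + s * R * C)

f_c = 1 / (2 * np.pi * R * C)          # cutoff frequency

# At the cutoff frequency: gain 1/sqrt(2) (i.e. -3 dB) and phase -45 degrees
assert np.isclose(abs(H(f_c)), 1 / np.sqrt(2))
assert np.isclose(np.angle(H(f_c), deg=True), -45.0)
```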
Examples
Important system properties
Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time; however, this restriction is not present in other cases such as image processing.
Causality
A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is
h(t) = 0 for all t < 0,
where h(t) is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform. However, when working in the time domain, one normally uses the one-sided Laplace transform which requires causality.
Stability
A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying
‖x(t)‖_∞ < ∞
leads to an output satisfying
‖y(t)‖_∞ < ∞
(that is, a finite maximum absolute value of x(t) implies a finite maximum absolute value of y(t)), then the system is stable. A necessary and sufficient condition is that h(t), the impulse response, is in L1 (has a finite L1 norm):
‖h(t)‖_1 = ∫_{−∞}^{∞} |h(τ)| dτ < ∞.
In the frequency domain, the region of convergence must contain the imaginary axis s = jω.
As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite L1 norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for and equal to a sinusoid at the cut-off frequency for , then the output will be unbounded for all times other than the zero crossings.
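The claim about the sinc impulse response can be checked numerically: its partial L1 integrals keep growing (roughly logarithmically) as the window widens, so the L1 norm is infinite. A sketch assuming NumPy, where np.sinc is the normalized sinc, sin(πt)/(πt):

```python
import numpy as np

def l1_partial(T, dt=0.01):
    """Riemann-sum approximation of the integral of |sinc| over [-T, T]."""
    t = np.arange(-T, T, dt)
    return np.abs(np.sinc(t)).sum() * dt

vals = [l1_partial(T) for T in (10.0, 100.0, 1000.0)]

# The partial integrals do not level off -- sinc has no finite L1 norm,
# so the ideal low-pass filter is not BIBO stable.
assert vals[0] < vals[1] < vals[2]
```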
Discrete-time systems
Almost everything in continuous-time systems has a counterpart in discrete-time systems.
Discrete-time systems from continuous-time systems
In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.
In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If x(t) is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal: x[n] = x(nT),
where T is the sampling period. Before sampling, the input signal is normally run through a so-called Nyquist filter which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component above the folding frequency (or Nyquist frequency) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
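Aliasing is easy to demonstrate: two sinusoids whose frequencies differ by a multiple of the sampling rate produce exactly the same samples. A sketch assuming NumPy, with an illustrative 1 kHz sampling rate:

```python
import numpy as np

T = 0.001                              # sampling period: 1 kHz rate, folding frequency 500 Hz
n = np.arange(100)

f_low = 100.0                          # below the folding frequency
f_high = f_low + 1 / T                 # 1100 Hz: above the folding frequency

x_low = np.cos(2 * np.pi * f_low * n * T)
x_high = np.cos(2 * np.pi * f_high * n * T)

# Without a Nyquist (anti-aliasing) filter, the 1100 Hz tone is
# indistinguishable from the 100 Hz tone after sampling.
assert np.allclose(x_low, x_high)
```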
Impulse response and convolution
Let represent the sequence
And let the shorter notation represent
A discrete system transforms an input sequence, into an output sequence, In general, every element of the output can depend on every element of the input. Representing the transformation operator by , we can write:
Note that unless the transform itself changes with n, the output sequence is just constant, and the system is uninteresting. (Thus the subscript, n.) In a typical system, y[n] depends most heavily on the elements of x whose indices are near n.
For the special case of the Kronecker delta function, the output sequence is the impulse response:
For a linear system, must satisfy:
And the time-invariance requirement is:
In such a system, the impulse response, h[n], characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:
x[m] = Σ_k x[k] δ[m − k],
which expresses x in terms of a sum of weighted delta functions.
Therefore:
where we have invoked for the case and .
And because of , we may write:
Therefore:
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = Σ_{k=−∞}^{∞} x[n − k] h[k]   (commutativity)
which is the familiar discrete convolution formula. The operator can therefore be interpreted as proportional to a weighted average of the function x[k].
The weighting function is h[−k], simply shifted by amount n. As n changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at n=0 is a "time" reversed copy of the unshifted weighting function. When h[k] is zero for all negative k, the system is said to be causal.
Exponentials as eigenfunctions
An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols,
H f = λ f,
where f is the eigenfunction and λ is the eigenvalue, a constant.
The exponential functions z^n = e^{sTn}, where n ∈ ℤ, are eigenfunctions of a linear, time-invariant operator. Here T ∈ ℝ is the sampling interval, and z = e^{sT}, with z, s ∈ ℂ. A simple proof illustrates this concept.
Suppose the input is x[n] = z^n. The output of the system with impulse response h[n] is then
Σ_{k=−∞}^{∞} h[k] z^{n−k}
which is equivalent to the following by the commutative property of convolution
z^n Σ_{k=−∞}^{∞} h[k] z^{−k} = z^n H(z)
where
H(z) = Σ_{k=−∞}^{∞} h[k] z^{−k}
is dependent only on the parameter z.
So z^n is an eigenfunction of an LTI system because the system response is the same as the input times the constant H(z).
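This eigenfunction property can be verified numerically for a finite impulse response. A sketch assuming NumPy, with an illustrative three-tap h (any choice works):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])          # illustrative impulse response
z = 0.9 * np.exp(1j * 0.4)             # an arbitrary complex number

n = np.arange(40)
x = z ** n                             # input: x[n] = z^n

# Eigenvalue: H(z) = sum_k h[k] z^{-k}
H = sum(h[k] * z ** (-k) for k in range(len(h)))

y = np.convolve(x, h)

# Away from the start-up edge, the output is exactly H(z) * z^n
assert np.allclose(y[len(h) - 1:len(n)], H * x[len(h) - 1:len(n)])
```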
Z and discrete-time Fourier transforms
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform
H(z) = Z{h[n]} = Σ_{n=−∞}^{∞} h[n] z^{−n}
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the form e^{jωn}, where ω ∈ ℝ. These can also be written as z^n with z = e^{jω}. The discrete-time Fourier transform (DTFT) H(e^{jω}) = F{h[n]} gives the eigenvalues of pure sinusoids. Both of H(z) and H(e^{jω}) are called the system function, system response, or transfer function.
Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for n < 0. The discrete-time Fourier series may be used for analyzing periodic signals.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,
Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.
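The convolution-to-multiplication property holds exactly for the Z transform, since convolving two finite sequences multiplies their Z-transform polynomials. A small numerical check (NumPy assumed; the sequences are illustrative):

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])
x = np.array([2.0, 1.0, 0.0, -1.0])
y = np.convolve(x, h)                 # time-domain output

# Check Y(z) = H(z) X(z) at several points on the unit circle z = e^{jw}
for w in np.linspace(0.1, np.pi, 7):
    z = np.exp(1j * w)
    X = sum(xk * z ** (-k) for k, xk in enumerate(x))
    H = sum(hk * z ** (-k) for k, hk in enumerate(h))
    Y = sum(yk * z ** (-k) for k, yk in enumerate(y))
    assert np.isclose(Y, H * X)
```

At z = 1 this reduces to the simple identity that the sum of the convolution equals the product of the sums.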
Examples
Important system properties
The input-output characteristics of a discrete-time LTI system are completely described by its impulse response h[n].
Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function is stable.
Causality
A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input. A necessary and sufficient condition for causality is
h[n] = 0 for all n < 0,
where h[n] is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined.
Stability
A system is bounded input, bounded output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if
‖x‖_∞ < ∞
implies that
‖y‖_∞ < ∞
(that is, if bounded input implies bounded output, in the sense that the maximum absolute values of x[n] and y[n] are finite), then the system is stable. A necessary and sufficient condition is that h[n], the impulse response, satisfies
‖h‖_1 = Σ_{n=−∞}^{∞} |h[n]| < ∞.
In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying |z| = 1 for complex z).
Notes
See also
Circulant matrix
Frequency response
Impulse response
System analysis
Green function
Signal-flow graph
References
Further reading
External links
ECE 209: Review of Circuits as LTI Systems – Short primer on the mathematical analysis of (electrical) LTI systems.
ECE 209: Sources of Phase Shift – Gives an intuitive explanation of the source of phase shift in two common electrical LTI systems.
JHU 520.214 Signals and Systems course notes. An encapsulated course on LTI system theory. Adequate for self teaching.
LTI system example: RC low-pass filter. Amplitude and phase response.
Digital signal processing
Electrical engineering
Classical control theory
Signal processing
Frequency-domain analysis
Time domain analysis | Linear time-invariant system | [
"Physics",
"Technology",
"Engineering"
] | 3,967 | [
"Telecommunications engineering",
"Computer engineering",
"Spectrum (physical sciences)",
"Signal processing",
"Frequency-domain analysis",
"Electrical engineering"
] |
1,384,005 | https://en.wikipedia.org/wiki/Absorption%20%28electromagnetic%20radiation%29 | In physics, absorption of electromagnetic radiation is how matter (typically electrons bound in atoms) takes up a photon's energy—and so transforms electromagnetic energy into internal energy of the absorber (for example, thermal energy).
A notable effect of the absorption of electromagnetic radiation is attenuation of the radiation; attenuation is the gradual reduction of the intensity of light waves as they propagate through the medium.
Although the absorption of waves does not usually depend on their intensity (linear absorption), in certain conditions (optics) the medium's transparency changes by a factor that varies as a function of wave intensity, and saturable absorption (or nonlinear absorption) occurs.
Quantifying absorption
Many approaches can potentially quantify radiation absorption, with key examples following.
The absorption coefficient along with some closely related derived quantities
The attenuation coefficient (NB used infrequently with meaning synonymous with "absorption coefficient")
The molar attenuation coefficient (also called "molar absorptivity"), which is the absorption coefficient divided by molarity (see also the Beer–Lambert law)
The mass attenuation coefficient (also called "mass extinction coefficient"), which is the absorption coefficient divided by density
The absorption cross section and scattering cross-section, related closely to the absorption and attenuation coefficients, respectively
"Extinction" in astronomy, which is equivalent to the attenuation coefficient
Other measures of radiation absorption, including penetration depth and skin effect, propagation constant, attenuation constant, phase constant, and complex wavenumber, complex refractive index and extinction coefficient, complex dielectric constant, electrical resistivity and conductivity.
Related measures, including absorbance (also called "optical density") and optical depth (also called "optical thickness")
All these quantities measure, at least to some extent, how well a medium absorbs radiation. Which of them practitioners use varies by field and technique, often simply due to convention.
Measuring absorption
The absorbance of an object quantifies how much of the incident light is absorbed by it (instead of being reflected or refracted). This may be related to other properties of the object through the Beer–Lambert law.
Precise measurements of the absorbance at many wavelengths allow the identification of a substance via absorption spectroscopy, where a sample is illuminated from one side, and the intensity of the light that exits from the sample in every direction is measured. A few examples of absorption are ultraviolet–visible spectroscopy, infrared spectroscopy, and X-ray absorption spectroscopy.
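The Beer–Lambert relation mentioned above is simple to apply: absorbance A = ε·c·ℓ, and the transmitted fraction is 10^(−A). A sketch with illustrative values (the ε used here, ~6220 L·mol⁻¹·cm⁻¹, is roughly that of NADH at 340 nm, quoted only as an example):

```python
import math

def absorbance(epsilon, c, path_length):
    """Beer-Lambert law: A = epsilon * c * l.
    epsilon: molar attenuation coefficient (L mol^-1 cm^-1)
    c: concentration (mol/L); path_length: cm."""
    return epsilon * c * path_length

A = absorbance(epsilon=6220.0, c=50e-6, path_length=1.0)
transmittance = 10 ** (-A)

assert math.isclose(A, 0.311)
# About 49% of the incident light is transmitted in this example
assert 0.48 < transmittance < 0.50
```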
Applications
Understanding and measuring the absorption of electromagnetic radiation has a variety of applications.
In radio propagation, it is represented in non-line-of-sight propagation. For example, see computation of radio wave attenuation in the atmosphere used in satellite link design.
In meteorology and climatology, global and local temperatures depend in part on the absorption of radiation by atmospheric gases (such as in the greenhouse effect) and land and ocean surfaces (see albedo).
In medicine, X-rays are absorbed to different extents by different tissues (bone in particular), which is the basis for X-ray imaging.
In chemistry and materials science, different materials and molecules absorb radiation to different extents at different frequencies, which allows for material identification.
In optics, sunglasses, colored filters, dyes, and other such materials are designed specifically with respect to which visible wavelengths they absorb, and in what proportions.
In biology, photosynthetic organisms require that light of the appropriate wavelengths be absorbed within the active area of chloroplasts, so that the light energy can be converted into chemical energy within sugars and other molecules.
In physics, the D-region of Earth's ionosphere is known to significantly absorb radio signals that fall within the high-frequency electromagnetic spectrum.
In nuclear physics, absorption of nuclear radiation can be used for measuring fluid levels, densitometry or thickness measurements.
The scientific literature describes a system of mirrors and lenses that, with a laser, "can enable any material to absorb all light from a wide range of angles."
See also
Absorption spectroscopy
Albedo
Attenuation
Electromagnetic absorption by water
Hydroxyl ion absorption
Optoelectronics
Photoelectric effect
Photosynthesis
Solar cell
Spectral line
Total absorption spectroscopy
Ultraviolet-visible spectroscopy
References
Scattering, absorption and radiative transfer (optics)
Electromagnetic radiation
Glass physics
Radiation
Spectroscopy | Absorption (electromagnetic radiation) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 896 | [
"Transport phenomena",
"Glass engineering and science",
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Molecular physics",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Instrumental analysis",
"Waves",
"Glass physics",
"Radiation",
"Scattering",
"Conden... |
1,386,588 | https://en.wikipedia.org/wiki/21st%20Century%20Medicine | 21st Century Medicine (21CM) is a California cryobiological research company which has as its primary focus the development of perfusates and protocols for viable long-term cryopreservation of human organs, tissues and cells at temperatures below −100 °C through the use of vitrification. 21CM was founded in 1993.
In 2004 21CM received a $900,000 grant from the U.S. National Institutes of Health (NIH) to study a preservation solution developed by the University of Rochester in New York for extending simple cold storage time of human hearts removed for transplant.
At the July 2005 annual conference of the Society for Cryobiology, 21st Century Medicine announced the vitrification of a rabbit kidney to −135 °C with their vitrification mixture. The kidney was successfully transplanted upon rewarming to a rabbit, the rabbit being euthanized on the 48th day for histological follow-up.
On February 9, 2016, 21st Century Medicine won the Small Mammal Brain Preservation Prize. On March 13, 2018, they won the Large Mammal Brain Preservation Prize.
References
External links
Official website
Cryobiology
Cryogenics | 21st Century Medicine | [
"Physics",
"Chemistry",
"Biology"
] | 235 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Cryogenics",
"Cryobiology",
"Biochemistry"
] |
2,809,585 | https://en.wikipedia.org/wiki/Photobleaching | In optics, photobleaching (sometimes termed fading) is the photochemical alteration of a dye or a fluorophore molecule such that it is permanently unable to fluoresce. This is caused by cleaving of covalent bonds or non-specific reactions between the fluorophore and surrounding molecules. Such irreversible modifications in covalent bonds are caused by transition from a singlet state to the triplet state of the fluorophores. The number of excitation cycles to achieve full bleaching varies. In microscopy, photobleaching may complicate the observation of fluorescent molecules, since they will eventually be destroyed by the light exposure necessary to stimulate them into fluorescing. This is especially problematic in time-lapse microscopy.
However, photobleaching may also be used prior to applying the (primarily antibody-linked) fluorescent molecules, in an attempt to quench autofluorescence. This can help improve the signal-to-noise ratio.
Photobleaching may also be exploited to study the motion and/or diffusion of molecules, for example via the FRAP, in which movement of cellular components can be confirmed by observing a recovery of fluorescence at the site of photobleaching, or FLIP techniques, in which multiple rounds of photobleaching is done so that the spread of fluorescence loss can be observed in cell.
Loss of activity caused by photobleaching can be controlled by reducing the intensity or time-span of light exposure, by increasing the concentration of fluorophores, by reducing the frequency and thus the photon energy of the input light, or by employing more robust fluorophores that are less prone to bleaching (e.g. cyanine dyes, Alexa Fluors, DyLight Fluors, Atto dyes, Janelia dyes and others). To a reasonable approximation, a given molecule will be destroyed after a constant exposure (intensity of emission × emission time × number of cycles) because, in a constant environment, each absorption-emission cycle has an equal probability of causing photobleaching.
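The "equal probability per cycle" approximation implies that the number of absorption–emission cycles a molecule survives is geometrically distributed, with mean about 1/p for a per-cycle bleaching probability p. A quick simulation (the value of p is illustrative, not a measured one):

```python
import random

random.seed(0)
p = 1e-3          # illustrative per-cycle photobleaching probability

def cycles_until_bleach():
    """Count absorption-emission cycles until the molecule bleaches."""
    n = 0
    while random.random() > p:
        n += 1
    return n

trials = 2000
mean_cycles = sum(cycles_until_bleach() for _ in range(trials)) / trials

# Mean survival is about 1/p cycles, consistent with destruction after a
# roughly constant total exposure: halving p doubles the photon budget.
assert 0.8 / p < mean_cycles < 1.2 / p
```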
Photobleaching is an important parameter to account for in real-time single-molecule fluorescence imaging in biophysics. At the light intensities used in single-molecule fluorescence imaging (0.1–1 kW/cm² in typical experimental setups), even the most robust fluorophores continue to emit for up to 10 seconds before photobleaching in a single step. For some dyes, lifetimes can be prolonged 10–100 fold using oxygen-scavenging systems (up to 1000 seconds with optimisation of imaging parameters and signal-to-noise). For example, a combination of protocatechuic acid (PCA) and protocatechuate 3,4-dioxygenase (PCD) is often used as an oxygen-scavenging system, which increases the fluorescence lifetime by more than a minute.
Depending on their specific chemistry, molecules can photobleach after absorbing just a few photons, while more robust molecules can undergo many absorption/emission cycles before destruction:
Green fluorescent protein: 10⁴–10⁵ photons; 0.1–1.0 second lifetime.
Typical organic dye: 10⁵–10⁶ photons; 1–10 second lifetime.
CdSe/ZnS quantum dot: 10⁸ photons; > 1,000 seconds lifetime.
This use of the term "lifetime" is not to be confused with the "lifetime" measured by fluorescence lifetime imaging.
See also
Ozone depletion
References
External links
Introduction to Optical Microscopy an article about photobleaching
Microscopy
Fluorescence
Cell imaging
Cell biology
Articles containing video clips | Photobleaching | [
"Chemistry",
"Biology"
] | 754 | [
"Luminescence",
"Fluorescence",
"Cell biology",
"Microscopy",
"Cell imaging"
] |
2,812,893 | https://en.wikipedia.org/wiki/Yilmaz%20theory%20of%20gravitation | The Yilmaz theory of gravitation is an attempt by Huseyin Yilmaz (1924–2013; Turkish: Hüseyin Yılmaz) and his coworkers to formulate a classical field theory of gravitation which is similar to general relativity in weak-field conditions, but in which event horizons cannot appear.
Yilmaz's work has been criticized on the grounds that:
his proposed field equation is ill-defined
event horizons can occur in weak field situations according to the general theory of relativity, in the case of a supermassive black hole
the theory is consistent only with either a completely empty universe or a negative energy vacuum
It is well known that attempts to quantize general relativity along the same lines which lead from Maxwell's classical field theory of electromagnetism to quantum electrodynamics fail, and that it has proven very difficult to construct a theory of quantum gravity which goes over to general relativity in an appropriate limit. However Yilmaz has claimed that his theory is "compatible with quantum mechanics". He suggests that it might be an alternative to superstring theory.
In his theory, Yilmaz wishes to retain the left hand side of the Einstein field equation (namely the Einstein tensor, which is well-defined for any Lorentzian manifold, independent of general relativity) but to modify the right hand side, the stress–energy tensor, by adding a kind of gravitational contribution, namely the scalar field. According to Yilmaz's critics, this additional term is not well-defined, and cannot be made well defined due to issues with covariance.
No astronomers have tested his ideas, although some have tested competitors of general relativity; see Category:Tests of general relativity.
References
In this paper, Charles Misner argues that Yilmaz's field equation is ill-defined.
In this preprint, Edward Fackerell criticizes several claims by Yilmaz concerning general relativity
See section 20.4 for nonlocal nature of gravitational field energy, and all of chapter 20 for relation between integration, Bianchi identities, and 'conservation laws' in curved spacetimes.
External links
One page in the website Relativity on the World Wide Web (archived link) lists some apparent misstatements by Yilmaz concerning the general theory of relativity, similar to those discussed by Fackerell.
Theories of gravity | Yilmaz theory of gravitation | [
"Physics"
] | 489 | [
"Theoretical physics",
"Theories of gravity"
] |
2,812,977 | https://en.wikipedia.org/wiki/Pair%20of%20pants%20%28mathematics%29 | In mathematics, a pair of pants is a surface which is homeomorphic to the three-holed sphere. The name comes from considering one of the removed disks as the waist and the two others as the cuffs of a pair of pants.
Pairs of pants are used as building blocks for compact surfaces in various theories. Two important applications are to hyperbolic geometry, where decompositions of closed surfaces into pairs of pants are used to construct the Fenchel-Nielsen coordinates on Teichmüller space, and in topological quantum field theory where they are the simplest non-trivial cobordisms between 1-dimensional manifolds.
Pants and pants decomposition
Pants as topological surfaces
A pair of pants is any surface that is homeomorphic to a sphere with three holes, which formally is the result of removing from the sphere three open disks with pairwise disjoint closures. Thus a pair of pants is a compact surface of genus zero with three boundary components.
The Euler characteristic of a pair of pants is equal to −1, and the only other surface with this property is the punctured torus (a torus minus an open disk).
Pants decompositions
The importance of the pairs of pants in the study of surfaces stems from the following property: define the complexity of a connected compact surface S of genus g with b boundary components to be ξ(S) = 3g − 3 + b, and for a non-connected surface take the sum over all components. Then the only surfaces with negative Euler characteristic and complexity zero are disjoint unions of pairs of pants. Furthermore, for any surface S and any simple closed curve γ on S which is not homotopic to a boundary component, the compact surface obtained by cutting S along γ has a complexity that is strictly less than ξ(S). In this sense, pairs of pants are the only "irreducible" surfaces among all surfaces of negative Euler characteristic.
By a recursion argument, this implies that for any surface there is a system of simple closed curves which cut the surface into pairs of pants. This is called a pants decomposition for the surface, and the curves are called the cuffs of the decomposition. This decomposition is not unique, but by quantifying the argument one sees that all pants decompositions of a given surface have the same number of curves, which is exactly the complexity. For connected surfaces a pants decomposition has exactly 2g − 2 + b pants.
A collection of simple closed curves on a surface is a pants decomposition if and only if they are disjoint, no two of them are homotopic and none is homotopic to a boundary component, and the collection is maximal for these properties.
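The counting above can be sketched numerically. This is an illustrative sketch, not from the article; it assumes the standard formulas 3g − 3 + b cuffs and 2g − 2 + b pants for a connected compact surface of genus g with b boundary components.

```python
# Counting cuffs and pants in a pants decomposition of a connected compact
# surface of genus g with b boundary components, using the standard formulas.

def complexity(genus: int, boundary: int) -> int:
    """Number of curves (cuffs) in any pants decomposition: 3g - 3 + b."""
    return 3 * genus - 3 + boundary

def pants_count(genus: int, boundary: int) -> int:
    """Number of pairs of pants in any pants decomposition: 2g - 2 + b."""
    return 2 * genus - 2 + boundary

# A pair of pants itself (genus 0, three boundary circles) has complexity 0
# and is a single pair of pants:
assert complexity(0, 3) == 0 and pants_count(0, 3) == 1
# A closed genus-2 surface is cut along 3 curves into 2 pairs of pants:
assert complexity(2, 0) == 3 and pants_count(2, 0) == 2
```

The two asserts check the two examples mentioned in the text: the pair of pants itself is the unique complexity-zero piece, and a closed genus-2 surface decomposes into two of them.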
The pants complex
A given surface has infinitely many distinct pants decompositions (we understand two decompositions to be distinct when they are not homotopic). One way to try to understand the relations between all these decompositions is the pants complex associated to the surface. This is a graph with vertex set the pants decompositions of the surface, and two vertices are joined if they are related by an elementary move, which is one of the two following operations:
take a curve in the decomposition in a one-holed torus and replace it by a curve in the torus intersecting it only once,
take a curve in the decomposition in a four-holed sphere and replace it by a curve in the sphere intersecting it only twice.
The pants complex is connected (meaning any two pants decompositions are related by a sequence of elementary moves) and has infinite diameter (meaning that there is no upper bound on the number of moves needed to get from one decomposition to the other). In the particular case when the surface has complexity 1, the pants complex is isomorphic to the Farey graph.
The action of the mapping class group on the pants complex is of interest for studying this group. For example, Allen Hatcher and William Thurston have used it to give a proof of the fact that it is finitely presented.
Pants in hyperbolic geometry
Moduli space of hyperbolic pants
The interesting hyperbolic structures on a pair of pants are easily classified.
For every triple (l1, l2, l3) of positive real numbers there is a hyperbolic surface which is homeomorphic to a pair of pants and whose boundary components are simple closed geodesics of lengths equal to l1, l2 and l3. Such a surface is uniquely determined by the li up to isometry.
By taking the length of a cuff to be equal to zero, one obtains a complete metric on the pair of pants minus the cuff, which is replaced by a cusp. This structure is of finite volume.
Pants and hexagons
The geometric proof of the classification in the previous paragraph is important for understanding the structure of hyperbolic pants. It proceeds as follows: Given a hyperbolic pair of pants with totally geodesic boundary, there exist three unique geodesic arcs that join the cuffs pairwise and that are perpendicular to them at their endpoints. These arcs are called the seams of the pants.
Cutting the pants along the seams, one gets two right-angled hyperbolic hexagons which have three alternate sides of matching lengths. The following lemma can be proven with elementary hyperbolic geometry.
If two right-angled hyperbolic hexagons each have three alternate sides of matching lengths, then they are isometric to each other.
So we see that the pair of pants is the double of a right-angled hexagon along alternate sides. Since the isometry class of the hexagon is also uniquely determined by the lengths of the remaining three alternate sides, the classification of pants follows from that of hexagons.
When a length of one cuff is zero one replaces the corresponding side in the right-angled hexagon by an ideal vertex.
Fenchel-Nielsen coordinates
A point in the Teichmüller space of a surface S is represented by a pair (X, f), where X is a complete hyperbolic surface and f : S → X a diffeomorphism.
If the surface has a pants decomposition by curves γ1, …, γξ then one can parametrise Teichmüller pairs by the Fenchel-Nielsen coordinates (li, θi), which are defined as follows. The cuff lengths li are simply the lengths of the closed geodesics homotopic to the γi.
The twist parameters θi are harder to define. They correspond to how much one turns when gluing two pairs of pants along γi: this defines them modulo 2π. One can refine the definition (using either analytic continuation or geometric techniques) to obtain twist parameters valued in the real numbers (roughly, the point is that when one makes a full turn one changes the point in Teichmüller space by precomposing with a Dehn twist around γi).
The pants complex and the Weil-Petersson metric
One can define a map from the pants complex to Teichmüller space, which takes a pants decomposition to an arbitrarily chosen point in the region where the cuff part of the Fenchel-Nielsen coordinates are bounded by a large enough constant. It is a quasi-isometry when Teichmüller space is endowed with the Weil-Petersson metric, which has proven useful in the study of this metric.
Pairs of pants and Schottky groups
Hyperbolic structures on a pair of pants correspond to Schottky groups on two generators (more precisely, if the quotient of the hyperbolic plane by a Schottky group on two generators is homeomorphic to the interior of a pair of pants, then its convex core is a hyperbolic pair of pants as described above, and all are obtained as such).
2-dimensional cobordisms
A cobordism between two n-dimensional closed manifolds is a compact (n+1)-dimensional manifold whose boundary is the disjoint union of the two manifolds. The category of cobordisms of dimension n+1 is the category with objects the closed manifolds of dimension n, and morphisms the cobordisms between them (note that the definition of a cobordism includes the identification of the boundary to the manifolds). Note that one of the manifolds can be empty; in particular a closed manifold of dimension n+1 is viewed as an endomorphism of the empty set. One can also compose two cobordisms when the end of the first is equal to the start of the second. An n-dimensional topological quantum field theory (TQFT) is a monoidal functor from the category of n-cobordisms to the category of complex vector spaces (where multiplication is given by the tensor product).
In particular, cobordisms between 1-dimensional manifolds (which are unions of circles) are compact surfaces whose boundary has been separated into two disjoint unions of circles. Two-dimensional TQFTs correspond to Frobenius algebras, where the circle (the only connected closed 1-manifold) maps to the underlying vector space of the algebra, while the pair of pants gives a product or coproduct, depending on how the boundary components are grouped – which is commutative or cocommutative. Further, the map associated with a disk gives a counit (trace) or unit (scalars), depending on grouping of boundary, which completes the correspondence.
Notes
References
Topology
Hyperbolic geometry
Geometry processing | Pair of pants (mathematics) | [
"Physics",
"Mathematics"
] | 1,846 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
2,814,021 | https://en.wikipedia.org/wiki/Mid-range | In statistics, the mid-range or mid-extreme is a measure of central tendency of a sample defined as the arithmetic mean of the maximum and minimum values of the data set:
The mid-range is closely related to the range, a measure of statistical dispersion defined as the difference between the maximum and minimum values.
The two measures are complementary in the sense that if one knows the mid-range and the range, one can find the sample maximum and minimum values.
The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, for many distributions it is one of the least efficient and least robust statistics. However, it finds some use in special cases: it is the maximally efficient estimator for the center of a uniform distribution, trimmed mid-ranges address robustness, and as an L-estimator, it is simple to understand and compute.
Robustness
The midrange is highly sensitive to outliers and ignores all but two data points. It is therefore a very non-robust statistic, having a breakdown point of 0, meaning that a single observation can change it arbitrarily. Further, it is highly influenced by outliers: increasing the sample maximum or decreasing the sample minimum by x changes the mid-range by x/2, while it changes the sample mean, which also has breakdown point of 0, by only x/n. It is thus of little use in practical statistics, unless outliers are already handled.
A trimmed midrange is known as a midsummary – the n% trimmed midrange is the average of the n% and (100−n)% percentiles, and is more robust, having a breakdown point of n%. In the middle of these is the midhinge, which is the 25% midsummary. The median can be interpreted as the fully trimmed (50%) mid-range; this accords with the convention that the median of an even number of points is the mean of the two middle points.
These trimmed midranges are also of interest as descriptive statistics or as L-estimators of central location or skewness: differences of midsummaries, such as midhinge minus the median, give measures of skewness at different points in the tail.
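As a concrete sketch of these definitions (a minimal pure-Python illustration; the linear-interpolation percentile rule is one common convention, and the data are invented):

```python
# Mid-range and its trimmed variants (midsummaries): the n% trimmed midrange
# is the average of the n% and (100-n)% percentiles.

def percentile(data, p):
    """p-th percentile (0-100), linear interpolation between order statistics."""
    xs = sorted(data)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

def midsummary(data, trim_pct):
    """trim_pct% trimmed midrange."""
    return (percentile(data, trim_pct) + percentile(data, 100 - trim_pct)) / 2

data = [1, 2, 3, 4, 100]          # note the outlier
midrange = midsummary(data, 0)    # (min + max) / 2
midhinge = midsummary(data, 25)   # 25% midsummary
median   = midsummary(data, 50)   # fully trimmed (50%) mid-range

print(midrange, midhinge, median)  # 50.5 3.0 3.0
```

The single outlier drags the raw midrange to 50.5, while the midhinge and median stay at 3.0, illustrating the breakdown behaviour described above.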
Efficiency
Despite its drawbacks, in some cases it is useful: the midrange is a highly efficient estimator of μ, given a small sample of a sufficiently platykurtic distribution, but it is inefficient for mesokurtic distributions, such as the normal.
For example, for a continuous uniform distribution with unknown maximum and minimum, the mid-range is the uniformly minimum-variance unbiased (UMVU) estimator for the mean. The sample maximum and sample minimum, together with sample size, are a sufficient statistic for the population maximum and minimum – the distribution of other samples, conditional on a given maximum and minimum, is just the uniform distribution between the maximum and minimum and thus adds no information. See German tank problem for further discussion. Thus the mid-range, which is an unbiased and sufficient estimator of the population mean, is in fact the UMVU: using the sample mean just adds noise based on the uninformative distribution of points within this range.
Conversely, for the normal distribution, the sample mean is the UMVU estimator of the mean. Thus for platykurtic distributions, which can often be thought of as between a uniform distribution and a normal distribution, the informativeness of the middle sample points versus the extrema values varies from "equal" for normal to "uninformative" for uniform, and for different distributions, one or the other (or some combination thereof) may be most efficient. A robust analog is the trimean, which averages the midhinge (25% trimmed mid-range) and median.
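The uniform-distribution efficiency claim can be illustrated with a quick simulation (a hedged sketch, not from the article; the sample size and trial count are arbitrary choices):

```python
# Monte Carlo illustration: for uniform samples the mid-range estimates the
# centre with smaller mean squared error than the sample mean, consistent
# with its UMVU property for the uniform distribution.
import random

random.seed(0)
n, trials = 20, 5000
err_midrange = err_mean = 0.0
for _ in range(trials):
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]  # true mean is 0
    mid = (max(xs) + min(xs)) / 2
    avg = sum(xs) / n
    err_midrange += mid * mid
    err_mean += avg * avg

# mean squared errors: the mid-range's is several times smaller here
print(err_midrange / trials, err_mean / trials)
```

The same experiment run on normal samples would reverse the ranking, matching the text's contrast between platykurtic and mesokurtic cases.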
Small samples
For small sample sizes (n from 4 to 20) drawn from a sufficiently platykurtic distribution (negative excess kurtosis, defined as γ2 = (μ4/(μ2)²) − 3), the mid-range is an efficient estimator of the mean μ. The following table summarizes empirical data comparing three estimators of the mean for distributions of varied kurtosis; the modified mean is the truncated mean, where the maximum and minimum are eliminated.
For n = 1 or 2, the midrange and the mean are equal (and coincide with the median), and are most efficient for all distributions. For n = 3, the modified mean is the median, and instead the mean is the most efficient measure of central tendency for values of γ2 from 2.0 to 6.0 as well as from −0.8 to 2.0.
Sampling properties
For a sample of size n from the standard normal distribution, the mid-range M is unbiased, and has a variance given by:
For a sample of size n from the standard Laplace distribution, the mid-range M is unbiased, and has a variance given by:
and, in particular, the variance does not decrease to zero as the sample size grows.
For a sample of size n from a zero-centred uniform distribution, the mid-range M is unbiased, and nM has an asymptotic distribution which is a Laplace distribution.
Deviation
While the mean of a set of values minimizes the sum of squares of deviations and the median minimizes the average absolute deviation, the midrange minimizes the maximum deviation (defined as max_i |x_i − m|): it is a solution to a variational problem.
See also
Range (statistics)
Midhinge
References
Means
Summary statistics | Mid-range | [
"Physics",
"Mathematics"
] | 1,196 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
2,814,326 | https://en.wikipedia.org/wiki/Effective%20atomic%20number%20%28compounds%20and%20mixtures%29 | The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, . When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining . An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials.
In many textbooks and scientific publications, the following - simplistic and often dubious - sort of method is employed. One such proposed formula for the effective atomic number, Z_eff, is as follows:

Z_eff = ( Σ_i f_i × (Z_i)^2.94 )^(1/2.94)

where
f_i is the fraction of the total number of electrons associated with each element, and
Z_i is the atomic number of each element.
An example is that of water (H2O), made up of two hydrogen atoms (Z=1) and one oxygen atom (Z=8): the total number of electrons is 1+1+8 = 10, so the fraction of electrons for the two hydrogens is (2/10) and for the one oxygen is (8/10). So the Z_eff for water is:

Z_eff = (0.2 × 1^2.94 + 0.8 × 8^2.94)^(1/2.94) ≈ 7.42
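The water calculation can be reproduced with a short script (a sketch of the power-law method discussed here; the exponent 2.94 is the commonly quoted empirical value):

```python
# Power-law effective atomic number: Z_eff = (sum f_i * Z_i**p) ** (1/p).

def z_eff_power_law(fractions_and_z, exponent=2.94):
    """fractions_and_z: iterable of (electron fraction, atomic number) pairs."""
    s = sum(f * z ** exponent for f, z in fractions_and_z)
    return s ** (1.0 / exponent)

# Water: 2 electrons from hydrogen (Z=1), 8 from oxygen (Z=8), 10 in total.
water = [(2 / 10, 1), (8 / 10, 8)]
print(round(z_eff_power_law(water), 2))  # 7.42
```

Changing the exponent shifts the result noticeably, which is one concrete way to see the energy-dependence criticism raised below.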
The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions depend on the atomic number. The exact formula, as well as the exponent 2.94, can depend on the energy range being used. As such, readers are reminded that this approach is of very limited applicability and may be quite misleading.
This 'power law' method, while commonly employed, is of questionable appropriateness in contemporary scientific applications within the context of radiation interactions in heterogeneous media. This approach dates back to the late 1930s when photon sources were restricted to low-energy x-ray units. The exponent of 2.94 relates to an empirical formula for the photoelectric process which incorporates a 'constant' of 2.64 × 10−26, which is in fact not a constant but rather a function of the photon energy. A linear relationship with Z^2.94 has been shown for a limited number of compounds for low-energy x-rays, but within the same publication it is shown that many compounds do not lie on the same trendline. As such, for polyenergetic photon sources (in particular, for applications such as radiotherapy), the effective atomic number varies significantly with energy. It is possible to obtain a much more accurate single-valued Z_eff by weighting against the spectrum of the source. The effective atomic number for electron interactions may be calculated with a similar approach. The cross-section based approach for determining Z_eff is obviously much more complicated than the simple power-law approach described above, and this is why freely-available software has been developed for such calculations.
References
Eisberg and Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles.
Atomic physics | Effective atomic number (compounds and mixtures) | [
"Physics",
"Chemistry"
] | 733 | [
"Quantum mechanics",
"Atomic physics",
"Atomic, molecular, and optical physics"
] |
2,814,347 | https://en.wikipedia.org/wiki/Programming%20complexity | Programming complexity (or software complexity) is a term that includes software properties that affect internal interactions. Several commentators distinguish between the terms "complex" and "complicated". Complicated implies being difficult to understand, but ultimately knowable. Complex, by contrast, describes the interactions between entities. As the number of entities increases, the number of interactions between them increases exponentially, making it impossible to know and understand them all. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, thus increasing the risk of introducing defects when changing the software. In more extreme cases, it can make modifying the software virtually impossible.
The idea of linking software complexity to software maintainability has been explored extensively by Professor Manny Lehman, who developed his Laws of Software Evolution. He and his co-author Les Belady explored numerous software metrics that could be used to measure the state of software, eventually concluding that the only practical solution is to use deterministic complexity models.
Types
The complexity of an existing program determines the complexity of changing the program. Problem complexity can be divided into two categories:
Accidental complexity relates to difficulties a programmer faces due to the software engineering tools. Selecting a better tool set or a higher-level programming language may reduce it. Accidental complexity often results from not using the domain to frame the form of the solution. Domain-driven design can help minimize accidental complexity.
Essential complexity is caused by the characteristics of the problem to be solved and cannot be reduced.
Measures
Several measures of software complexity have been proposed. Many of these, although yielding a good representation of complexity, do not lend themselves to easy measurement. Some of the more commonly used metrics are
McCabe's cyclomatic complexity metric
Halstead's software science metrics
Henry and Kafura introduced "Software Structure Metrics Based on Information Flow" in 1981, which measures complexity as a function of "fan-in" and "fan-out". They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to, and from procedures that call or are called by, the procedure in question. Henry and Kafura's complexity value is defined as "the procedure length multiplied by the square of fan-in multiplied by fan-out" (Length × (fan-in × fan-out)²).
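The Henry–Kafura formula is simple enough to sketch directly (the procedure data below are invented for illustration):

```python
# Henry-Kafura information-flow metric: length * (fan_in * fan_out) ** 2.

def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

# hypothetical procedures: (length in lines, fan-in, fan-out)
procedures = {
    "parse":  (40, 2, 3),
    "render": (25, 4, 1),
    "log":    (10, 8, 0),   # zero fan-out drives the score to zero
}
scores = {name: henry_kafura(*p) for name, p in procedures.items()}
print(scores)  # {'parse': 1440, 'render': 400, 'log': 0}
```

Note the metric's quirk visible in the last row: a procedure with zero fan-in or fan-out scores zero regardless of its length.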
Chidamber and Kemerer introduced "A Metrics Suite for Object-Oriented Design" in 1994, focusing on metrics for object-oriented code. They introduce six OO complexity metrics: (1) weighted methods per class; (2) coupling between object classes; (3) response for a class; (4) number of children; (5) depth of inheritance tree; and (6) lack of cohesion of methods.
Several other metrics can be used to measure programming complexity:
Branching complexity (Sneed Metric)
Data access complexity (Card Metric)
Data complexity (Chapin Metric)
Data flow complexity (Elshof Metric)
Decisional complexity (McClure Metric)
Path Complexity (Bang Metric)
Tesler's Law is an adage in human–computer interaction stating that every application has an inherent amount of complexity that cannot be removed or hidden.
Chidamber and Kemerer Metrics
Chidamber and Kemerer proposed a set of programming complexity metrics widely used in measurements and academic articles: weighted methods per class, coupling between object classes, response for a class, number of children, depth of inheritance tree, and lack of cohesion of methods, described below:
Weighted methods per class ("WMC")
WMC = c_1 + c_2 + … + c_n
where
n is the number of methods in the class
c_i is the complexity of method i
Coupling between object classes ("CBO")
CBO = the number of other classes to which this class is coupled (classes it uses or that use it)
Response for a class ("RFC")
RFC = |RS|, where RS = {M} ∪ (∪_i {R_i})
where
{R_i} is the set of methods called by method i
{M} is the set of methods in the class
Number of children ("NOC")
NOC = the sum of all classes that inherit this class or a descendant of it
Depth of inheritance tree ("DIT")
DIT = the maximum depth of the inheritance tree for this class
Lack of cohesion of methods ("LCOM")
Measures the intersection of the attributes used in common by the class methods:
LCOM = |P| − |Q| if |P| > |Q|, and LCOM = 0 otherwise
where
P = {(I_i, I_j) | I_i ∩ I_j = ∅}
and
Q = {(I_i, I_j) | I_i ∩ I_j ≠ ∅}
with I_i the set of attributes (instance variables) accessed (read from or written to) by the i-th method of the class
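Two of these metrics can be made concrete with a small sketch (the class data are invented; LCOM here follows the usual Chidamber–Kemerer counting of attribute-sharing versus non-sharing method pairs, clamped at zero):

```python
# WMC sums per-method complexities; LCOM compares method pairs that share
# no instance variables (P) against pairs that share at least one (Q):
# LCOM = max(|P| - |Q|, 0).
from itertools import combinations

def wmc(method_complexities):
    return sum(method_complexities)

def lcom(attr_sets):
    p = q = 0
    for a, b in combinations(attr_sets, 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# hypothetical class: three methods and the instance variables each touches
methods = [{"x", "y"}, {"y"}, {"z"}]
print(wmc([1, 3, 2]))   # 6
print(lcom(methods))    # pairs sharing nothing: 2, sharing something: 1 -> 1
```

A fully cohesive class (every pair of methods sharing some attribute) scores LCOM = 0, which is why low values are read as good cohesion.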
See also
Programming paradigm
Software crisis
References
Software metrics
Complex systems theory | Programming complexity | [
"Mathematics",
"Engineering"
] | 949 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
31,425,048 | https://en.wikipedia.org/wiki/Dynamic%20similarity%20%28Reynolds%20and%20Womersley%20numbers%29 | In fluid mechanics, dynamic similarity is the phenomenon that when there are two geometrically similar vessels (same shape, different sizes) with the same boundary conditions (e.g., no-slip, center-line velocity) and the same Reynolds and Womersley numbers, then the fluid flows will be identical. This can be seen from inspection of the underlying Navier-Stokes equation, with geometrically similar bodies, equal Reynolds and Womersley Numbers the functions of velocity (u’,v’,w’) and pressure (P’) for any variation of flow.
Derivation
The Reynolds number and the Womersley number are the only two physical parameters necessary to solve an incompressible fluid flow problem. The Reynolds number is given by:

Re = ρVL/μ.

The terms of the equation represent the ratio of the convective inertial force (ρV²/L) to the viscous shear force (μV/L²). When the Reynolds number is large, it shows that the flow is dominated by convective inertial effects; when the Reynolds number is small, it shows that the flow is dominated by shear effects.
The Womersley number is given by:

α = L(ωρ/μ)^(1/2),

which is simply the square root of the Stokes number; its square represents the ratio of the transient inertial force (ρωV) to the viscous force (μV/L²). When the Womersley number is large (around 10 or greater), it shows that the flow is dominated by oscillatory inertial forces and that the velocity profile is flat. When the Womersley parameter is low, viscous forces tend to dominate the flow, velocity profiles are parabolic in shape, and the center-line velocity oscillates in phase with the driving pressure gradient.
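As an illustration of the two dimensionless groups, a short computation from their definitions (the blood-like fluid properties and vessel dimensions below are illustrative values only, not from the article):

```python
# Re = rho*V*L/mu and alpha = L*sqrt(omega*rho/mu) for an illustrative
# pulsatile flow in a vessel of radius 1 cm.
import math

rho = 1060.0                # density, kg/m^3 (blood-like)
mu = 3.5e-3                 # dynamic viscosity, Pa*s
V = 0.3                     # characteristic velocity, m/s
L = 0.01                    # characteristic length (vessel radius), m
omega = 2 * math.pi * 1.2   # angular frequency, rad/s (~72 beats/min)

Re = rho * V * L / mu
alpha = L * math.sqrt(omega * rho / mu)
print(round(Re), round(alpha, 1))  # 909 15.1
```

With Re in the hundreds and α above 10, this illustrative flow would be inertia-dominated in both the convective and oscillatory senses, matching the qualitative regimes described above.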
Starting with the Navier–Stokes equation for Cartesian flow:

ρ(∂u/∂t + (u·∇)u) = −∇P + μ∇²u + ρg.

The terms of the equation represent, in order, the transient inertial force, the convective inertial force, the pressure force, the viscous force, and the body (gravitational) force. Ignoring gravitational forces and dividing the equation by the density (ρ) yields:

∂u/∂t + (u·∇)u = −(1/ρ)∇P + ν∇²u,
where ν = μ/ρ is the kinematic viscosity. Since both the Reynolds and Womersley numbers are dimensionless, Navier-Stokes must be represented as a dimensionless expression as well. Choosing V, ω, and L as a characteristic velocity, frequency, and length respectively yields dimensionless variables:

Dimensionless length term (same for y′ and z′): x′ = x/L,
Dimensionless velocity term (same for v′ and w′): u′ = u/V,
Dimensionless pressure term: P′ = P/(ρV²),
Dimensionless time term: t′ = tω.
Dividing the Navier-Stokes equation by ρV²/L (the convective inertial force term) gives:

(ωL/V) ∂u′/∂t′ + (u′·∇′)u′ = −∇′P′ + (1/Re) ∇′²u′,

where ωL/V = α²/Re. With the addition of the dimensionless continuity equation (seen below), the Reynolds and Womersley numbers are the only two physical parameters that appear in the two equations in any incompressible fluid flow problem:

∇′·u′ = 0.
Boundary layer thickness
The Reynolds and Womersley numbers are also used to calculate the thicknesses of the boundary layers that can form from the fluid flow's viscous effects. The Reynolds number is used to calculate the convective inertial boundary layer thickness that can form, and the Womersley number is used to calculate the transient inertial boundary thickness that can form. From the Womersley number it can be shown that the transient inertial force is represented by ρωV, and from the last term in the non-modified Navier-Stokes equation that the viscous force is represented by μV/δ₁² (subscript one indicates that the boundary layer thickness is that of the transient boundary layer). Setting the two forces equal to each other yields:

ρωV = μV/δ₁².

Solving for δ₁ yields:

δ₁ = (μ/(ρω))^(1/2) = (ν/ω)^(1/2).

Adding a characteristic length (L) to both sides gives the ratio:

δ₁/L = (ν/ω)^(1/2)/L = 1/α.

Therefore, it can be seen that when the flow has a high Womersley number the transient boundary layer thickness is very small, when compared to the characteristic length, which for circular vessels is the radius. As shown earlier the convective inertial force is represented by the term ρV²/L; equating that to the viscous force term μV/δ₂² yields:

ρV²/L = μV/δ₂².

Solving for the convective boundary layer thickness yields:

δ₂ = (νL/V)^(1/2).

Factoring in a characteristic length gives the ratio:

δ₂/L = (ν/(VL))^(1/2) = Re^(−1/2).
From the equation it is shown that for a flow with a large Reynolds Number there will be a correspondingly small convective boundary layer compared to the vessel’s characteristic length. By knowing the Reynolds and Womersley numbers for a given flow it is possible to calculate both the transient and the convective boundary layer thicknesses, and relate them to a flow in another system. The boundary layer thickness is also useful in knowing when the fluid can be treated as an ideal fluid. This is at a distance that is larger than both boundary layer thicknesses.
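The two scaling results above reduce to δ₁/L = 1/α and δ₂/L = 1/√Re, which can be checked numerically (the kinematic viscosity, velocity, length and frequency below are illustrative values only):

```python
# Transient and convective boundary-layer thicknesses and their ratios
# to the characteristic length, checked against 1/alpha and 1/sqrt(Re).
import math

nu = 3.3e-6                 # kinematic viscosity mu/rho, m^2/s (illustrative)
V, L = 0.3, 0.01            # characteristic velocity (m/s) and length (m)
omega = 2 * math.pi * 1.2   # angular frequency, rad/s

delta_transient = math.sqrt(nu / omega)    # delta_1 = sqrt(nu/omega)
delta_convective = math.sqrt(nu * L / V)   # delta_2 = sqrt(nu*L/V)

alpha = L * math.sqrt(omega / nu)
Re = V * L / nu
# the thickness ratios match 1/alpha and 1/sqrt(Re)
assert math.isclose(delta_transient / L, 1 / alpha)
assert math.isclose(delta_convective / L, 1 / math.sqrt(Re))
print(delta_transient, delta_convective)
```

Both thicknesses come out well below the 1 cm characteristic length here, consistent with the high-α, high-Re regime described in the text.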
See also
Similitude
Dimensionless Number
References
Dimensionless numbers of physics
Fluid dynamics
Biomechanics | Dynamic similarity (Reynolds and Womersley numbers) | [
"Physics",
"Chemistry",
"Engineering"
] | 912 | [
"Biomechanics",
"Chemical engineering",
"Mechanics",
"Piping",
"Fluid dynamics"
] |
31,430,040 | https://en.wikipedia.org/wiki/Model-dependent%20realism | Model-dependent realism is a view of scientific inquiry that focuses on the role of scientific models of phenomena. It claims reality should be interpreted based upon these models, and where several models overlap in describing a particular subject, multiple, equally valid, realities exist. It claims that it is meaningless to talk about the "true reality" of a model as we can never be absolutely certain of anything. The only meaningful thing is the usefulness of the model. The term "model-dependent realism" was coined by Stephen Hawking and Leonard Mlodinow in their 2010 book, The Grand Design.
Overview
Model-dependent realism asserts that all we can know about "reality" consists of networks of world pictures that explain observations by connecting them by rules to concepts defined in models. Will an ultimate theory of everything be found? Hawking and Mlodinow suggest it is unclear.
A world picture consists of the combination of a set of observations accompanied by a conceptual model and by rules connecting the model concepts to the observations. Different world pictures that describe particular data equally well all have equal claims to be valid. There is no requirement that a world picture be unique, or even that the data selected include all available observations. The universe of all observations at present is covered by a network of overlapping world pictures and, where overlap occurs, multiple, equally valid, world pictures exist. At present, science requires multiple models to encompass existing observations:
Where several models are found for the same phenomena, no single model is preferable to the others within that domain of overlap.
Model selection
While not rejecting the idea of "reality-as-it-is-in-itself", model-dependent realism suggests that we cannot know "reality-as-it-is-in-itself", but only an approximation of it provided by the intermediary of models. The view of models in model-dependent realism also is related to the instrumentalist approach to modern science, that a concept or theory should be evaluated by how effectively it explains and predicts phenomena, as opposed to how accurately it describes objective reality (a matter possibly impossible to establish). A model is a good model if it:
Is elegant
Contains few arbitrary or adjustable elements
Agrees with and explains all existing observations
Makes detailed predictions about future observations that can disprove or falsify the model if they are not borne out.
"If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model." Of course, an assessment like that is subjective, as are the other criteria. According to Hawking and Mlodinow, even very successful models in use today do not satisfy all these criteria, which are aspirational in nature.
See also
All models are wrong
Commensurability
Conceptualist realism
Constructivist epistemology
Fallibilism
Internal realism
Instrumentalism
Models of scientific inquiry
Ontological pluralism
Philosophical realism
Pragmatism
Scientific perspectivism
Scientific realism
Space mapping
References
Further reading
An on-line excerpt stating Kuhn's criteria is found here and they also are discussed by
External links
Edwards, Chris. Stephen Hawking's other controversial theory: Model Dependent Realism in The Grand Design (critical essay), Skeptic (Altadena, CA), March 22, 2011
Philosophy of physics
Metatheory of science
Metaphysical realism
Scientific modelling | Model-dependent realism | [
"Physics"
] | 673 | [
"Philosophy of physics",
"Applied and interdisciplinary physics"
] |
20,251,284 | https://en.wikipedia.org/wiki/Information%20gain%20ratio | In decision tree learning, information gain ratio is a ratio of information gain to the intrinsic information. It was proposed by Ross Quinlan, to reduce a bias towards multi-valued attributes by taking the number and size of branches into account when choosing
an attribute.
Information gain is also known as mutual information.
Information gain calculation
Information gain is the reduction in entropy produced from partitioning a set with attributes and finding the optimal candidate that produces the highest value:

IG(T, a) = H(T) - H(T|a)

where T is a random variable and H(T|a) is the entropy of T given the value of attribute a.
The information gain is equal to the total entropy for an attribute if for each of the attribute values a unique classification can be made for the result attribute. In this case the relative entropies subtracted from the total entropy are 0.
Split information calculation
The split information value for a test is defined as follows:

SI(T, a) = - Σ_i P(a_i) log2 P(a_i)

where a is a discrete attribute with possible values a_1, ..., a_n, and P(a_i) is the number of times that a_i occurs divided by the total count of events in the set T of events.
The split information value is a positive number that describes the potential worth of splitting a branch from a node. This in turn is the intrinsic value that the random variable possesses and will be used to remove the bias in the information gain ratio calculation.
Information gain ratio calculation
The information gain ratio is the ratio between the information gain and the split information value:

IGR(T, a) = IG(T, a) / SI(T, a)
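As an illustration, the three quantities above can be computed in a few lines of Python. The attribute names and records below are invented for the sketch, not taken from the article:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target):
    """IG(T, a) = H(T) - H(T|a), with records given as dicts."""
    n = len(rows)
    total = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

def split_info(rows, attr):
    """Intrinsic value: entropy of the attribute's own value distribution."""
    return entropy([r[attr] for r in rows])

def gain_ratio(rows, attr, target):
    si = split_info(rows, attr)
    return information_gain(rows, attr, target) / si if si else 0.0

# Eight invented records: "windy" days are mostly "no play".
rows = [{"windy": w, "play": p} for w, p in
        [("T", "No"), ("T", "No"), ("T", "No"), ("T", "Yes"),
         ("F", "Yes"), ("F", "Yes"), ("F", "Yes"), ("F", "Yes")]]
igr = gain_ratio(rows, "windy", "play")  # IG ≈ 0.549 divided by SI = 1.0
```

Here the split information is exactly 1 bit because "windy" splits the data evenly, so the gain ratio equals the information gain; a many-valued attribute would instead be divided by a larger split information.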
Example
Using weather data published by Fordham University, the table was created below:
Using the table above, one can find the entropy, information gain, split information, and information gain ratio for each variable (outlook, temperature, humidity, and wind). These calculations are shown in the tables below:
Using the above tables, one can deduce that Outlook has the highest information gain ratio. Next, one must find the statistics for the sub-groups of the Outlook variable (sunny, overcast, and rainy); for this example, only the sunny branch is built (as shown in the table below):
One can find the following statistics for the other variables (temperature, humidity, and wind) to see which have the greatest effect on the sunny element of the outlook variable:
Humidity was found to have the highest information gain ratio. One will repeat the same steps as before and find the statistics for the events of the Humidity variable (high and normal):
Since the play values are either all "No" or all "Yes", the information gain ratio value will be equal to 1. Also, now that one has reached the end of the variable chain with Wind being the last variable left, one can build an entire root-to-leaf branch of a decision tree.
Once finished with this leaf node, one would follow the same procedure for the rest of the elements that have yet to be split in the decision tree. This set of data was relatively small; with a larger set, the advantages of using the information gain ratio as the splitting factor of a decision tree become more apparent.
Advantages
Information gain ratio biases the decision tree against considering attributes with a large number of distinct values.
For example, suppose that we are building a decision tree for some data describing a business's customers. Information gain ratio is used to decide which of the attributes are the most relevant. These will be tested near the root of the tree. One of the input attributes might be the customer's telephone number. This attribute has a high information gain, because it uniquely identifies each customer. Due to its high number of distinct values, however, it will not be chosen to be tested near the root.
Disadvantages
Although information gain ratio solves the key problem of information gain, it creates another problem: attributes with a high number of distinct values may never be ranked above an attribute with a lower number of distinct values, even when they are more informative.
Difference from information gain
Information gain's shortcoming is that it applies no numerical penalty to attributes with many distinct values relative to those with fewer.
Example: Suppose that we are building a decision tree for some data describing a business's customers. Information gain is often used to decide which of the attributes are the most relevant, so they can be tested near the root of the tree. One of the input attributes might be the customer's credit card number. This attribute has a high information gain, because it uniquely identifies each customer, but we do not want to include it in the decision tree: deciding how to treat a customer based on their credit card number is unlikely to generalize to customers we haven't seen before.
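The credit-card example can be made numeric. The figures below are invented (the article gives none); they only show how dividing by the split information demotes a unique identifier:

```python
from math import log2

n = 100                      # hypothetical number of customers
class_entropy = 1.0          # H(T) for a balanced binary label, in bits

# A unique identifier (e.g. a card number) predicts the label perfectly,
# so its information gain equals the full class entropy...
ig_id = class_entropy
# ...but its split information is log2(n), the entropy of n unique values.
si_id = log2(n)
igr_id = ig_id / si_id       # ≈ 0.15 for n = 100

# An ordinary binary attribute with modest but genuine predictive power:
ig_binary, si_binary = 0.30, 1.0
igr_binary = ig_binary / si_binary
```

Ranked by information gain the identifier wins (1.0 vs 0.30); ranked by gain ratio it loses (≈0.15 vs 0.30), and the log2(n) penalty grows with the number of customers.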
Information gain ratio's strength is that it has a bias towards the attributes with the lower number of distinct values.
Below is a table describing the differences of information gain and information gain ratio when put in certain scenarios.
See also
Information gain in decision trees
Entropy (information theory)
References
Decision trees
Classification algorithms
Entropy and information
Statistical ratios | Information gain ratio | [
"Physics",
"Mathematics"
] | 975 | [
"Dynamical systems",
"Entropy",
"Physical quantities",
"Entropy and information"
] |
20,252,949 | https://en.wikipedia.org/wiki/Construction%20estimating%20software | Construction cost estimating software is computer software designed for contractors to estimate construction costs for a specific project. A cost estimator will typically use estimating software to estimate their bid price for a project, which will ultimately become part of a resulting construction contract. Some architects, engineers, construction managers, and others may also use cost estimating software to prepare cost estimates for purposes other than bidding such as budgeting and insurance claims.
Methods
Traditional methods
Construction contractors usually prepare bids or tenders to compete for a contract award for a project. To prepare the bid, first a cost estimate is prepared to determine the costs and then establish the price(s). This involves reviewing the project's plans and specifications to produce a take-off or quantity survey, which is a listing of all the materials and items of work required for a construction project by the construction documents. Together with prices for these components, the measured quantities are the basis for calculation of the direct cost. Indirect costs and profit are added to arrive at a total amount.
Spreadsheets
Cost estimators used columnar sheets of paper to organize the take-off and the estimate itself into rows of items and columns containing the description, quantity and the pricing components. Some of these were similar to accounting ledger paper. They became known as green sheets or spreadsheets.
With the advent of computers in business, estimators began using spreadsheet applications like VisiCalc, Lotus 1-2-3, and Microsoft Excel to duplicate the traditional tabular format, while automating redundant mathematical formulas.
Many construction cost estimators continue to rely primarily upon manual methods, hard copy documents, and/or electronic spreadsheets such as Microsoft Excel. While spreadsheets are relatively easy to master and provide a means to create and report a construction cost estimate and/or cost models, their benefit comes largely from their ability to partially relieve estimators of mundane calculations. Accuracy, however, is not necessarily improved and productivity is not maximized. For example, data entry remains tedious and prone to error, formula errors are common, and collaboration and information sharing are limited.
Commercial estimating software
As cost estimators came to rely heavily on spreadsheets, and the formulas within the spreadsheets became more complex, spreadsheet errors became more frequent. These were typically formula errors and cell-reference errors which would often lead to cost overruns. As a result, commercial cost estimating software applications were originally created to overcome these errors by using hard-coded formulas and data structures. Other benefits include the use of reference to cost databases (aka "cost books") and other data, predictable and professional looking reports, speed, accuracy, and overall process standardization.
As cost estimating programs became more and more popular over the years, more advanced features, such as saving data for reuse, mass project-wide changes, and trade-specific calculations, have become available. For example, programs that are designed for building construction, include libraries and program features for traditional builders. In sharp contrast, programs that are designed for civil construction, include libraries and program features for roadway, utility, and bridge builders.
Sophisticated cost estimating and efficient project delivery software systems are also available to integrate various construction delivery methods, such as Integrated Project Delivery, Job Order Contracting, and others (IDIQ, JOC, SABER...), simultaneously and securely. These systems enable cost estimators and project managers to work collaboratively with multiple projects, multiple estimates, and multiple contracts. A short list of additional capabilities includes the ability to work with multiple cost books/guides/UPBs, track project status, automatically compare estimates, easily copy/paste, clone, and reuse estimates, and integrated visual estimating and quantity take-off (QTO) tools. Owners, contractors, architects, and engineers are moving to advanced cost estimating and management systems, and many oversight groups are beginning to require their use. The level of collaboration, transparency, and information re-use enabled by cost estimating and efficient project delivery software can drive 15-25% reductions in procurement cycles and six to ten times faster estimating, reduce overall project times, and significantly reduce change orders and contract-related legal disputes.
Typical features
Three functions prove to be the most critical when buying cost estimating software:
Takeoff software – this provides for measurement from paper or electronic plans.
Built-in cost databases – this provides reference cost data which may be your own or may come from a commercial source, such as RS Means
Estimating worksheets – these are the spreadsheets where the real work takes place, supported by calculations and other features
Other typical features include:
Item or Activity List: All estimating software applications will include a main project window that outlines the various items or activities that will be required to complete the specified project. More advanced programs are capable of breaking an item up into sub-tasks, or sublevels. An outline view of all the top-level and sublevel items provides a quick and easy way to view and navigate through the project.
Resource Costs: Resources consist of labor, equipment, materials, subcontractors, trucking, and any other cost detail items. Labor and equipment costs are internal crew costs, whereas all other resource costs are received from vendors, such as material suppliers, subcontractors, and trucking companies. Labor costs are usually calculated from wages, benefits, burden, and workers' compensation. Equipment costs are calculated from purchase price, taxes, fuel consumption, and other operating expenses.
Item or Activity Detail: The detail to each item includes all the resources required to complete each activity, as well as their associated costs. Production rates will automatically determine required crew costs.
Calculations: Most estimating programs have built-in calculations ranging from simple length, area, and volume calculations to complex industry-specific calculations, such as electrical calculations, utility trench calculations, and earthwork cut and fill calculations.
Markups: Every program allows for cost mark-ups ranging from flat overall mark-ups to resource-specific mark-ups, mark-ups for general administrative costs, and bonding costs.
Detailed Overhead: Indirect costs, such as permits, fees, and any other overall project costs can be spread to billable project items.
Closeout Window: Many estimating programs include a screen for manually adjusting bid prices from their calculated values.
Reporting: Project reports typically include proposals, detail reports, cost breakdown reports, and various charts and graphs.
Exporting: Most software programs can export project data to other applications, such as spreadsheets, accounting software, and project management software.
Job History: Storing past projects is a standard feature in most estimating programs.
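The item/resource/markup structure described above can be sketched as a tiny data model. All names, rates, and quantities here are invented for illustration, not drawn from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    unit_cost: float   # cost per unit of the parent item's quantity

@dataclass
class Item:
    description: str
    quantity: float
    unit: str
    resources: list = field(default_factory=list)

    def direct_cost(self):
        # Each resource contributes its unit cost times the item quantity.
        return self.quantity * sum(r.unit_cost for r in self.resources)

def estimate_total(items, overhead=0.0, markup_pct=0.10):
    """Sum direct costs, add spread indirect costs, apply a flat markup."""
    direct = sum(i.direct_cost() for i in items)
    return (direct + overhead) * (1 + markup_pct)

items = [
    Item("Excavate trench", 120, "m3",
         [Resource("labor", 18.0), Resource("excavator", 25.0)]),
    Item("Concrete footing", 35, "m3",
         [Resource("labor", 60.0), Resource("ready-mix", 110.0)]),
]
total = estimate_total(items, overhead=2500.0, markup_pct=0.10)
```

Commercial packages layer takeoff quantities, cost databases, closeout adjustments, and reporting on top of essentially this calculation.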
References
Cost engineering
Cost Estimating
Cost analysis software | Construction estimating software | [
"Engineering"
] | 1,370 | [
"Cost engineering",
"Construction",
"Construction software"
] |
20,256,470 | https://en.wikipedia.org/wiki/Salcomine | Salcomine is a coordination complex derived from the salen ligand and cobalt. The complex, which is planar, and a variety of its derivatives are carriers for O2 as well as oxidation catalysts.
Preparation and structure
Salcomine is commercially available. It may be synthesized from cobalt(II) acetate and salenH2.
Salcomine crystallizes as a dimer. In this form, the cobalt centers achieve five-coordination via bridging phenolate ligands. A monomeric form crystallizes with chloroform in the lattice. It features planar Co centers. Salcomine is both a Lewis acid and a reductant. Several solvated derivatives bind O2 to give derivatives of the type (μ-O2)[Co(salen)py]2 and [Co(salen)py(O2)].
Applications
The 1938 report that this compound reversibly bound O2 led to intensive research on this and related complexes for the storage or transport of oxygen. Solvated derivatives of salcomine, e.g. the chloroform or the DMF adducts, bind 0.5 equivalents of O2:
2 Co(salen) + O2 → [Co(salen)]2O2
Salcomine catalyzes the oxidation of 2,6-disubstituted phenols by dioxygen.
References
Metal salen complexes
Cobalt compounds | Salcomine | [
"Chemistry"
] | 307 | [
"Coordination chemistry",
"Metal salen complexes"
] |
20,260,077 | https://en.wikipedia.org/wiki/Wind%20wave%20model | In fluid dynamics, wind wave modeling describes the effort to depict the sea state and predict the evolution of the energy of wind waves using numerical techniques. These simulations consider atmospheric wind forcing, nonlinear wave interactions, and frictional dissipation, and they output statistics describing wave heights, periods, and propagation directions for regional seas or global oceans. Such wave hindcasts and wave forecasts are extremely important for commercial interests on the high seas. For example, the shipping industry requires guidance for operational planning and tactical seakeeping purposes.
For the specific case of predicting wind wave statistics on the ocean, the term ocean surface wave model is used.
Other applications, in particular coastal engineering, have led to the developments of wind wave models specifically designed for coastal applications.
Historical overview
Early forecasts of the sea state were created manually based upon empirical relationships between the present state of the sea, the expected wind conditions, the fetch/duration, and the direction of the wave propagation. Alternatively, the swell part of the state was forecast as early as 1920 using remote observations.
During the 1950s and 1960s, much of the theoretical groundwork necessary for numerical descriptions of wave evolution was laid. For forecasting purposes, it was realized that the random nature of the sea state was best described by a spectral decomposition in which the energy of the waves was attributed to as many wave trains as necessary, each with a specific direction and period. This approach allowed combined forecasts of wind seas and swells to be made. The first numerical model based on the spectral decomposition of the sea state was operated in 1956 by the French Weather Service, and focused on the North Atlantic. The 1970s saw the first operational, hemispheric wave model: the spectral wave ocean model (SWOM) at the Fleet Numerical Oceanography Center.
First generation wave models did not consider nonlinear wave interactions. Second generation models, available by the early 1980s, parameterized these interactions. They included the “coupled hybrid” and “coupled discrete” formulations. Third generation models explicitly represent all the physics relevant for the development of the sea state in two dimensions. The wave modeling project (WAM), an international effort, led to the refinement of modern wave modeling techniques during the decade 1984-1994.
Improvements included two-way coupling between wind and waves, assimilation of satellite wave data, and medium-range operational forecasting.
Wind wave models are used in the context of a forecasting or hindcasting system. Differences in model results arise (with decreasing order of importance) from: differences in wind and sea ice forcing, differences in parameterizations of physical processes, the use of data assimilation and associated methods, and the numerical techniques used to solve the wave energy evolution equation.
In the aftermath of World War II, the study of wave growth garnered significant attention. The global nature of the war, encompassing battles in the Pacific, Atlantic, and Mediterranean seas, necessitated the execution of landing operations on enemy-held coasts. Safe landing was paramount, given that choppy waters posed the danger of capsizing landing craft. Consequently, the precise forecasting of weather and wave conditions became essential, prompting the recruitment of meteorologists and oceanographers by the warring nations.
During this period, both Japan and the United States embarked on wave prediction research. In the U.S., comprehensive studies were carried out at the Scripps Institution of Oceanography affiliated with the University of California. Under the guidance of Harald Sverdrup, Walter Munk devised an avant-garde wave calculation methodology for the United States Navy and later refined this approach for the Office of Naval Research.
This pioneering effort led to the creation of the significant wave method, which underwent subsequent refinements and data integrations. The method, in due course, came to be popularly referred to as the SMB method, an acronym derived from its founders Sverdrup, Munk, and Charles L. Bretschneider.
Between 1950 and 1980, various formulae were proposed. Given that two-dimensional field models had not been formulated during that time, studies were initiated in the Netherlands by Rijkswaterstaat and the Technical Advisory Committee for Flood Defences (TAW) to discern the most appropriate formula to compute wave height at the base of a dike. This work concluded that the 1973 Bretschneider formula was the most suitable. However, subsequent studies by Young and Verhagen in 1997 suggested that adjusting certain coefficients enhanced the formula's efficacy in shallow water regions.
General strategy
Input
A wave model requires as initial conditions information describing the state of the sea. An analysis of the sea or ocean can be created through data assimilation, where observations such as buoy or satellite altimeter measurements are combined with a background guess from a previous forecast or climatology to create the best estimate of the ongoing conditions. In practice, many forecasting systems rely only on the previous forecast, without any assimilation of observations.
A more critical input is the "forcing" by wind fields: a time-varying map of wind speed and directions. The most common sources of errors in wave model results are the errors in the wind field. Ocean currents can also be important, in particular in western boundary currents such as the Gulf Stream, Kuroshio or Agulhas current, or in coastal areas where tidal currents are strong. Waves are also affected by sea ice and icebergs, and all operational global wave models take at least the sea ice into account.
Representation
The sea state is described as a spectrum; the sea surface can be decomposed into waves of varying frequencies using the principle of superposition. The waves are also separated by their direction of propagation. The model domain size can range from regional to the global ocean. Smaller domains can be nested within a global domain to provide higher resolution in a region of interest. The sea state evolves according to physical equations – based on a spectral representation of the conservation of wave action – which include: wave propagation / advection, refraction (by bathymetry and currents), shoaling, and a source function which allows for wave energy to be augmented or diminished. The source function has at least three terms: wind forcing, nonlinear transfer, and dissipation by whitecapping. Wind data are typically provided from a separate atmospheric model from an operational weather forecasting center.
For intermediate water depths the effect of bottom friction should also be added. At ocean scales, the dissipation of swells - without breaking - is a very important term.
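The spectral decomposition described above can be illustrated directly: a sea-surface record is a superposition of sinusoidal wave trains, and the record's variance is the sum of the component energies. The four-component "spectrum" below is invented for the sketch:

```python
import math, random

random.seed(0)

# Hypothetical discrete spectrum: (amplitude in m, frequency in Hz) pairs.
components = [(1.2, 0.08), (0.8, 0.10), (0.5, 0.13), (0.3, 0.20)]
phases = [random.uniform(0, 2 * math.pi) for _ in components]

def elevation(t):
    """Sea-surface elevation at time t as a sum of wave trains."""
    return sum(a * math.cos(2 * math.pi * f * t + p)
               for (a, f), p in zip(components, phases))

# Under superposition, the variance of a long record approaches the sum
# of the component energies a_i^2 / 2, which is how spectra add energy.
expected_variance = sum(a * a / 2 for a, _ in components)
```

Real models evolve exactly such a set of components (also separated by direction) under the source terms for wind input, nonlinear transfer, and dissipation.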
Output
The output of a wind wave model is a description of the wave spectra, with amplitudes associated with each frequency and propagation direction. Results are typically summarized by the significant wave height, which is the average height of the one-third largest waves, and the period and propagation direction of the dominant wave.
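The significant wave height defined here can be computed directly from a record of individual wave heights; a minimal sketch with invented sample data:

```python
def significant_wave_height(heights):
    """Mean of the highest one-third of the individual wave heights (H1/3)."""
    ordered = sorted(heights, reverse=True)
    top_third = ordered[:max(1, len(ordered) // 3)]
    return sum(top_third) / len(top_third)

# Nine made-up individual wave heights in metres:
heights = [0.5, 0.7, 0.9, 1.0, 1.1, 1.3, 1.4, 1.6, 2.1]
hs = significant_wave_height(heights)  # mean of the 3 largest waves
```

In spectral models the same statistic is obtained from the wave spectrum rather than from individual waves, but the definition above is the classical one.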
Coupled models
Wind waves also act to modify atmospheric properties through frictional drag of near-surface winds and heat fluxes. Two-way coupled models allow the wave activity to feed back upon the atmosphere. The European Centre for Medium-Range Weather Forecasts (ECMWF) coupled atmosphere-wave forecast system described below facilitates this through exchange of the Charnock parameter which controls the sea surface roughness. This allows the atmosphere to respond to changes in the surface roughness as the wind sea builds up or decays.
Examples
WAVEWATCH
The operational wave forecasting systems at NOAA are based on the WAVEWATCH III model. This system has a global domain of approximately 50 km resolution, with nested regional domains for the northern hemisphere oceanic basins at approximately 18 km and approximately 7 km resolution. Physics includes wave field refraction, nonlinear resonant interactions, sub-grid representations of unresolved islands, and dynamically updated ice coverage. Wind data is provided from the GDAS data assimilation system for the GFS weather model. Up to 2008, the model was limited to regions outside the surf zone where the waves are not strongly impacted by shallow depths.
The model can incorporate the effects of currents on waves from its early design by Hendrik Tolman in the 1990s, and is now extended for near shore applications.
WAM
The wave model WAM was the first so-called third generation prognostic wave model where the two-dimensional wave spectrum was allowed to evolve freely (up to a cut-off frequency) with no constraints on the spectral shape. The model underwent a series of software updates from its inception in the late 1980s. The last official release is Cycle 4.5, maintained by the German Helmholtz Zentrum, Geesthacht.
ECMWF has incorporated WAM into its deterministic and ensemble forecasting system, known as the Integrated Forecast System (IFS). The model currently comprises 36 frequency bins and 36 propagation directions at an average spatial resolution of 25 km. The model has been coupled to the atmospheric component of IFS since 1998.
Other models
Wind wave forecasts are issued regionally by Environment Canada.
Regional wave predictions are also produced by universities, such as Texas A&M University’s use of the SWAN model (developed by Delft University of Technology) to forecast waves in the Gulf of Mexico.
Another model, CCHE2D-COAST is a processes-based integrated model which is capable of simulating coastal processes in different coasts with complex shorelines such as irregular wave deformation from offshore to onshore, nearshore currents induced by radiation stresses, wave set-up, wave set-down, sediment transport, and seabed morphological changes.
Other wind wave models include the U.S. Navy Standard Surf Model (NSSM).
The formulae of Bretschneider, Wilson, and Young & Verhagen
For determining wave growth in deep waters subjected to prolonged fetch, the basic formula set uses the dimensionless wave height and wave period:

H̃ = g·Hs/U²  and  T̃ = g·Ts/U
Where:
g = gravitational acceleration (m/s²)
Hs = significant wave height (m)
Ts = significant wave period (s)
U = wind speed (m/s)
The constants in these formulas are deduced from empirical data. Factoring in water depth, wind fetch, and storm duration complicates the equations considerably. However, the application of dimensionless values facilitates the identification of patterns for all these variables. The dimensionless parameters employed are:
d̃ = g·d/U²,  F̃ = g·F/U²,  t̃ = g·t/U

Where:
d = water depth (m)
F = wind fetch (m)
t = storm duration (s)
When plotted against the dimensionless wind fetch, both dimensionless wave height and wave period tend to align linearly. However, this trend becomes notably more flattened for more extended dimensionless wind fetches. Various researchers have endeavoured to formulate equations capturing this observed behaviour.
Common Formulas for Deep Water
Bretschneider (1952, 1977):

H̃ = 0.283·tanh(0.0125·F̃^0.42)
T̃ = 7.54·tanh(0.077·F̃^0.25)
Wilson (1965):
In the Netherlands, a formula devised by Groen & Dorrestein (1976) is also in common use:
for
for
for
During periods when programmable computers weren't commonly utilised, these formulas were cumbersome to use. Consequently, for practical applications, nomograms were developed which did away with dimensionless units, instead presenting wave heights in metres, storm duration in hours, and the wind fetch in km.
Integrating the water depth into the same chart was problematic, as it introduced too many input parameters. Therefore, while nomograms were in primary use, separate nomograms were crafted for distinct depths. The use of computers has since reduced reliance on nomograms.
For deep water, the distinctions between the various formulas are subtle. However, for shallow water, the formula modified by Young & Verhagen proves more suitable. It's defined as:
and
and
and
Research by Bart demonstrated that, under Dutch conditions (for example, in the IJsselmeer), this formula is reliable.
Example: Lake Garda
Lake Garda in Italy is a deep, elongated lake, measuring about 350 m in depth and spanning 45 km in length. With a wind speed of 25 m/s from the SSW, the Bretschneider and Wilson formulas suggest an Hs of 3.5 m and a period of roughly 7 s (assuming the storm persists for at least 4 hours). The Young and Verhagen formula, however, predicts a lower wave height of 2.6 m. This diminished result is attributed to the formula's calibration for shallow waters, whilst Lake Garda is notably deep.
Bretschneider Formula: Lake Garda
Based on Bretschneider's formula:
Predicted wave height: 3.54 meters
Predicted wave period: 7.02 seconds
Wilson Formula: Lake Garda
Utilizing Wilson's formula, the predictions are:
Predicted wave height: 3.56 meters
Predicted wave period: 7.01 seconds
Young & Verhagen Formula: Lake Garda
Young & Verhagen's formula, which typically applies to shallow waters, yields:
Predicted wave height: 2.63 meters
Predicted wave period: 6.89 seconds
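The Bretschneider figures above can be reproduced approximately with a short script. The article does not state which coefficient set it used, so the commonly quoted SMB deep-water coefficients are assumed here; small differences from the quoted values are to be expected:

```python
import math

def bretschneider_deep(U, F, g=9.81):
    """SMB/Bretschneider deep-water wave growth for wind speed U (m/s)
    and fetch F (m); returns (Hs in metres, Ts in seconds)."""
    F_hat = g * F / U**2                           # dimensionless fetch
    H_hat = 0.283 * math.tanh(0.0125 * F_hat**0.42)
    T_hat = 7.54 * math.tanh(0.077 * F_hat**0.25)
    return H_hat * U**2 / g, T_hat * U / g

# Lake Garda along its axis: U = 25 m/s, fetch about 45 km.
Hs, Ts = bretschneider_deep(U=25.0, F=45_000.0)
```

This gives roughly Hs ≈ 3.5 m and Ts ≈ 7.3 s, in line with the values quoted above.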
Shallow and coastal waters
Global wind wave models such as WAVEWATCH and WAM are not reliable in shallow water areas near the coast. To address this issue, the SWAN (Simulating WAves Nearshore) program was developed in 1993 by Delft University of Technology, in collaboration with Rijkswaterstaat and the Office of Naval Research in the United States. Initially, the main focus of this development was on wave changes due to the effects of breaking, refraction, and the like. The program was subsequently developed to include analysis of wave growth.
SWAN essentially calculates the energy of a wave field (in the form of a wave spectrum) and derives the significant wave height from this spectrum. SWAN lacks a user interface for easily creating input files and presenting the output. The program is open-source, and many institutions and companies have since developed their own user environments for SWAN. The program has become a global standard for such calculations, and can be used in both one-dimensional and two-dimensional modes.
One-dimensional approach
The computation time for a calculation with SWAN is in the order of seconds. In one-dimensional mode, results are available from the input of a cross-sectional profile and wind information. In many cases, this can yield a sufficiently reliable value for the local wave spectrum, particularly when the wind path crosses shallow areas.
Example: wave growth calculation in The Netherlands
As an example, a calculation of the wave growth in the Westerschelde has been made. For this example, the one-dimensional version of SWAN and the open-source user interface SwanOne were used. The wave height at the base of the sea dike near Goudorpe on South Beveland, just west of the Westerscheldetunnel, was calculated, with the wind coming from the SW at a speed of 25 m/s (force 9 to 10). In the graph, this is from left to right. The dike is quite far from deep water, with a salt marsh in front of it.
The calculation was made for low water, average water level, and high water. At high tide, the salt marsh is under water; at low tide, it falls dry (the tidal difference here is about 5 metres). At high tide, there is a constant increase in wave height, which is faster in deep water than in shallow water. At low tide, some sandbanks are dry, and wave growth has to start all over again. Close to the shore (beyond the Gat van Borssele), there is a tall salt marsh; at low tide, there are no waves there, at average tide, the wave height decreases to almost nothing at the dike, and at high tide, there is still a wave height of 1 m present. The measure of period shown in these graphs is the spectral period (Tm-1,0).
Two-dimensional approach
In situations where significant refraction occurs, or where the coastline is irregular, the one-dimensional method falls short, necessitating the use of a field model. Even in a relatively rectangular lake like Lake Garda, a two-dimensional calculation provides considerably more information, especially in its southern regions. The figure below demonstrates the results of such a calculation.
This case highlights another limitation of the one-dimensional approach: at certain points, the actual wave growth is less than predicted by the one-dimensional model. This discrepancy arises because the model assumes a broad wave field, which isn't the case for narrow lakes.
Validation
Comparison of the wave model forecasts with observations is essential for characterizing model deficiencies and identifying areas for improvement. In-situ observations are obtained from buoys, ships and oil platforms. Altimetry data from satellites, such as GEOSAT and TOPEX, can also be used to infer the characteristics of wind waves.
Hindcasts of wave models during extreme conditions also serves as a useful test bed for the models.
Reanalyses
A retrospective analysis, or reanalysis, combines all available observations with a physical model to describe the state of a system over a time period of decades. Wind waves are a part of both the NCEP Reanalysis and the ERA-40 from the ECMWF. Such resources permit the creation of monthly wave climatologies, and can track the variation of wave activity on interannual and multi-decadal time scales. During the northern hemisphere winter, the most intense wave activity is located in the central North Pacific south of the Aleutians, and in the central North Atlantic south of Iceland. During the southern hemisphere winter, intense wave activity circumscribes the pole at around 50°S, with 5 m significant wave heights typical in the southern Indian Ocean.
References
Physical oceanography
Water waves | Wind wave model | [
"Physics",
"Chemistry"
] | 3,589 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Waves",
"Physical oceanography",
"Fluid dynamics"
] |
20,261,953 | https://en.wikipedia.org/wiki/Effects%20of%20fatigue%20on%20safety | Fatigue is a major safety concern in many fields, but especially in transportation, because fatigue can result in disastrous accidents. Fatigue is considered an internal precondition for unsafe acts because it negatively affects the human operator's internal state. Research has generally focused on pilots, truck drivers, and shift workers.
Fatigue can be a symptom of a medical problem, but more commonly it is a normal physiological reaction to exertion, lack of sleep, boredom, changes to sleep-wake schedules (including jet lag), or stress.
In some cases, driving after 18–24 hours without sleep is equivalent to a blood alcohol content of 0.05%–0.10%.
Types
Fatigue can be both physical and mental. Physical fatigue is the inability to continue functioning at the level of one's normal abilities; a person with physical fatigue cannot lift as heavy a box or walk as far as he could if not fatigued.
Mental fatigue, on the other hand, manifests rather in sleepiness or slowness. A person with mental fatigue may fall asleep, may react very slowly, or may be inattentive. With microsleeps, the person may be unaware that he was asleep. Without an adequate amount of sleep, tasks seem more complicated, concentration drops, and the result can ultimately be fatal mistakes.
Factors
The Federal Motor Carrier Safety Administration identifies three main factors in driver fatigue: Circadian rhythm effects, sleep deprivation and cumulative fatigue effects, and industrial or "time-on-task" fatigue.
Circadian rhythm effects describe the tendency for humans to experience a normal cycle in attentiveness and sleepiness through the 24-hour day. Those with a conventional sleep pattern (sleeping for seven or eight hours at night) experience periods of maximum fatigue in the early hours of the morning and a lesser period in the early afternoon. During the low points of this cycle, one experiences reduced attentiveness. During the high points, it is difficult to sleep soundly. The cycle is anchored in part by ambient lighting (darkness causes a person's body to release the hormone melatonin, which induces sleep), and by a person's imposed pattern of regular sleeping and waking times. The influence of the day-night cycle is never fully displaced (artificial lighting inhibits melatonin release more weakly than sunlight), and the performance of night shift workers usually suffers. Circadian rhythms are persistent, and can only be shifted by one to two hours forward or backward per day. Changing the starting time of a work shift by more than these amounts will reduce attentiveness, which is common after the first night shift following a "weekend" break during which conventional sleep times were followed. This effect can also be seen in non-shift-workers who revert to a later schedule on the weekend and experience fatigue and sleepiness when returning to work early on Monday morning. The effects of sleep deprivation vary substantially from person to person.
Sleep deprivation and cumulative fatigue effects describe how individuals who fail to have an adequate period of sleep (7–8 hours in 24 hours) or who have been awake longer than the conventional 16–17 hours will suffer sleep deprivation. A sleep deficit accumulates with successive sleep-deprived days, and additional fatigue may be caused by breaking daily sleep into two shorter periods in place of a single unbroken period of sleep. A sleep deficit is not instantly reduced by one night's sleep; it may take two or three conventional sleep cycles for an individual to return to unimpaired performance.
Industrial or "time-on-task" fatigue describes fatigue that is accumulated during the working period, and affects performance at different times during the shift. Performance declines the longer a person is engaged in a task, gradually during the first few hours and more steeply toward the end of a long period at work. Reduced performance has also been observed in the first hour of work as an individual adjusts to the working environment.
In addition to the primary factors identified by the FMCSA, other potential contributors to fatigue during transportation have been identified. These include endogenous factors such as mental stress and age of the vehicle operator, as well as exogenous or environmental stressors, such as the presence of non sea-level cabin pressure in-flight, vehicle noise, and vehicle vibration/acceleration (which contributes to the sopite syndrome). Many of the exogenous contributors merit further study because they are present during transportation operations but not in most lab studies of fatigue.
In aviation
The International Civil Aviation Organization (ICAO), which codifies standards and regulations for international air navigation, defines fatigue as: "A physiological state of reduced mental or physical performance capability resulting from sleep loss or extended wakefulness, circadian phase, or workload (mental and/or physical activity) that can impair a crew member's alertness and ability to safely operate an aircraft or perform safety related duties."
Human factors are the primary causal factor in aviation accidents. In 1999, the National Aeronautics and Space Administration (NASA) testified before the U.S. House of Representatives that pilot fatigue impacts aviation safety with "unknown magnitude". The report cited evidence of fatigue issues in areas including aviation operations, laboratory studies, high-fidelity simulations, and surveys. The report indicates that studies consistently show that fatigue is an ongoing problem in aviation safety. In 2009, the Aerospace Medical Association listed long duty hours, insufficient sleep, and circadian disruptions as among the largest contributing factors to pilot fatigue. Fatigue can result in pilot error, slowed responses, missed opportunities, and incorrect responses to emergency situations.
A November 2007 report by the National Transportation Safety Board indicates that air crew fatigue is a much larger, and more widespread, problem than previously reported. The report indicates that since 1993 there have been 10 major airline crashes caused by aircrew fatigue, resulting in 260 fatalities. Additionally, a voluntary anonymous reporting system known as ASAP, Aviation Safety Action Program, reveals widespread concern among aviation professionals about the safety implications of fatigue. The NTSB published that FAA's response to fatigue is unacceptable and listed the issue among its "Most Wanted" safety issues.
Safety experts estimate that pilot fatigue contributes to 15-20% of fatal aviation accidents caused by human error. They also establish that the probability of a human-factor accident increases with the time pilots are on duty, especially for duty periods of 13 hours and above (see the following statements):
"It is estimated (e.g. by the NTSB) that fatigue contributes to 20-30% of transport accidents (i.e. air, sea, road, rail). Since, in commercial aviation operations, about 70% of fatal accidents are related to human error, it can be assumed that the risk of the fatigue of the operating crew contributes about 15-20% to the overall accident rate. The same view of fatigue as a major risk factor is shared by leading scientists in the area, as documented in several consensus statements."
"For 10-12 hours of duty time the proportion of accident pilots with this length of duty period is 1.7 times as large as for all pilots. For pilots with 13 or more hours of duty, the proportion of accident pilot duty periods is over five and a half times as high. [...] 20% of human factor accidents occurred to pilots who had been on duty for 10 or more hours, but only 10% of pilot duty hours occurred during that time. Similarly, 5% of human factor accidents occurred to pilots who had been on duty for 13 or more hours, where only 1% of pilot duty hours occur during that time. There is a discernible pattern of increased probability of an accident the greater the hours of duty time for pilots.".
Among drivers
Many countries regulate working hours for truck drivers to reduce accidents caused by driver fatigue. The number of hours spent driving has a strong correlation to the number of fatigue-related accidents. According to numerous studies, the risk of fatigue is greatest between the hours of midnight and six in the morning, and increases with the total length of the driver's trip.
Among healthcare providers
Fatigue among doctors is a recognized problem. It can impair performance, causing harm to patients. A study using anonymous surveys completed by junior doctors in New Zealand found that 30% of respondents scored as "excessively sleepy" on the Epworth Sleepiness Scale and 42% could recall a fatigue-related clinical error in the past six months.
In the US, shift length is limited for nurses by federal regulation and some state laws.
On ships
Fatigue on board is still a major factor in accidents that lead to casualties, damage and pollution. Studies show that most accidents happen during the night, peaking around 4 AM, due to the human circadian rhythm. Studies such as Project Horizon have recently been done to analyse which factors cause this fatigue. Lack of sleep and quality of sleep are two of the main issues. Lack of sleep is due to the long hours that workers (especially the officers) have to put in, with work weeks of more than 70 hours. Quality of sleep may be affected by a variety of factors: the quality of the food on board; vibrations from the engine and waves; noise from repairs, other work or the engine; sleeping only in naps (two or three naps a day rather than eight hours in a single stretch) because of the watch system and secondary jobs; and stress on board, especially when arriving in port, when all hands have to be on deck whatever the time.
With companies trying to reduce costs, there are fewer crew members. Turnarounds in port have to be as fast as possible because time spent in port is very expensive. All of this adds work and stress to the crew on board, draining their energy, which leads to errors due to fatigue. The ILO has conventions that try to restrict the maximum working hours on board and to determine the minimum rest period of seafarers. As the maritime industry is highly competitive and there are fewer and fewer crew members on board, it is difficult to avoid working overtime.
See also
Artificial Passenger
Fatigue Avoidance Scheduling Tool
Human reliability
Occupational safety and health
Sleep-deprived driving
References
Transport safety | Effects of fatigue on safety | [
"Physics"
] | 2,055 | [
"Physical systems",
"Transport",
"Transport safety"
] |
20,262,149 | https://en.wikipedia.org/wiki/Trinomial%20tree | The trinomial tree is a lattice-based computational model used in financial mathematics to price options. It was developed by Phelim Boyle in 1986. It is an extension of the binomial options pricing model, and is conceptually similar. It can also be shown that the approach is equivalent to the explicit finite difference method for option pricing. For fixed income and interest rate derivatives see Lattice model (finance)#Interest rate derivatives.
Formula
Under the trinomial method, the underlying stock price is modeled as a recombining tree, where, at each node, the price has three possible paths: an up, down and stable or middle path. These values are found by multiplying the value at the current node by the appropriate factor u, m or d, where

u = e^(σ√(2Δt)), d = e^(−σ√(2Δt)) = 1/u (the structure is recombining), m = 1,

and the corresponding probabilities are:

p_u = ( (e^((r−q)Δt/2) − e^(−σ√(Δt/2))) / (e^(σ√(Δt/2)) − e^(−σ√(Δt/2))) )²
p_d = ( (e^(σ√(Δt/2)) − e^((r−q)Δt/2)) / (e^(σ√(Δt/2)) − e^(−σ√(Δt/2))) )²
p_m = 1 − (p_u + p_d).

In the above formulae: Δt is the length of time per step in the tree and is simply time to maturity divided by the number of time steps; r is the risk-free interest rate over this maturity; σ is the corresponding volatility of the underlying; q is its corresponding dividend yield.

As with the binomial model, these factors and probabilities are specified so as to ensure that the price of the underlying evolves as a martingale, while the moments considering node spacing and probabilities are matched to those of the log-normal distribution (and with increasing accuracy for smaller time-steps). Note that for p_u, p_d, and p_m to be in the interval (0, 1) the following condition on Δt has to be satisfied: Δt < 2σ²/(r−q)².
Once the tree of prices has been calculated, the option price is found at each node largely as for the binomial model, by working backwards from the final nodes to the present node. The difference is that the option value at each non-final node is determined based on the three (as opposed to two) later nodes and their corresponding probabilities.
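The backward induction described above can be sketched as a short, self-contained Python function for a European option. This is an illustrative implementation of the standard Boyle parameterisation, not library code; the function name and argument conventions are assumptions for this sketch.

```python
import math

def trinomial_price(S0, K, T, r, sigma, q=0.0, steps=200, kind="call"):
    """Price a European option on a Boyle trinomial tree.

    S0: spot price, K: strike, T: time to maturity (years),
    r: risk-free rate, sigma: volatility, q: dividend yield.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(2.0 * dt))  # up factor; d = 1/u, m = 1

    # Probabilities built from half-step exponentials.
    e_rq = math.exp((r - q) * dt / 2.0)
    e_up = math.exp(sigma * math.sqrt(dt / 2.0))
    e_dn = 1.0 / e_up
    pu = ((e_rq - e_dn) / (e_up - e_dn)) ** 2
    pd = ((e_up - e_rq) / (e_up - e_dn)) ** 2
    pm = 1.0 - pu - pd

    disc = math.exp(-r * dt)

    # Terminal prices: 2*steps + 1 nodes, sorted from S0*u^steps down to S0*u^-steps.
    prices = [S0 * u ** (steps - i) for i in range(2 * steps + 1)]
    if kind == "call":
        values = [max(p - K, 0.0) for p in prices]
    else:
        values = [max(K - p, 0.0) for p in prices]

    # Backward induction: each node depends on its three later nodes.
    for step in range(steps, 0, -1):
        values = [
            disc * (pu * values[i] + pm * values[i + 1] + pd * values[i + 2])
            for i in range(2 * step - 1)
        ]
    return values[0]
```

With a few hundred steps the result converges toward the Black–Scholes value (approximately 10.45 for an at-the-money one-year call with S0 = K = 100, r = 5%, σ = 20%).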
If the length of time-steps is taken as an exponentially distributed random variable and interpreted as the waiting time between two movements of the stock price then the resulting stochastic process is a birth–death process. The resulting model is soluble and there exist analytic pricing and hedging formulae for various options.
Application
The trinomial model is considered to produce more accurate results than the binomial model when fewer time steps are modelled, and is therefore used when computational speed or resources may be an issue. For vanilla options, as the number of steps increases, the results rapidly converge, and the binomial model is then preferred due to its simpler implementation. For exotic options the trinomial model (or adaptations) is sometimes more stable and accurate, regardless of step-size.
See also
Binomial options pricing model
Valuation of options
Option: Model implementation
Korn–Kreer–Lenssen model
Implied trinomial tree
References
External links
Phelim Boyle, 1986. "Option Valuation Using a Three-Jump Process", International Options Journal 3, 7–12.
Paul Clifford et al. 2010. Pricing Options Using Trinomial Trees, University of Warwick
Tero Haahtela, 2010. "Recombining Trinomial Tree for Real Option Valuation with Changing Volatility", Aalto University, Working Paper Series.
Ralf Korn, Markus Kreer and Mark Lenssen, 1998. "Pricing of european options when the underlying stock price follows a linear birth-death process", Stochastic Models Vol. 14(3), pp 647 – 662
Peter Hoadley. Trinomial Tree Option Calculator (Tree Visualized)
Mathematical finance
Options (finance)
Models of computation
Trees (data structures)
Financial models | Trinomial tree | [
"Mathematics"
] | 758 | [
"Applied mathematics",
"Mathematical finance"
] |
34,400,236 | https://en.wikipedia.org/wiki/Aeroballistic%20Range%20Association | The Aeroballistic Range Association (ARA) is a nonprofit organization for facilities engaged in research in ballistics and the developing of guns and related launchers.
Purpose
The organization was formed to share information about facility design, instrumentation development and range operations. The organization holds a yearly meeting, usually at a location near an active test facility. Members are encouraged to present at least one technical paper per meeting.
Student outreach
The ARA started an outreach program intended to promote ballistic research in the early stages of career development. It established a student research paper contest that provided cash stipends of around $1,000 for winners, as well as invitations to ARA meetings to present their papers.
See also
Ballistics
AEDC Range G
AEDC Ballistic Range S-3
Ames Research Center
References
External links
Aeroballistic Range Association (official)
Trade associations based in the United States
Ballistics
Organizations established in 1961
Non-profit organizations based in the United States | Aeroballistic Range Association | [
"Physics"
] | 188 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
34,406,020 | https://en.wikipedia.org/wiki/SAV001 | SAV001-H is the first candidate preventive HIV vaccine using a killed or "dead" version of the HIV-1 virus (inactivated vaccine).
The vaccine was developed by Dr. Chil-Yong Kang and his research team at Western University’s Schulich School of Medicine & Dentistry in Canada.
The results of the Phase I clinical trial, completed in August 2013, showed no serious adverse effects in 33 participants.
Vaccine design
The SAV001-H vaccine is considered to be the first whole killed genetically modified HIV-1 vaccine.
According to Dr. Kang, the HIV-1 strain was genetically engineered such that first, “the gene responsible for pathogenicity, known as nef” is removed to make it non-pathogenic. Then, the signal peptide gene is replaced with a honey bee toxin (melittin) signal peptide to make virus production much higher and faster. In the signal peptide exchange process, another gene called vpu is lost due to an overlap between the two genes. Finally, this genetically modified version of HIV-1 (i.e., HIV-1 with nef negative, vpu negative and its signal peptide gene replaced with that of a honey bee) is grown in human T-lymphocytes (A3.01 cell line), collected, purified and inactivated by AT-2 (aldrithiol-2 or 2,2'-dipyridyldisulfide) chemical treatment and gamma irradiation. AT-2 chemical treatment is used because it does not affect the viral structure and immunogens.
The killed virus vaccine approach successfully prevents polio, influenza, cholera, mumps, rabies, typhoid fever and hepatitis A. At the moment, there are also 16 animal vaccines using the killed virus design. A vaccine against feline immunodeficiency virus (a virus related to HIV which infects cats) used the killed virus design; the vaccine was discontinued from production for multiple reasons, including commercial non-viability, protection not covering all FIV strains, and concerns over sarcoma at the injection site caused by adjuvants.
Clinical trials
Phase I clinical trial (NCT01546818) in HIV-infected individuals
Funded by Sumagen Canada, the government of Canada and the Bill and Melinda Gates Foundation, it started in March 2012 to assess its safety, tolerability, and immune responses. This was a randomized, double-blind, placebo-controlled trial, administering vaccine intramuscularly to 33 chronic HIV-1 infected individuals being treated with HAART.
The trial was completed in August 2013. It reported no serious adverse effects. The vaccine induced antibodies in participants. Antibodies against gp120 surface antigen and P24 capsid antigen increased up to 6-fold and 64-fold, respectively, and the increased level of antibody was continued throughout the 52-week study period. Broadly neutralizing antibodies were found in some blood samples of the participants.
Phase II clinical trial
The Phase II clinical trial was expected to begin in 2018 in the United States to measure immune responses. The researchers planned to recruit about 600 HIV-negative volunteers who are in the high risk category for HIV infections such as commercial sex workers, men who have sex with men (MSM), injecting drug users, and people who have unsafe sex with multiple partners.
Therapeutic HIV vaccine status
Dr. Kang has also developed a therapeutic HIV vaccine employing recombinant vesicular stomatitis viruses carrying HIV-1 gag, pol and/or env genes. Researchers reported that the therapeutic vaccine induced robust cellular immune responses in recently conducted animal tests.
History of killed HIV vaccine
Although the whole killed virus vaccine strategy is successfully used worldwide to prevent diseases like polio, influenza, cholera, mumps, rabies, typhoid fever and hepatitis A, it did not receive serious attention in HIV vaccine development, for scientific, economic and technical reasons. First, there are risks associated with inadequately inactivated or not killed HIV remaining in vaccines. Second, massive production of HIV is not economically feasible, if not impossible. Third, many researchers believe that inactivating/killing HIV by chemical treatment also removes its antigenicity, so that it fails to induce both neutralizing antibodies and cytotoxic T-lymphocyte or CD8+ T cells (CTL). Fourth, early studies with monkeys using the killed simian immunodeficiency virus (SIV) vaccine showed some optimism but it turned out that the protection was attributable to responses to both the cellular proteins on the SIV vaccine and on the challenge virus grown not in monkey cells but in human cells. Fifth, lab-adapted HIV-1 seemed to lose envelope glycoprotein, gp120, during preparation.
Nonetheless, many scientists and researchers believe that the whole killed virus vaccine strategy is a feasible option for an HIV vaccine. Jonas Salk had developed a therapeutic whole killed HIV vaccine in 1987, called Remune, which is being developed by Immune Response BioPharma, Inc. The Remune vaccine completed over 25 clinical studies and showed a robust mechanism of action, restoring white blood cell counts in CD4 and CD8 T cells by reducing viral load and increasing immunity.
Developer and organizers
The developer of SAV001-H, Dr. Chil-yong Kang, is a professor of Virology in the Department of Microbiology and Immunology, Schulich School of Medicine & Dentistry at the University of Western Ontario since 1992. In addition to HIV preventive and therapeutic vaccine candidates, Dr. Kang is developing a second generation vaccine against hepatitis B and hepatitis C virus.
The patents related to the SAV001 vaccine were registered in more than 70 countries, including the U.S., the European Union, China, India, and South Korea.
References
External links
Clinical Trial Site for SAV001-H
Dr. Chil-yong Kang's Lab
Killed HIV Vaccine Advocate
Phase I Trial Details
IAVI Report: Whole Killed AIDS Vaccines
US Patent: HIV COMBINATION VACCINE AND PRIME BOOST
Sumagen Canada Homepage
Curocom Homepage
Schulich School of Medicine & Dentistry
HIV vaccine research | SAV001 | [
"Chemistry"
] | 1,273 | [
"HIV vaccine research",
"Drug discovery"
] |
34,407,360 | https://en.wikipedia.org/wiki/Kovac%27s%20reagent | Kovács reagent is a biochemical reagent consisting of isoamyl alcohol, para-dimethylaminobenzaldehyde (DMAB), and concentrated hydrochloric acid. It is used for the diagnostical indole test, to determine the ability of the organism to split indole from the amino acid tryptophan. The indole produced yields a red complex with para-dimethylaminobenzaldehyde under the given conditions. This was invented by the Hungarian physician Nicholas Kovács and was published in 1928. This reagent is used in the confirmation of E. coli and many other pathogenic microorganisms.
See also
Ehrlich's reagent is similar but uses ethyl alcohol or 1-propyl alcohol.
References
Kovács, N. (1928): Eine vereinfachte Methode zum Nachweis der Indolbildung durch Bakterien. Zeitschrift für Immunitätsforschung und Experimentelle Therapie, 55, 311.
Chemical tests | Kovac's reagent | [
"Chemistry",
"Biology"
] | 231 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"Chemical tests"
] |
34,409,273 | https://en.wikipedia.org/wiki/Sum%20activity%20of%20peripheral%20deiodinases | The sum activity of peripheral deiodinases (GD, also referred to as deiodination capacity, total deiodinase activity or, if calculated from levels of thyroid hormones, as SPINA-GD) is the maximum amount of triiodothyronine produced per time-unit under conditions of substrate saturation. It is assumed to reflect the activity of deiodinases outside the central nervous system and other isolated compartments. GD is therefore expected to reflect predominantly the activity of type I deiodinase.
How to determine GD
GD can be determined experimentally by exposing a cell culture system to saturating concentrations of T4 and measuring the T3 production. Whole body deiodination activity can be assessed by measuring production of radioactive iodine after loading the organism with marked thyroxine.
However, both approaches are faced with drawbacks. Measuring deiodination in cell culture delivers little, if any, information on total deiodination activity. Using marked thyroxine exposes the body to thyrotoxicosis and radioactivity. Additionally, it is not possible to differentiate step-up reactions resulting in T3 production from the step-down reaction catalyzed by type 3 deiodination, which mediates production of reverse T3. Distinguishing the contribution of distinct deiodinases is possible, however, by sequential approaches using deiodinase-specific blocking agents, but this approach is cumbersome and time-consuming.
In vivo, it may therefore be beneficial to estimate GD from equilibrium levels of T4 and T3. It is obtained with

GD = (β31 (KM1 + [FT4]) (1 + K30 [TBG]) [FT3]) / (α31 [FT4])

or

GD = (β31 (KM1 + [FT4]) [TT3]) / (α31 [FT4])

[FT4]: Serum free T4 concentration (in pmol/L)
[FT3]: Serum free T3 concentration (in pmol/L)
[TT3]: Serum total T3 concentration (in nmol/L)
[TBG]: Serum thyroxine-binding globulin concentration
α31: Dilution factor for T3 (reciprocal of apparent volume of distribution, 0.026 L−1)
β31: Clearance exponent for T3 (8e-6 sec−1) (i. e., reaction rate constant for degradation)
KM1: Binding constant of type-1-deiodinase (5e-7 mol/L)
K30: Binding constant T3-TBG (2e9 L/mol)
The method is based on mathematical models of thyroid homeostasis. Calculating deiodinase activity with one of these equations is an inverse problem. Therefore, certain conditions (e.g. stationarity) have to be fulfilled to deliver a reliable result.
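As an illustration only, the free-T3 form of the estimator — GD = β31(KM1 + [FT4])(1 + K30[TBG])[FT3] / (α31[FT4]) — can be computed in a few lines of Python. The constants are the structure parameters listed above; the TBG concentration of 300 nmol/L and the function name are assumptions for this sketch, not part of any published software.

```python
def spina_gd(ft4_pmol_l, ft3_pmol_l):
    """Estimate the sum activity of peripheral deiodinases (SPINA-GD)
    from equilibrium serum levels of free T4 and free T3.

    Inputs in pmol/L; result returned in nmol/s.
    """
    alpha31 = 0.026   # dilution factor for T3 (1/L)
    beta31 = 8e-6     # clearance exponent for T3 (1/s)
    k_m1 = 5e-7       # binding constant of type 1 deiodinase (mol/L)
    k30 = 2e9         # T3-TBG binding constant (L/mol)
    tbg = 300e-9      # assumed standard TBG concentration (mol/L)

    ft4 = ft4_pmol_l * 1e-12  # convert pmol/L -> mol/L
    ft3 = ft3_pmol_l * 1e-12
    gd = (beta31 * (k_m1 + ft4) * (1.0 + k30 * tbg) * ft3) / (alpha31 * ft4)
    return gd * 1e9  # mol/s -> nmol/s
```

For typical euthyroid values (FT4 ≈ 15 pmol/L, FT3 ≈ 5 pmol/L) this yields roughly 30 nmol/s.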
The product of SPINA-GD times the urinary iodine excretion can be used to assess iodine-independent factors affecting deiodinase activity, e.g. selenium deficiency.
Reference range
The equations and their parameters are calibrated for adult humans with a body mass of 70 kg and a plasma volume of ca. 2.5 L.
Clinical significance
Validity
SPINA-GD correlates to the T4-T3 conversion rate in slow tissue pools, as determined with isotope-based measurements in healthy volunteers. It was also shown that GD correlates with resting energy expenditure, body mass index and thyrotropin levels in humans, and that it is reduced in nonthyroidal illness with hypodeiodination. Multiple studies demonstrated SPINA-GD to rise after initiation of substitution therapy with selenium, a trace element that is essential for the synthesis of deiodinases. Conversely, it was observed that SPINA-GD is reduced in persons positive for autoantibodies to selenoprotein P, which is assumed to be involved in transport and storage of selenium.
Clinical utility
Compared to both healthy volunteers and subjects with hypothyroidism and hyperthyroidism, SPINA-GD is reduced in subacute thyroiditis. In this condition, it has a higher specificity, positive and negative likelihood ratio than serum concentrations of thyrotropin, free T4 or free T3. These measures of diagnostic utility are also high in nodular goitre, where SPINA-GD is elevated. Among subjects with subclinical thyrotoxicosis, calculated deiodinase activity is significantly lower in exogenous thyrotoxicosis (resulting from therapy with levothyroxine) than in true hyperthyroidism (ensuing from toxic adenoma, toxic multinodular goitre or Graves' disease). SPINA-GD may therefore be an effective biomarker for the differential diagnosis of thyrotoxicosis.
Compared to healthy subjects, SPINA-GD is significantly reduced in euthyroid sick syndrome.
Pathophysiological and therapeutic implications
Recent research revealed total deiodinase activity to be higher in untreated hypothyroid patients as long as thyroid tissue is still present. This effect may ensue from the existence of an effective TSH-deiodinase axis or TSH-T3 shunt. After total thyroidectomy or high-dose radioiodine therapy (e.g. in treated thyroid cancer) as well as after initiation of substitution therapy with levothyroxine the activity of step-up deiodinases decreases and the correlation of SPINA-GD to thyrotropin concentration is lost. In patients suffering from toxic adenoma, toxic multinodular goitre and Graves’ disease low-dose radioiodine therapy leads to a significant reduction of SPINA-GD as well.
SPINA-GD is elevated in obesity. This applies to both the metabolically healthy obese (MHO) or metabolically unhealthy obese (MUO) phenotypes. In two large population-based cohorts within the Study of Health in Pomerania SPINA-GD was positively correlated to some markers of body composition including body mass index (BMI), waist circumference, fat-free mass and body cell mass, confirming observations in the NHANES dataset and in a Chinese study. This positive association was age-dependent and with respect to BMI significant in young subjects only, but with respect to body cell mass stronger in elderly persons. Generally, SPINA-GD seems to be upregulated in metabolic syndrome, as demonstrated by a significant correlation to the triglyceride-glucose index, a marker of insulin resistance.
SPINA-GD is reduced in low-T3 syndrome and certain chronic diseases, e.g. chronic fatigue syndrome, chronic kidney disease, short bowel syndrome or geriatric asthma. Six months after the primary infection, it correlates negatively to the FS-14 score for fatigue in patients affected by Long COVID (PASC). In Graves' disease, SPINA-GD is initially elevated but decreases with antithyroid treatment in parallel to declining TSH receptor autoantibody titres. Although takotsubo syndrome (TTS) results in most cases from psychosocial stressors, thereby reflecting type 2 allostatic load, SPINA-GD has been described to be reduced in TTS. This may result from concomitant non-thyroidal illness syndrome, so that the clinical phenotype represents overlapping type 1 and type 2 allostatic response. In a large register-based study, reduced SPINA-GD predicted a poor outcome of Takotsubo syndrome.
In certain psychiatric diseases, including major depression, bipolar disorder and schizophrenia SPINA-GD is reduced compared to healthy controls. This observation is supported by negative correlation of SPINA-GD with the depression percentiles in the Hospital Anxiety and Depression Scale (HADS).
In hyperthyroid men both SPINA-GT and SPINA-GD negatively correlate to erectile function, intercourse satisfaction, orgasmic function and sexual desire. Substitution with selenomethionine results in increased SPINA-GD in subjects with autoimmune thyroiditis.
In subjects with diabetes mellitus SPINA-GD is positively correlated to several bone resorption markers including the N-mid fragment of osteocalcin and procollagen type I N-terminal propeptide (P1NP), as well as, however in men only, the β-C-terminal cross-linked telopeptides of type I collagen (β-CTX). In the general population it is, however, positively associated with the bone mineral density of the femoral neck and with reduced risk of osteoporosis. In both diabetic and non-diabetic subjects it correlates (negatively) with age and concentrations of C-reactive protein, troponin T and B-type natriuretic peptide, and (positively) with the concentrations of total cholesterol, low-density lipoprotein and triglycerides.
Deiodination capacity proved to be an independent predictor of substitution dose in several trials that included persons on replacement therapy with levothyroxine.
Probably as a consequence of non-thyroidal illness syndrome, SPINA-GD predicts mortality in trauma and postoperative atrial fibrillation in patients undergoing cardiac surgery. The association to mortality is retained even after adjustment for other established risk factors, including age, APACHE II score and plasma protein binding of thyroid hormones. Correlations were also shown to age, total atrial conduction time, and concentrations of 3,5-diiodothyronine and B-type natriuretic peptide. SPINA-GD also correlates with several components of the kynurenine pathway, which might mirror an association to a pro-inflammatory milieu. Accordingly, in a population suffering from pyogenic liver abscess SPINA-GD correlated to markers of malnutrition, inflammation and liver failure. A study on subjects with Parkinson's disease found SPINA-GD to be significantly decreased in tremor-dominant and mixed subtypes compared to the akinetic-rigid type. Euthyroid sick syndrome may be the reason for variations of SPINA-GD in subjects treated with immune checkpoint inhibitors for cancer as well.
Endocrine disruptors may have pronounced effects on step-up deiodinases, as suggested by positive correlation of SPINA-GD to combined exposure to polycyclic aromatic hydrocarbons (PAHs) and urine concentrations of cadmium and phthalate metabolites, negative correlation to paraben, mercury and bisphenol A concentration and a nonlinear association to the concentrations of per- and polyfluoroalkyl substances. In a cohort of manganese-exposed workers, SPINA-GD responded to a tenfold increase in concentrations of titanium, nickel, selenium and strontium.
In a longitudinal evaluation of a large sample of the general US population over 10 years, reduced SPINA-GD significantly predicted reduced overall survival.
See also
Thyroid function tests
Thyroid's secretory capacity
Jostel's TSH index
Thyrotroph Thyroid Hormone Sensitivity Index
Thyroid Feedback Quantile-based Index
SimThyr
SPINA-GBeta
SPINA-GR
Notes
References
External links
SPINA Thyr: Open source software for calculating GT and GD
Package "SPINA" for the statistical environment R
Chemical pathology
Blood tests
Endocrine procedures
Thyroidological methods
Thyroid homeostasis
Structure parameters of thyroid function
Static endocrine function tests | Sum activity of peripheral deiodinases | [
"Chemistry",
"Biology"
] | 2,377 | [
"Biochemistry",
"Blood tests",
"Chemical pathology",
"Structure parameters of thyroid function"
] |
3,771,707 | https://en.wikipedia.org/wiki/Nature%20Materials | Nature Materials is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was launched in September 2002. Vincent Dusastre has been its chief editor since the launch.
Aims and scope
Nature Materials is focused on all topics within the combined disciplines of materials science and engineering. Topics published in the journal are presented from the view of the impact that materials research has on other scientific disciplines such as (for example) physics, chemistry, and biology. Coverage in this journal encompasses fundamental research and applications from synthesis to processing, and from structure to composition. Coverage also includes basic research and applications of properties and performance of materials. Materials are specifically described as "substances in the condensed states (liquid, solid, colloidal)", and which are "designed or manipulated for technological ends."
Furthermore, Nature Materials functions as a forum for the materials science community. It publishes interdisciplinary research results obtained across all areas of materials research, fostering exchange between scientists involved in the different disciplines. The journal's readership comprises scientists, in both academia and industry, involved in developing materials or working with materials-related concepts. Finally, Nature Materials regards materials research as a significant influence on the development of society.
Coverage
Research areas covered in the journal include:
Engineering and structural materials (metals, alloys, ceramics, composites)
Organic and soft materials (glasses, colloids, liquid crystals, polymers)
Bio-inspired, biomedical and biomolecular materials
Optical, photonic and optoelectronic materials
Magnetic materials
Materials for electronics
Superconducting materials
Catalytic and separation materials
Materials for energy
Nanoscale materials and processes
Computation, modelling and materials theory
Surfaces and thin films
Design, synthesis, processing and characterization techniques
In addition to primary research, Nature Materials also publishes review articles, news and views, research highlights about important papers published in other journals, commentaries, correspondence, interviews and analysis of the broad field of materials science.
Abstracting and indexing
Nature Materials is indexed in the following databases:
Chemical Abstracts Service – CASSI
Science Citation Index
Science Citation Index Expanded
Current Contents – Physical, Chemical & Earth Sciences
BIOSIS Previews
References
External links
Nature Materials
Nature Materials editors
Nature Research academic journals
Materials science journals
Monthly journals
English-language journals
Academic journals established in 2002 | Nature Materials | [
"Materials_science",
"Engineering"
] | 451 | [
"Materials science journals",
"Materials science"
] |
3,771,715 | https://en.wikipedia.org/wiki/Stacking-fault%20energy | The stacking-fault energy (SFE) is a materials property on a very small scale. It is denoted γSFE, in units of energy per area.
A stacking fault is an interruption of the normal stacking sequence of atomic planes in a close-packed crystal structure. These interruptions carry a certain stacking-fault energy. The width of a stacking fault is a consequence of the balance between the repulsive force between the two partial dislocations on one hand and the attractive force due to the surface tension of the stacking fault on the other hand. The equilibrium width is thus partially determined by the stacking-fault energy. When the SFE is high, the dissociation of a full dislocation into two partials is energetically unfavorable, and the material can deform either by dislocation glide or cross-slip. Lower SFE materials display wider stacking faults and have greater difficulty cross-slipping.
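The force balance described above admits a rough order-of-magnitude estimate: in a simple isotropic model the equilibrium partial separation scales as d ≈ G·b_p²/(2πγ). Both this prefactor (which in reality depends on dislocation character and elastic anisotropy) and the copper input values below are illustrative assumptions of this sketch, not measured results:

```python
import math

# illustrative values for copper (assumed for this sketch, not measured here)
G = 48e9                  # shear modulus, Pa
a = 0.361e-9              # FCC lattice parameter, m
gamma = 45e-3             # stacking-fault energy, J/m^2 (~45 mJ/m^2)

b_p = a / math.sqrt(6)    # Burgers vector magnitude of a (a/6)<112> Shockley partial
# isotropic estimate of the equilibrium stacking-fault width
d = G * b_p**2 / (2 * math.pi * gamma)
print(f"estimated stacking-fault width: {d * 1e9:.1f} nm")
```

With these inputs the estimate comes out at a few nanometres, i.e. a ribbon a few atomic spacings wide, which is why the fault width is called a property "on a very small scale".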
The SFE modifies the ability of a dislocation in a crystal to glide onto an intersecting slip plane.
Stacking faults and stacking fault energy
A stacking fault is an irregularity in the planar stacking sequence of atoms in a crystal – in FCC metals the normal stacking sequence is ABCABC etc., but if a stacking fault is introduced it may introduce an irregularity such as ABCBCABC into the normal stacking sequence. These irregularities carry a certain energy which is called the stacking-fault energy.
Influences on stacking fault energy
Stacking fault energy is heavily influenced by a few major factors, specifically base metal, alloying metals, percent of alloy metals, and valence-electron to atom ratio.
Alloying elements effects on SFE
It has long been established that the addition of alloying elements significantly lowers the SFE of most metals. Which element and how much is added dramatically affects the SFE of a material. The figures on the right show how the SFE of copper lowers with the addition of two different alloying elements; zinc and aluminum. In both cases, the SFE of the brass decreases with increasing alloy content. However, the SFE of the Cu-Al alloy decreases faster and reaches a lower minimum.
e/a ratio
Another factor that has a significant effect on the SFE of a material and is very interrelated with alloy content is the e/a ratio, or the ratio of valence electrons to atoms. Thornton showed this in 1962 by plotting the e/a ratio vs SFE for a few Cu based alloys. He found that the valence-electron to atom ratio is a good predictor of stacking fault energy, even when the alloying element is changed. This directly supports the graphs on the right. Zinc is a heavier element and only has two valence electrons, whereas aluminum is lighter and has three valence electrons. Thus each weight percent of aluminum has a much greater impact on the SFE of the Cu-based alloy than does zinc.
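The e/a bookkeeping behind Thornton's observation can be sketched in a few lines. The valence-electron counts used here (1 for Cu, 2 for Zn, 3 for Al) follow the conventional Hume-Rothery counting and are an assumption of this example; compositions are in atomic fraction:

```python
def e_per_a(solvent_valence, solute_valence, solute_at_frac):
    """Valence-electron-to-atom ratio of a binary substitutional alloy."""
    return (solvent_valence * (1 - solute_at_frac)
            + solute_valence * solute_at_frac)

# Cu (1 valence electron) with 10 at.% Zn (2) versus 10 at.% Al (3)
cu_zn = e_per_a(1, 2, 0.10)   # ~1.1
cu_al = e_per_a(1, 3, 0.10)   # ~1.2
print(cu_zn, cu_al)
```

Per atomic percent of solute, aluminum raises e/a twice as fast as zinc, consistent with the text's point that the SFE of the Cu-Al alloy falls faster than that of the brass.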
Effects of stacking fault energy on deformation and texture
The two primary methods of deformation in metals are slip and twinning. Slip occurs by dislocation glide of either screw or edge dislocations within a slip plane. Slip is by far the most common mechanism. Twinning is less common but readily occurs under some circumstances.
Twinning occurs when there are not enough slip systems to accommodate deformation and/or when the material has a very low SFE. Twins are abundant in many low SFE metals like copper alloys, but are rarely seen in high SFE metals like aluminum.
In order to accommodate large strains without fracturing, there must be at least five independent and active slip systems. When cross-slip frequently occurs and certain other criteria are met, sometimes only three independent slip systems are needed for accommodating large deformations.
Because of the different deformation mechanisms in high and low SFE materials, they develop different textures.
High SFE materials
High SFE materials deform by glide of full dislocations. Because there are no stacking faults, the screw dislocations may cross-slip. Smallman (1964) found that cross-slip happens under low stress in high SFE materials like aluminum. This gives a metal extra ductility because with cross-slip it needs only three other active slip systems to undergo large strains. This is true even when the crystal is not ideally oriented.
High SFE materials therefore do not need to change orientation in order to accommodate large deformations because of cross-slip. Some reorientation and texture development will occur as the grains move during deformation. Extensive cross-slip due to large deformation also causes some grain rotation. However, this re-orientation of grains in high SFE materials is much less prevalent than in low SFE materials.
Low SFE materials
Low SFE materials twin and create partial dislocations. Partials form instead of screw dislocations. Screws which do exist cannot cross-slip across stacking faults, even under high stresses. Five or more slip systems must be active for large deformations to occur because of the absence of cross-slip. For both the <111> and <100> directions there are six and eight different slip systems, respectively. If loading is not applied near one of those directions, five slip systems might be active. In this case, other mechanisms must also be in place to accommodate large strains.
Low SFE materials also twin when strained. If deformation twinning is combined with regular shear deformation, the grains eventually align towards a more preferred orientation. When many different grains align a highly anisotropic texture is created.
Notes
Materials science | Stacking-fault energy | [
"Physics",
"Materials_science",
"Engineering"
] | 1,147 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
3,774,763 | https://en.wikipedia.org/wiki/Michel%20parameters | The Michel parameters, usually denoted by ρ, η, ξ and δ, are four parameters used in describing the phase space distribution of leptonic decays of charged leptons, l⁻ → l′⁻ ν̄ ν. They are named after the physicist Louis Michel. Sometimes instead of δ, the product ξδ is quoted. Within the Standard Model of electroweak interactions, these parameters are expected to be
ρ = 3/4, η = 0, ξ = 1, δ = 3/4 (so that ξδ = 3/4).
Precise measurements of energy and angular distributions of the daughter leptons in decays of polarized muons and tau leptons are so far in good agreement with these predictions of the Standard Model.
Muon decay
Consider the decay of the positive muon:
In the muon rest frame, the energy and angular distribution of the positrons emitted in the decay of a polarised muon, expressed in terms of the Michel parameters and neglecting electron and neutrino masses and the radiative corrections, is
d²Γ / (dx d cos θ) ∝ x² [ 3(1 − x) + (2/3)ρ(4x − 3) + P_μ ξ cos θ ( (1 − x) + (2/3)δ(4x − 3) ) ],
where P_μ is the muon polarisation, x = 2E_e/m_μ is the reduced positron energy, and θ is the angle between the muon spin direction and the positron momentum direction. For the decay of the negative muon, the sign of the term containing P_μ ξ should be inverted.
For the decay of the positive muon, the expected decay distribution for the Standard Model values of the Michel parameters is
d²Γ / (dx d cos θ) ∝ x² [ (3 − 2x) + P_μ cos θ (2x − 1) ]
Integration of this expression over electron energy gives the angular distribution of the daughter positrons:
dΓ / d cos θ ∝ 1 + (1/3) P_μ cos θ
The positron energy distribution integrated over the polar angle is
dΓ / dx ∝ x² (3 − 2x)
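Taking the Standard Model spectrum x²[(3 − 2x) + P_μ cos θ (2x − 1)] as given (normalization omitted), the 1/3 coefficient of the angular distribution can be checked with exact rational arithmetic; this is a self-contained verification sketch, not part of the article:

```python
from fractions import Fraction

# Standard Model positron spectrum in polarised mu+ decay, up to normalization
# and neglecting the electron mass:
#   d^2Gamma ∝ x^2 [ (3 - 2x) + P cosθ (2x - 1) ],  0 <= x <= 1.
# Integrate the isotropic and anisotropic pieces exactly over x.

def integrate_01(coeffs):
    """Exact integral over [0, 1] of sum(coeffs[n] * x^n)."""
    return sum(Fraction(c) / (n + 1) for n, c in enumerate(coeffs))

iso = integrate_01([0, 0, 3, -2])   # ∫ x^2 (3 - 2x) dx = 1/2
ani = integrate_01([0, 0, -1, 2])   # ∫ x^2 (2x - 1) dx = 1/6

print(ani / iso)   # 1/3, so dΓ/dcosθ ∝ 1 + (1/3) P cosθ
```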
References
Lecture on Lepton Universality by Michel Davier at the 1997 SLAC Summer Institute.
Electroweak Couplings, Lepton Universality, and the Origin of Mass: An Experimental Perspective, article by John Swain, from the Proceedings of the Third Latin American Symposium on High Energy Physics.
Electroweak theory | Michel parameters | [
"Physics"
] | 330 | [
"Physical phenomena",
"Electroweak theory",
"Fundamental interactions",
"Particle physics",
"Particle physics stubs"
] |
3,775,248 | https://en.wikipedia.org/wiki/Nychthemeron | Nychthemeron , occasionally nycthemeron or nuchthemeron, is a period of 24 consecutive hours. It is sometimes used, especially in technical literature, to avoid the ambiguity inherent in the term day.
It is the period of time that a calendar normally labels with a date, although a nychthemeron simply designates a time-span that can start at any time, not just midnight.
Etymology
It is a loanword from Ancient Greek νυχθήμερον (nychthēmeron), which appears in the New Testament. This is a noun use of the neuter singular form of the adjective νυχθήμερος (nychthēmeros), from νύξ (nyx, "night") + ἡμέρα (hēmera, "day").
In other languages
Some languages have a word for 24 hours, or more loosely a day plus a night in no particular order. Unlike a calendar date, only the length is defined, with no particular start or end. Furthermore, these words are considered basic and native to these languages, so unlike nychthemeron they are not associated with jargon.
Words for 24 hours are listed in the middle column. For comparison, the word for day, in the meaning of daytime, the sunlit state, the opposite of night, is also listed in the rightmost column:
The word dag, as in the Nordic languages, is etymologically the same as day in English.
References
Units of time
Calendars | Nychthemeron | [
"Physics",
"Mathematics"
] | 273 | [
"Calendars",
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
3,776,351 | https://en.wikipedia.org/wiki/Factorization%20of%20polynomials | In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems.
The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems:
When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982)
Modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.
Formulation of the question
Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants.
Factorization depends on the base field. For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots, implies that a polynomial with integer coefficients can be factored (with root-finding algorithms) into linear factors over the complex field C. Similarly, over the field of reals, the irreducible factors have degree at most two, while there are polynomials of any degree that are irreducible over the field of rationals Q.
The question of polynomial factorization makes sense only for coefficients in a computable field whose every element may be represented in a computer and for which there are algorithms for the arithmetic operations. However, this is not a sufficient condition: Fröhlich and Shepherdson give examples of such fields for which no factorization algorithm can exist.
The fields of coefficients for which factorization algorithms are known include prime fields (that is, the field of the rational numbers and the fields of the integers modulo a prime number) and their finitely generated field extensions. Integer coefficients are also tractable. Kronecker's classical method is interesting only from a historical point of view; modern algorithms proceed by a succession of:
Square-free factorization
Factorization over finite fields
and reductions:
From the multivariate case to the univariate case.
From coefficients in a purely transcendental extension to the multivariate case over the ground field (see below).
From coefficients in an algebraic extension to coefficients in the ground field (see below).
From rational coefficients to integer coefficients (see below).
From integer coefficients to coefficients in a prime field with p elements, for a well chosen p (see below).
Primitive part–content factorization
In this section, we show that factoring over Q (the rational numbers) and over Z (the integers) is essentially the same problem.
The content of a polynomial p ∈ Z[X], denoted "cont(p)", is, up to its sign, the greatest common divisor of its coefficients. The primitive part of p is primpart(p) = p/cont(p), which is a primitive polynomial with integer coefficients. This defines a factorization of p into the product of an integer and a primitive polynomial. This factorization is unique up to the sign of the content. It is a usual convention to choose the sign of the content such that the leading coefficient of the primitive part is positive.
For example,
is a factorization into content and primitive part.
Every polynomial q with rational coefficients may be written
where p ∈ Z[X] and c ∈ Z: it suffices to take for c a multiple of all denominators of the coefficients of q (for example their product) and p = cq. The content of q is defined as:
and the primitive part of q is that of p. As for the polynomials with integer coefficients, this defines a factorization into a rational number and a primitive polynomial with integer coefficients. This factorization is also unique up to the choice of a sign.
For example,
is a factorization into content and primitive part.
Gauss proved that the product of two primitive polynomials is also primitive (Gauss's lemma). This implies that a primitive polynomial is irreducible over the rationals if and only if it is irreducible over the integers. This implies also that the factorization over the rationals of a polynomial with rational coefficients is the same as the factorization over the integers of its primitive part. Similarly, the factorization over the integers of a polynomial with integer coefficients is the product of the factorization of its primitive part by the factorization of its content.
In other words, an integer GCD computation reduces the factorization of a polynomial over the rationals to the factorization of a primitive polynomial with integer coefficients, and the factorization over the integers to the factorization of an integer and a primitive polynomial.
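The content computation described here is just an integer gcd over the coefficients. A minimal sketch using the sign convention stated above (positive leading coefficient for the primitive part); the example polynomial is an assumption of this sketch, since the article's own worked example was lost:

```python
from math import gcd
from functools import reduce

def content_and_primitive(coeffs):
    """Split an integer polynomial (leading-first coefficient list) into
    content and primitive part; the sign of the content is chosen so that
    the primitive part has a positive leading coefficient."""
    c = reduce(gcd, (abs(a) for a in coeffs))
    if coeffs[0] < 0:
        c = -c
    return c, [a // c for a in coeffs]

# -10x^2 + 5x + 5  ->  content -5, primitive part 2x^2 - x - 1
cont, prim = content_and_primitive([-10, 5, 5])
print(cont, prim)   # -5 [2, -1, -1]
```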
Everything that precedes remains true if Z is replaced by a polynomial ring over a field F and Q is replaced by a field of rational functions over F in the same variables, with the only difference that "up to a sign" must be replaced by "up to the multiplication by an invertible constant in F". This reduces the factorization over a purely transcendental field extension of F to the factorization of multivariate polynomials over F.
Square-free factorization
If two or more factors of a polynomial are identical, then the polynomial is a multiple of the square of this factor. The multiple factor is also a factor of the polynomial's derivative (with respect to any of the variables, if several).
For univariate polynomials, multiple factors are equivalent to multiple roots (over a suitable extension field). For univariate polynomials over the rationals (or more generally over a field of characteristic zero), Yun's algorithm exploits this to efficiently factorize the polynomial into square-free factors, that is, factors that are not a multiple of a square, performing a sequence of GCD computations starting with gcd(f(x), f '(x)). To factorize the initial polynomial, it suffices to factorize each square-free factor. Square-free factorization is therefore the first step in most polynomial factorization algorithms.
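The single gcd step at the heart of this idea can be sketched with exact rational arithmetic from the standard library. This is only the first gcd(f, f′) of the procedure, not Yun's full square-free decomposition:

```python
from fractions import Fraction

def norm(p):
    """Drop leading zero coefficients (leading-first lists)."""
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def polydiv(a, b):
    """Quotient and remainder of a divided by b over the rationals."""
    b = norm([Fraction(c) for c in b])
    r = norm([Fraction(c) for c in a])
    qlen = max(len(r) - len(b) + 1, 1)
    q = [Fraction(0)] * qlen
    while len(r) >= len(b) and r != [Fraction(0)]:
        c = r[0] / b[0]
        d = len(r) - len(b)
        q[qlen - 1 - d] = c
        r = norm([r[i] - c * b[i] for i in range(len(b))] + r[len(b):])
    return q, r

def polygcd(a, b):
    """Monic polynomial gcd by the Euclidean algorithm."""
    a, b = norm([Fraction(c) for c in a]), norm([Fraction(c) for c in b])
    while b != [Fraction(0)]:
        a, b = b, polydiv(a, b)[1]
    return [c / a[0] for c in a]

def derivative(p):
    n = len(p) - 1
    return norm([Fraction(c) * (n - i) for i, c in enumerate(p[:-1])]) or [Fraction(0)]

# f = (x - 1)^2 (x + 2) = x^3 - 3x + 2; the square-free part is f / gcd(f, f')
f = [1, 0, -3, 2]
g = polygcd(f, derivative(f))   # x - 1: the repeated factor
sqfree, rem = polydiv(f, g)     # x^2 + x - 2 = (x - 1)(x + 2), remainder 0
print([int(c) for c in g], [int(c) for c in sqfree])   # [1, -1] [1, 1, -2]
```

The gcd picks out the repeated factor, and dividing it out leaves a polynomial with the same roots as f but all with multiplicity one.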
Yun's algorithm extends this to the multivariate case by considering a multivariate polynomial as a univariate polynomial over a polynomial ring.
In the case of a polynomial over a finite field, Yun's algorithm applies only if the degree is smaller than the characteristic, because, otherwise, the derivative of a non-zero polynomial may be zero (over the field with p elements, the derivative of a polynomial in xp is always zero). Nevertheless, a succession of GCD computations, starting from the polynomial and its derivative, allows one to compute the square-free decomposition; see Polynomial factorization over finite fields#Square-free factorization.
Classical methods
This section describes textbook methods that can be convenient when computing by hand. These methods are not used for computer computations because they use integer factorization, which is currently slower than polynomial factorization.
The two methods that follow start from a univariate polynomial with integer coefficients for finding factors that are also polynomials with integer coefficients.
Obtaining linear factors
All linear factors with rational coefficients can be found using the rational root test. If the polynomial to be factored is , then all possible linear factors are of the form , where is an integer factor of and is an integer factor of . All possible combinations of integer factors can be tested for validity, and each valid one can be factored out using polynomial long division. If the original polynomial is the product of factors at least two of which are of degree 2 or higher, this technique only provides a partial factorization; otherwise the factorization is complete. In particular, if there is exactly one non-linear factor, it will be the polynomial left after all linear factors have been factorized out. In the case of a cubic polynomial, if the cubic is factorizable at all, the rational root test gives a complete factorization, either into a linear factor and an irreducible quadratic factor, or into three linear factors.
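The test described above can be sketched directly as brute force over the divisor pairs; this toy version assumes a nonzero constant term (otherwise x itself is a factor and should be divided out first):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots r/s of an integer polynomial (leading-first):
    by the rational root test, r divides the constant term and s divides
    the leading coefficient. Assumes a nonzero constant term."""
    lead, const = coeffs[0], coeffs[-1]
    found = set()
    for r in divisors(const):
        for s in divisors(lead):
            for cand in (Fraction(r, s), Fraction(-r, s)):
                val = Fraction(0)
                for c in coeffs:        # Horner evaluation
                    val = val * cand + c
                if val == 0:
                    found.add(cand)
    return sorted(found)

# 2x^3 - 3x^2 - 3x + 2 = (x + 1)(2x - 1)(x - 2): roots -1, 1/2 and 2
print(rational_roots([2, -3, -3, 2]))
```

Each root r/s found this way corresponds to a linear factor (sx − r) that can be removed by polynomial long division, as the text describes.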
Kronecker's method
Kronecker's method is aimed to factor univariate polynomials with integer coefficients into polynomials with integer coefficients.
The method uses the fact that evaluating integer polynomials at integer values must produce integers. That is, if f(x) is a polynomial with integer coefficients, then f(a) is an integer as soon as a is an integer. There are only a finite number of possible integer values for a factor of f(a). So, if g(x) is a factor of f(x), the value g(a) must be one of the divisors of f(a).
If one searches for all factors of a given degree d, one can consider d + 1 values a_0, ..., a_d, which give a finite number of possibilities for the tuple (f(a_0), ..., f(a_d)). Each f(a_i) has a finite number of divisors, and each (d + 1)-tuple where the i-th entry is a divisor of f(a_i), that is, a tuple of the form (g(a_0), ..., g(a_d)), produces a unique polynomial g(x) of degree at most d, which can be computed by polynomial interpolation. Each of these polynomials can be tested for being a factor by polynomial division. Since there were finitely many values a_i and each f(a_i) has finitely many divisors, there are finitely many such tuples. So, an exhaustive search allows finding all factors of degree at most d.
For example, consider
f(x) = x^5 + x^4 + x^2 + x + 2.
If this polynomial factors over Z, then at least one of its factors p(x) must be of degree two or less, so p(x) is uniquely determined by three values. Thus, we compute three values f(0) = 2, f(1) = 6 and f(−1) = 2. If one of these values is 0, we have a linear factor. If the values are nonzero, we can list the possible factorizations for each. Now, 2 can only factor as
1×2, 2×1, (−1)×(−2), or (−2)×(−1).
Therefore, if a second degree integer polynomial factor exists, it must take one of the values
p(0) = 1, 2, −1, or −2
and likewise for p(1). There are eight factorizations of 6 (four each for 1×6 and 2×3), making a total of 4×4×8 = 128 possible triples (p(0), p(1), p(−1)), of which half can be discarded as the negatives of the other half. Thus, we must check 64 explicit integer polynomials as possible factors of f(x). Testing them exhaustively reveals that
p(x) = x^2 + x + 1, constructed from (p(0), p(1), p(−1)) = (1, 3, 1), factors f(x).
Dividing f(x) by p(x) gives the other factor q(x) = x^3 − x + 2, so that f(x) = p(x)q(x).
Now one can test recursively to find factors of p(x) and q(x), in this case using the rational root test. It turns out they are both irreducible, so the irreducible factorization of f(x) is:
f(x) = p(x)q(x) = (x^2 + x + 1)(x^3 − x + 2)
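The degree-2 search of this example can be automated: enumerate divisor triples for (f(0), f(1), f(−1)) = (2, 6, 2), interpolate the quadratic through each triple, and test it by exact division. This is a brute-force illustration of Kronecker's idea, not an efficient implementation:

```python
from fractions import Fraction
from itertools import product

f = [1, 1, 0, 1, 1, 2]   # f(x) = x^5 + x^4 + x^2 + x + 2, leading-first

def polyval(p, x):
    v = 0
    for c in p:
        v = v * x + c
    return v

def signed_divisors(n):
    pos = [d for d in range(1, abs(n) + 1) if n % d == 0]
    return pos + [-d for d in pos]

def polydiv(a, b):
    """Long division a / b over the rationals (leading-first lists)."""
    r = [Fraction(c) for c in a]
    q = []
    while len(r) >= len(b):
        c = r[0] / b[0]
        q.append(c)
        r = [r[i + 1] - c * b[i + 1] for i in range(len(b) - 1)] + r[len(b):]
    return q, r

def find_small_factor(f):
    """Kronecker search for an integer factor of degree <= 2,
    interpolating candidate value triples at x = 0, 1 and -1."""
    vals = [polyval(f, 0), polyval(f, 1), polyval(f, -1)]
    for a, b, c in product(*(signed_divisors(v) for v in vals)):
        # quadratic p with p(0) = a, p(1) = b, p(-1) = c
        alpha, beta = Fraction(b + c, 2) - a, Fraction(b - c, 2)
        if alpha.denominator != 1 or beta.denominator != 1:
            continue                     # not an integer polynomial
        p = [int(alpha), int(beta), a]
        while len(p) > 1 and p[0] == 0:  # drop degenerate leading zeros
            p = p[1:]
        if len(p) == 1:
            continue                     # constants are trivial factors
        q, r = polydiv(f, p)
        if all(x == 0 for x in r) and all(x.denominator == 1 for x in q):
            return p, [int(x) for x in q]
    return None

print(find_small_factor(f))   # ([1, 1, 1], [1, 0, -1, 2]): x^2+x+1 and x^3-x+2
```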
Modern methods
Factoring over finite fields
Factoring univariate polynomials over the integers
If f(x) is a univariate polynomial over the integers, assumed to be content-free and square-free, one starts by computing a bound B such that any factor g(x) has coefficients of absolute value bounded by B. This way, if m is an integer larger than 2B, and if g(x) is known modulo m, then g(x) can be reconstructed from its image mod m.
The Zassenhaus algorithm proceeds as follows. First, choose a prime number p such that the image of f(x) mod p remains square-free, and of the same degree as f(x). Then factor f(x) mod p. This produces integer polynomials f_1(x), ..., f_r(x) whose product matches f(x) mod p. Next, apply Hensel lifting; this updates the f_i(x) in such a way that their product matches f(x) mod p^a, where a is chosen large enough that p^a exceeds 2B: thus each f_i(x) corresponds to a well-defined integer polynomial. Modulo p^a, the polynomial f(x) has 2^r factors (up to units): the products of all subsets of {f_1(x), ..., f_r(x)}. These factors modulo p^a need not correspond to "true" factors of f(x) in Z[x], but we can easily test them by division in Z[x]. This way, all irreducible true factors can be found by checking at most 2^r cases, reduced to 2^(r−1) cases by skipping complements. If f(x) is reducible, the number of cases is reduced further by removing those f_i(x) that appear in an already found true factor. The Zassenhaus algorithm processes each case (each subset) quickly, however, in the worst case, it considers an exponential number of cases.
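The caveat that modular factors need not correspond to true integer factors is classically illustrated by x^4 + 1, which is irreducible over the integers yet factors modulo every prime. The sketch below finds its monic quadratic factors over GF(5) by trial division against all candidates; real implementations factor over finite fields with far better algorithms:

```python
def polyrem_mod_p(a, b, p):
    """Remainder of a divided by monic b over GF(p) (leading-first lists)."""
    r = [c % p for c in a]
    while len(r) >= len(b):
        c = r[0]
        r = [(r[i + 1] - c * b[i + 1]) % p for i in range(len(b) - 1)] + r[len(b):]
    return r

def monic_quadratic_factors(f, p):
    """All monic quadratics dividing f over GF(p), by brute force."""
    return [[1, b1, b0]
            for b1 in range(p) for b0 in range(p)
            if not any(polyrem_mod_p(f, [1, b1, b0], p))]

f = [1, 0, 0, 0, 1]                    # x^4 + 1, irreducible over the integers
print(monic_quadratic_factors(f, 5))   # [[1, 0, 2], [1, 0, 3]]: (x^2+2)(x^2+3) mod 5
```

Neither modular factor lifts to an integer factor; only their product (the full subset) does, which is exactly the recombination problem the Zassenhaus algorithm must solve.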
The first polynomial time algorithm for factoring rational polynomials was discovered by Lenstra, Lenstra and Lovász and is an application of the Lenstra–Lenstra–Lovász lattice basis reduction (LLL) algorithm.
A simplified version of the LLL factorization algorithm is as follows: calculate a complex (or p-adic) root α of the polynomial f(x) to high precision, then use the Lenstra–Lenstra–Lovász lattice basis reduction algorithm to find an approximate linear relation between 1, α, α^2, α^3, ... with integer coefficients, which might be an exact linear relation and hence a polynomial factor of f(x). One can determine a bound for the precision that guarantees that this method produces either a factor, or an irreducibility proof. Although this method finishes in polynomial time, it is not used in practice because the lattice has high dimension and huge entries, which makes the computation slow.
The exponential complexity in the Zassenhaus algorithm comes from a combinatorial problem: how to select the right subsets of {f_1(x), ..., f_r(x)}. State-of-the-art factoring implementations work in a manner similar to Zassenhaus, except that the combinatorial problem is translated to a lattice problem that is then solved by LLL. In this approach, LLL is not used to compute coefficients of factors, but rather to compute vectors with entries in {0,1} that encode the subsets of {f_1(x), ..., f_r(x)} corresponding to the irreducible true factors.
Factoring over algebraic extensions (Trager's method)
We can factor a polynomial p(x) ∈ K[x], where the field K is a finite extension of the rationals Q. First, using square-free factorization, we may suppose that the polynomial is square-free. Next we define the quotient ring L = K[x]/(p(x)) of degree n = deg p; this is not a field unless p(x) is irreducible, but it is a reduced ring since p(x) is square-free. Indeed, if
p(x) = p_1(x) ⋯ p_k(x)
is the desired factorization of p(x), the ring decomposes uniquely into fields as:
L = K[x]/(p_1(x)) × ⋯ × K[x]/(p_k(x))
We will find this decomposition without knowing the factorization. First, we write L explicitly as an algebra over Q: we pick a random element α ∈ L, which generates L over Q with high probability by the primitive element theorem. If this is the case, we can compute the minimal polynomial q(y) of α over Q, by finding a Q-linear relation among 1, α, ..., α^n. Using a factoring algorithm for rational polynomials, we factor q(y) into irreducibles in Q[y]:
q(y) = q_1(y) ⋯ q_r(y)
Thus we have:
L = Q[α] = Q[y]/(q(y)) = Q[y]/(q_1(y)) × ⋯ × Q[y]/(q_r(y)),
where α corresponds to y. This must be isomorphic to the previous decomposition of L.
The generators of L are x along with the generators of K over Q; writing these as polynomials in α, we can determine the embeddings of x and of K into each component Q[y]/(q_i(y)). By finding the minimal polynomial of x in each component, we compute the factors p_i(x), and thus factor p(x) over K.
Numerical factorization
"Numerical factorization" refers commonly to the factorization of polynomials with real or complex coefficients, whose coefficients are only approximately known, generally because they are represented as floating point numbers.
For univariate polynomials with complex coefficients, factorization can easily be reduced to numerical computation of polynomial roots and multiplicities.
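This reduction to root finding can be sketched with a classical simultaneous root-finding iteration (Durand-Kerner, also called the Weierstrass method). The example below is an illustrative toy with no deflation, root clustering, or multiplicity handling, which a robust numerical factorizer would need:

```python
def durand_kerner(coeffs, iterations=200):
    """All complex roots of a monic polynomial (leading-first coefficients),
    found with the Durand-Kerner simultaneous iteration."""
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # customary starting points

    def f(x):
        v = 0j
        for c in coeffs:   # Horner evaluation
            v = v * x + c
        return v

    for _ in range(iterations):
        new = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - f(r) / denom)
        roots = new
    return roots

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = durand_kerner([1, -6, 11, -6])
print(sorted(round(r.real, 6) for r in roots))   # [1.0, 2.0, 3.0]
```

Each recovered root z yields a linear factor (x − z), and multiplying conjugate pairs back together gives the real quadratic factors when the input has real coefficients.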
In the multivariate case, a random infinitesimal perturbation of the coefficients produces with probability one an irreducible polynomial, even when starting from a polynomial with many factors. So, the very meaning of numerical factorization needs to be clarified precisely.
Let p be a polynomial with complex coefficients with an irreducible factorization
p = f_1^{m_1} f_2^{m_2} ⋯ f_r^{m_r},
where the exponents m_i are positive integers and the factors f_i are irreducible polynomials with complex coefficients. Assume that p is approximated through a polynomial p̃ whose coefficients are close to those of p. The exact factorization of p̃ is pointless, since it is generally irreducible. There are several possible definitions of what can be called a numerical factorization of p.
If r and the multiplicities m_i are known, an approximate factorization consists of finding a polynomial close to p̃ that factors as above. If one does not know the factorization scheme, identifying it becomes necessary. For example, the number of irreducible factors of a polynomial is the nullity of its Ruppert matrix. Thus the multiplicities can be identified by square-free factorization via numerical GCD computation and rank-revealing on Ruppert matrices.
Several algorithms have been developed and implemented for numerical factorization, which remains an ongoing subject of research.
See also
Factorization, for elementary heuristic methods and explicit formulas
Swinnerton-Dyer polynomials, a family of polynomials having worst-case runtime for the Zassenhaus method
Bibliography
Van der Waerden, Algebra (1970), trans. Blum and Schulenberger, Frederick Ungar.
Further reading
Polynomials
Computer algebra
Factorization
Polynomial factorization algorithms | Factorization of polynomials | [
"Mathematics",
"Technology"
] | 3,630 | [
"Polynomials",
"Computer algebra",
"Computational mathematics",
"Computer science",
"Arithmetic",
"Factorization",
"Algebra"
] |
32,918,892 | https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%208 | The transition from Windows 7 to Windows 8 introduced a number of new features across various aspects of the operating system. These include a greater focus on optimizing the operating system for touchscreen-based devices (such as tablets) and cloud computing.
Development platform
Language and standards support
Windows 8 introduces the new Windows Runtime (WinRT) platform, which can be used to create a new type of application officially known as Windows Store apps and commonly called Metro-style apps. Such apps run within a secure sandbox and share data with other apps through common APIs. WinRT, being a COM-based API, allows for the use of various programming languages to code apps, including C++, C++/CX, C#, Visual Basic .NET, or HTML5 and JavaScript. Metro-style apps are packaged and distributed via APPX, a new file format for package management. Unlike desktop applications, Metro-style apps can be sideloaded, subject to licensing conditions. Windows 8.1 Update allows for sideloading apps on all Windows 8.1 Pro devices joined to an Active Directory domain.
In Windows 8 up to two apps may snap to the side of a widescreen display to allow multi-tasking, forming a sidebar that separates the apps. In Windows 8.1, apps can continually be resized to the desired width. Snapped apps may occupy half of the screen. Large screens allow up to four apps to be snapped. Upon launching an app, Windows allows the user to pick which snapped view the app should open into.
The term "Metro-style apps" referred to "Metro", a design language prominently used by Windows 8 and other recent Microsoft products. Reports surfaced that Microsoft employees were told to stop using the term due to potential trademark issues with an unspecified partner. A Microsoft spokesperson however, denied these reports and stated that "Metro-style" was merely a codename for the new application platform.
Windows 8 introduces APIs to support near field communication (NFC) on Windows 8 devices, allowing functionality like launching URLs/applications and sharing of information between devices via NFC.
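This functionality is exposed through the `Windows.Networking.Proximity` namespace; a hedged C# sketch of publishing a URI over NFC (the URI is illustrative, and `GetDefault` returns null on hardware without a proximity radio):

```csharp
using System;
using Windows.Networking.Proximity;

// Acquire the default proximity (NFC) device, if present.
ProximityDevice device = ProximityDevice.GetDefault();
if (device != null)
{
    // Publish a URI that a tapped device can receive and open.
    // The returned ID can later be passed to StopPublishingMessage.
    long publishId = device.PublishUriMessage(new Uri("http://example.com"));
}
```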
Windows Store
Windows Store is a digital distribution platform built into Windows 8, which, in a manner similar to Apple's App Store and Google Play, allows for the distribution and purchase of apps designed for Windows 8. Developers can still advertise desktop software through Windows Store as well. To ensure that they are secure and of a high quality, Windows Store is the only means of distributing WinRT-based apps for consumer-oriented versions of Windows 8.
In Windows 8.1, Windows Store features a redesigned interface with improved app discovery and recommendations and offers automatic updates for apps.
Shell and user interface
Windows 8 features a redesigned user interface built upon the Metro design language, with optimizations for touchscreens.
Metro-style apps can either run in a full-screen environment, or be snapped to the side of a screen alongside another app or the desktop; snapping requires a screen resolution of 1366×768 or higher. Windows 8.1 lowers the snapping requirement to a screen resolution of 1024x768. Users can switch between apps and the desktop by clicking on the top left corner or by swiping the left side of the touchscreen to invoke a sidebar that displays all currently opened Metro-style apps. Right-clicking on the upper left corner provides a context menu with options to switch between open apps. The traditional desktop is accessible from a tile on the Start screen or by launching a desktop app. The shortcut cycles through all programs, regardless of type.
The interface also incorporates a taskbar on the right side of the screen known as "the charms" (lowercase), which can be accessed from any app or the desktop by sliding from the right edge of a touchscreen or compatible touchpad, by moving the mouse cursor to one of the right corners of the screen, or by pressing . The charms include Search, Share, Start, Devices and Settings charms. The Start charm invokes or dismisses the Start screen. Other charms invoke context-sensitive sidebars that can be used to access app and system functionality. Because of the aforementioned changes involving the use of hot corners, user interface navigation in Windows 8 is fundamentally different when compared with previous versions of Windows. To assist new users of the operating system, Microsoft incorporated a tutorial that appears during the installation of Windows 8, and also during the first sign-in of a new user account, which visually instructs users to move their mouse cursor into any corner of the screen (or swipe the corners on devices with touchscreens) to interact with the operating system. The tutorial can be disabled so that it does not appear for new user accounts. Windows 8.1 introduces navigation hints with instructions that are displayed during the first use of the operating system, and also includes a help and support app.
In Windows 8.1, the aforementioned hotspots in the upper right and the upper left corners can be disabled.
Pressing or right-clicking on the bottom left corner of the screen opens the Quick Link menu. This menu contains shortcuts to frequently used areas such as Control Panel, File Explorer, Programs and Features, Run, Search, Power Options and Task Manager. In Windows 8.1, the Quick Link menu includes options to shut down or restart a device.
Windows 8.1 Update introduced changes that facilitate mouse-oriented means of switching between and closing Metro-style apps, patterned upon the mechanics used by desktop programs in the Windows user interface. In lieu of the recent apps sidebar, icons for opened apps can be displayed on the taskbar; as with desktop programs, shortcuts to apps can also be pinned to the taskbar. When a mouse is connected, an auto-hiding titlebar with minimize and close buttons is displayed within apps when the mouse is moved toward the top of the screen.
Bundled apps
A number of apps are included in the standard installation of Windows 8, including Mail (an email client), People (a contact manager), Calendar (a calendaring app), Messaging (an IM client), Photos (an image viewer), Music (an audio player), Video (a video player), Camera (a webcam or digital camera client), SkyDrive, Reader (an e-book reader), and six other apps that expose Bing services (Search, News, Finance, Weather, Travel and Sports).
Windows 8.1 adds Calculator, Alarm Clock, Sound Recorder, Reading List, Food & Drink, Health & Fitness, Help + Tips, Scan, and a file manager integrated in the SkyDrive app.
Windows 8 also includes a Metro-style system component called PC Settings which exposes a small portion of Control Panel settings. Windows 8.1 improves this component to include more options that were previously exclusive to Control Panel. Windows 8.1 Update adds additional options to PC Settings.
Start screen
Windows 8 introduces a new form of start menu called the Start screen, which resembles the home screen of Windows Phone, and is shown in place of the desktop on startup. The Start screen serves as the primary method of launching applications and consists of a grid of app tiles which can be arranged into columnar groups; groups can be arranged with or without group names. App tiles can either be small (taking up 1 square) or large (taking up 2 squares) in size and can also display dynamic content provided by their corresponding apps, such as notifications and slide shows. Users can arrange individual app tiles or entire groups. An additional section of the Start screen called "All Apps" can be accessed via a right click from the mouse or an upward swipe, and displays all installed apps categorized by their names. A semantic zoom feature is available for both the Start screen and the "All Apps" view, which enables users to target a specific area or group on the screen. Apps can also be uninstalled directly from the Start screen.
Windows 8.1 makes the following changes to the Start screen:
The "All Apps" section, now accessed with a hidden downward arrow or upward touch gesture, features a visible search bar which can display results for apps or other items. The section is dismissed by a similar button with an upward arrow. An option to display the "All Apps" section automatically instead of the Start screen is available.
On high-resolution display monitors with sufficiently large physical screen sizes, an option to display additional tiles on the Start screen is available.
Start screen tiles can be locked in place to prevent accidental manipulation of tiles.
The uninstall command allows Windows Store apps to be uninstalled from multiple computers.
More size options for live tiles on Start screen: small, medium, wide, and large. The "small" size is one quarter of the default size in Windows 8.
Expanded color options on the Start screen, which now allows users to customize a color and a shade of one's own choice instead of choosing from limited colors.
New background options for the Start screen, including animated backgrounds and the ability to use the desktop wallpaper.
Enhanced synchronization settings, including those for app tile arrangement, tile sizes, and background.
In a multi-monitor configuration, Windows 8.1 can optionally display the Start screen only on the primary display monitor instead of the currently active monitor when the key is pressed.
Multiple desktop applications can be selected from the Start screen and pinned to the taskbar at once, or multiple desktop applications and Metro-style apps can be selected from the "All Apps" view and pinned to the Start screen at once. Windows 8.1 Update augments this capability by allowing Metro-style apps to be pinned to the taskbar. The Start menu in previous versions of Windows allowed only one desktop application to be selected and/or pinned at a time.
By default, Windows 8.1 no longer displays recently installed apps and their related entries on the Start screen; users must manually pin these items.
Windows 8.1 introduces options to categorize apps listed within the "All Apps" section of the Start screen. Apps can be categorized by their name, the date they were installed, their frequency of use, or based on their categories. When sorted by category, desktop applications can optionally be prioritized within the interface. Windows 8.1 Update allows additional app tiles to be displayed within the "All Apps" section of the Start screen.
The ability to highlight recently installed apps has been enhanced in Windows 8.1 Update, which now displays the total number of recently installed apps within the lower-left corner of the Start screen in addition to highlighting. In contrast, the Start menu interface included in previous versions of Windows only highlighted apps. Windows 8.1 Update also enables semantic zoom upon clicking or tapping the title of an app category.
Windows 8.1 reverts two changes that were featured in Windows 8. Windows 8 removed the Start button on the taskbar in favor of other ways of invoking the Start screen; Windows 8.1 restores this button. Windows 8 also showed the Start screen upon logon, as opposed to other editions of Windows that show the desktop; in Windows 8.1, users may now choose which one to see first. Windows 8.1 Update boots to the desktop by default on non-tablet devices and introduces the ability to switch to the taskbar from the Start screen or from an open Metro-style app by directing the mouse cursor toward the bottom of the screen.
Windows 8.1 introduces a new "slide to shutdown" option which allows users to drag their partially revealed lock screen image toward the bottom of the screen to shut down the operating system. Windows 8.1 Update introduces a visible power button on the Start screen. This power button does not appear on all hardware device types. By default, new account profiles in Windows 8.1 Update also receive four additional tiles pinned to the Start screen: This PC, PC Settings, Documents, and Pictures. In Windows RT, only the PC Settings tile is added.
Search
In Windows 8, searching from the Start screen or clicking on the Search charm will display search results within a full-screen interface. Unlike previous versions of Windows where searching from the Start menu returned results from multiple sources simultaneously, Windows 8 searches through individual categories: apps, settings, and files. By default, Windows 8 searches for apps after a user begins searching from the Start screen or Search charm, but can also search other categories from the user interface or via keyboard shortcuts. Pressing opens the Search charm to search for apps, searches for files, and searches for settings. Search queries can also be redirected between specific categories or apps after being entered. When searching for apps, Windows 8 will display a list of apps that support the Search charm; frequently used apps will be prioritized and users can pin individual apps so that they always appear. The Search charm can also search directly within apps if a user redirects an entered search query to a specific app or presses from within an app that is already open. When searching for files, Windows 8 will highlight words or phrases that match a search query and provide suggestions based on the content and properties of files that appear. Information about the files themselves, such as associated programs and sizes, appear directly beneath filenames. If a user hovers over a file with the mouse cursor or long presses with a finger a tooltip will appear and display additional information.
In Windows 8.1, searching no longer opens a full-screen interface; results are instead displayed in a Metro-style flyout interface. Windows 8.1 also reinstates unified local search results, and can optionally provide results from Bing. Dubbed "Smart Search," Windows 8.1 and Bing can optionally analyze a user's search habits to return relevant content that is stored locally and from the Internet. When enabled, Smart Search exposes additional search categories within the user interface: web images and web videos, and can be accessed via a new keyboard shortcut, . A new full screen "hero" interface powered by Bing can display aggregated multimedia (such as photos, YouTube videos, songs/albums on Xbox Music) and other content (such as news articles and Wikipedia entries) related to a search query. Like its predecessor, Windows 8.1 allows users to search through setting and file categories, but the option to search through a category for apps is removed from the interface; the keyboard shortcut previously associated with this functionality, , now displays unified search results. The Search charm also can no longer search from within apps directly or display a list of compatible apps. To search for content within apps, users must first open an app and, if available, use a search feature from within that app's interface.
Windows 8.1 Update enhances the Bing Smart Search feature by providing support for natural language queries, which can detect misspellings and display apps or settings relevant to a query. For example, typing "get apps for Windows" will display a shortcut to the Windows Store. Windows 8.1 Update also introduces a visible search button on the Start screen that acts as a shortcut to the Metro-style flyout interface.
The Kind property introduced in Windows Vista to express a more friendly notion of file type has been expanded to include support for Playlist (where items are playlists) in Windows 8. In Windows 8.1, Unknown (where the kind of item is not known) is also introduced.
User login
Windows 8 introduces a redesigned lock screen interface based on the Metro design language. The lock screen displays a customizable background image, the current date and time, notifications from apps, and detailed app status or updates. Two new login methods optimized for touch screens are also available: a four-digit PIN, and a "picture password," which allows users to log in by performing certain gestures on a selected picture. These gestures take into account the shape, the start and end points, as well as the direction. However, the shapes and gestures are limited to tapping and tracing a line or circle. Microsoft found that limiting the gestures increased the speed of sign-ins by three times compared to allowing freeform methods. Wrong gestures will always deny a login, and the PC locks out after five unsuccessful attempts until a text password is provided.
Windows 8.1 introduces the ability to display a photo slide show on the lock screen. The feature can display images from local or remote directories, and includes additional options to use photos optimized for the current screen resolution, to disable the slide show while the device is running on battery power, and to display the lock screen slide show instead of turning off the screen after a period of user inactivity. The lock screen can also display interactive toast notifications. As examples, users can answer calls or instant messages received from Skype contacts, or dismiss alarm notifications from the lock screen. Users can also take photos without dismissing the lock screen.
Notifications
Windows 8 introduces new forms of notifications for Metro-style apps and for certain events in File Explorer.
Toast notifications: alert the user to specific events, such as the insertion of removable media
Tile notifications: display dynamic information on the Start screen, such as weather forecasts and news updates
Badge notifications: display numeric counters with a value from 1 to 99 that indicate certain events, such as the number of unread e-mail messages or the number of available updates for a particular app. Additional information may also be displayed by a badge notification, such as the status of an Xbox Music app.
The PC Settings component includes options to globally disable all toast notifications, app notifications on the lock screen, or notification sounds; notifications can also be disabled on a per-app basis. In the Settings charm, Windows 8 provides additional options to suppress toast notifications for 1-hour, 3-hour, or 8-hour intervals.
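Apps raise toast notifications through the `Windows.UI.Notifications` WinRT APIs; a minimal C# sketch using one of the stock templates (the notification text is illustrative):

```csharp
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

// Fetch the XML for a simple one-line text toast template.
XmlDocument toastXml =
    ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastText01);

// Fill in the template's text node.
toastXml.GetElementsByTagName("text")[0]
    .AppendChild(toastXml.CreateTextNode("Removable media inserted"));

// Display the toast on behalf of the calling app.
ToastNotificationManager.CreateToastNotifier()
    .Show(new ToastNotification(toastXml));
```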
Windows 8.1 introduces a Quiet Hours feature, also available on Windows Phone, that allows users to suppress notifications based on the time of day (e.g., notifications can be disabled from 12:00 AM to 6:00 PM).
Microsoft account integration
Windows 8 allows users to link profiles with a Microsoft account to provide additional functionality, such as the synchronization of user data and settings, including those belonging to the desktop, and allows for integration with other Microsoft services such as Xbox Live, Xbox Music, Xbox Video (for gaming and multimedia) and SkyDrive online file storage.
Display screen
Windows 8 includes improved support for multi-monitor configurations; the taskbar can now optionally be shown on multiple displays, and each display can also show its own dedicated taskbar. In addition, options are available which can prevent taskbar buttons from appearing on certain monitors. Wallpapers can also be spanned across multiple displays, or each display can have its own separate wallpaper.
Windows 8.1 includes improved support for high-resolution monitors. A desktop scaling feature now helps resize the items on the desktop to solve the visibility problems on screens with a very high native resolution. Windows 8.1 also introduces per-display DPI scaling, and provides an option to scale to 200%.
File Explorer
Windows Explorer, renamed File Explorer, now incorporates a ribbon toolbar, designed to bring forward the most commonly used commands for easy access. The "Up" button (which navigates up one level in the folder hierarchy) that was removed from Explorer after Windows XP has also been restored. Additionally, File Explorer features a redesigned preview pane that takes advantage of widescreen layouts. File Explorer also provides a built-in function for mounting ISO, IMG, and VHD files as virtual drives. For easier management of files and folders, Windows 8 introduces the ability to move selected files or folders via drag and drop from a parent folder into a subfolder listed within the breadcrumb hierarchy of the address bar in File Explorer.
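The same mounting capability is scriptable via the storage cmdlets introduced alongside Windows 8; a sketch with an illustrative image path:

```powershell
# Mount an ISO as a virtual drive; it receives the next free drive letter.
Mount-DiskImage -ImagePath "C:\Images\install.iso"

# Determine which volume (drive letter) the image was assigned.
Get-DiskImage -ImagePath "C:\Images\install.iso" | Get-Volume

# Unmount the image when finished.
Dismount-DiskImage -ImagePath "C:\Images\install.iso"
```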
Progress windows for file operations have also been redesigned, offering the ability to show multiple operations at once, a graph for tracking transfer speeds, and the ability to pause and resume a file transfer. A new interface has also been introduced for managing file name collisions in a file operation, allowing users to easily control which conflicting files are copied.
Libraries introduced in Windows 7 can now have their icons changed through the user interface; previously, users had to change icons by manually editing configuration files. With Windows 8.1, libraries can also include removable storage devices; previously, adding removable storage devices to libraries was not supported. Windows 8.1, however, no longer creates any default libraries for new users, and does not display the Libraries listing in File Explorer by default. Instead, Windows 8.1 introduces shortcuts to the default user profile folders (Documents, Downloads, Pictures, etc.) within the This PC location of File Explorer. Libraries can be re-enabled in the Options menu.
HomeGroup has been updated in Windows 8 to display in the navigation pane the user profile photo of each member sharing content in the homegroup; in Windows 7, only a generic user icon for each user was displayed in the navigation pane.
Internet Explorer
Windows 8 ships with Internet Explorer 10, which can run as either a desktop program (where it operates similarly to Internet Explorer 9), or as an app with a new full-screen interface optimized for use on touchscreens. Internet Explorer 10 also contains an integrated version of Flash Player, which is available in full on the desktop and in a limited form within the "Metro" app.
Windows 8.1 ships with Internet Explorer 11 which includes tab syncing, WebGL and SPDY support, along with expanded developer tools. The Metro version also adds access to favorites and split-screen snapping of multiple tabs; an additional option to always display the address bar and tabs is also available. The Metro version can also detect and highlight phone numbers on a web page and turn them into clickable links that, when clicked, initiate a call with a compatible app such as Skype.
Task Manager
Windows 8 includes an overhauled version of Task Manager, which features the following changes:
Task Manager defaults to a simple view which only displays a list of running programs that have a visible window. The expanded view is an updated version of the previous Task Manager with several tabs.
Resource utilization in the Processes tab is shown using a heat map, with darker shades of yellow representing heavier use.
The Performance tab is split into CPU, memory, disk, Ethernet, and wireless network (if applicable) sections. There are overall graphs for each, and clicking on one shows details for that particular resource.
The CPU tab no longer displays individual graphs for every logical processor on the system by default. It may show data for each NUMA node.
The CPU tab displays simple percentages on heat-mapping tiles to show utilization for systems with many (64 or more, up to 640) logical processors. The color used for these heat maps is blue, with darker shades again indicating heavier utilization.
Hovering the cursor over any logical processor's graph shows the NUMA node of that processor and its ID.
The new Startup tab lists startup programs and their impact on boot time. Windows Vista included a feature to manage startup applications that was removed in Windows 7.
The Processes tab now lists application names, application status, and overall usage data for CPU, memory, hard disk, and network resources for each process. A new option to restart File Explorer upon its selection is provided.
Task Manager recognizes when a Windows Runtime application is in "Suspended" status.
The process information found in the Processes tab of the older Task Manager can be found in the Details tab.
Touch keyboard
Windows 8 introduces a revised virtual (also known as on-screen) keyboard interface optimized for touchscreen devices that includes wider spacing between keys and is designed to prevent common typing errors that occur while using touchscreens. Pressing and holding down a key reveals related keys which can be accessed via a press or swipe, and suggestions for incomplete words are available. Emoji characters are also supported. Windows 8.1 introduces the ability to swipe the space bar in the desired direction of a suggested word to switch between on-screen suggestions.
Windows 8.1 Update introduces a new gesture that allows users to tap twice and hold the second tap to drag and drop highlighted text or objects. A visible option to hide or show the virtual keyboard is also available.
Password input
Windows 8 displays a "peek" button for password text boxes which optionally allows users to view passwords as they are entered, in order to ensure that they are typed correctly. The feature can be disabled via Group Policy.
Infrastructure
File History
File History is a continuous data protection component. File History automatically creates incremental backups of files stored in Libraries, including those for users participating in a HomeGroup, and user-specified folders to a different storage device (such as another internal or external hard drive, Storage Space, or network share). Specific revisions of files can then be tracked and restored using the "History" functions in File Explorer. File History replaces both Backup and Restore and Shadow Copy (known in Windows Explorer as "Previous Versions") as the main backup tool of Windows 8. Unlike Shadow Copy, which performs block-level tracking of files, File History utilizes the USN Journal to track changes, and simply copies revisions of files to the backup location. Unlike Backup and Restore, File History cannot back up files encrypted with EFS.
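Because File History tracks changes through the NTFS change journal rather than volume snapshots, the journal it depends on can be inspected with `fsutil` from an elevated prompt; a sketch:

```powershell
# Show the state of the USN change journal that File History reads
# to detect which files have been modified since the last backup cycle.
fsutil usn queryjournal C:
```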
Hardware support
Windows 8 adds native support for USB 3.0, which allows for faster data transfers and improved power management with compatible devices. This native stack includes support for the newer, more efficient USB Attached SCSI (UAS) protocol, which is turned on by default even for USB 2.0 devices, although these must have supporting firmware/hardware to take advantage of it. Windows 8.1 enhanced support for the power-saving features of USB storage devices, though some poorly implemented hardware degraded the user experience with hangs and disconnects.
Support for Advanced Format hard drives without emulation is included for the first time.
A port of Windows for the ARM architecture was also created for Windows 8. Known as Windows RT, it is specifically optimized for mobile devices such as tablets. Third-party software on Windows RT is limited to Windows Store apps, but it comes with a preinstalled version of Office 2013 specially redesigned for touchscreen use.
Windows 8.1 improves hardware support with DirectX 11.2.
Windows 8.1 adds native support for NVM Express.
Windows 8 adds support for UEFI Secure Boot and TPM 2.0. UEFI with Secure Boot enabled is a certification requirement for computers shipped with Windows 8.
Installation
Alongside the existing WinPE-based Windows Setup (which is used for installations that are initiated by booting from DVD, USB, or network), Upgrade Assistant is offered to provide a simpler and faster process for upgrading to Windows 8 from previous versions of Windows. The program runs a compatibility check to scan the device's hardware and software for Windows 8 compatibility, and then allows the user to purchase Windows 8, download it, generate installation media on a DVD or USB flash drive, and install it. The new installation process also allows users to transfer user data into a clean installation of Windows. A similar program, branded as Windows 8 Setup, is used for installations where the user already has a product key.
Windows 8 implements OEM Activation 3.0, which allows Microsoft to digitally distribute Windows licenses to original equipment manufacturers (OEMs). Windows 8 devices store product keys directly in firmware rather than printed on a Certificate of Authenticity (CoA) sticker. This new system is designed to prevent OEM product keys from being used on computers they are not licensed for, and also allows the installer to automatically detect and accept the product key in the event of re-installation.
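On OA 3.0 systems the embedded key can be read back through the Software Licensing WMI service; a hedged sketch (the query returns an empty value on machines without a firmware-embedded key):

```powershell
# Query the firmware-embedded (OA3) Windows product key via WMI.
wmic path SoftwareLicensingService get OA3xOriginalProductKey
```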
Windows 8.1 Update adds a new installation mode known as "WIMBoot", where the WIM image that contains the Windows installation is left compressed rather than being extracted, and the system is configured to use files directly from within the system image. This installation method was primarily designed to reduce the footprint of the Windows installation on devices with small amounts of storage. The system image also doubles as the recovery image, speeding up Refresh and Reset operations. It is only supported in systems with a Unified Extensible Firmware Interface (UEFI), where Windows is located on a solid-state drive or eMMC.
Networking
Windows 8 incorporates improved support for mobile broadband as a "first-class" method of internet connectivity. Upon the insertion of a SIM card, the operating system will automatically determine the user's carrier and configure relevant connection settings using an Access Point Name database. The operating system can also monitor mobile data usage, and changes its behavior accordingly to reduce bandwidth use on metered networks. Carriers can also offer their own dedicated Windows Store apps for account management, which can also be installed automatically as a part of the connection process. This functionality was demonstrated with an AT&T app, which could also display monthly data usage statistics on its live tile. Windows 8 also reduces the need for third-party drivers and software to implement mobile broadband by providing a generic driver, and by providing an integrated airplane mode option.
Windows 8 supports geolocation. Windows 8.1 adds support for NFC printing, mobile broadband tethering, auto-triggered VPN and geofencing.
Windows 8.1 Update provides options for the "Network" Settings charm to show the estimated data usage for a selected network, and to designate a network as a metered connection.
Startup
Windows 8 defaults to a "Fast startup" mode; when the operating system is shut down, it hibernates the kernel, allowing for a faster boot on the subsequent startup. These improvements are complemented by the use of all processor cores during startup by default. To create a more seamless transition between the power-on self-test and the Windows startup process, manufacturers' logos can now be shown on the Windows boot screen on compatible systems with UEFI.
The Advanced Startup menu now uses a graphical interface with mouse and touch support in place of the text-based menu used by previous versions. As the increased boot speed of devices with UEFI can make it difficult to access the menu using keyboard shortcuts during boot, the menu can now be launched from within Windows—using either the PC Settings app, holding down Shift while clicking the Restart option in the Power menu, or the new "-o" switch on shutdown.exe. The legacy version of the Advanced Startup menu can still be enabled instead.
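Invoking the menu from a command line combines the new switch with a restart; a sketch:

```powershell
# Restart immediately into the Advanced Startup (advanced boot options) menu.
# The /o switch is new in Windows 8 and must be combined with /r.
shutdown.exe /r /o /t 0
```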
UEFI firmware can be exposed to Windows via class drivers. Updated firmware capsules can be distributed as an update to this "driver" in a signed package with an INF file and security catalog, similarly to those for other devices. When the "driver" is installed, Windows prepares the update to be installed on the next boot, and Windows Boot Manager renders status information on the device's boot screen.
Video subsystem
Windows 8 includes WDDM 1.2 and DirectX Graphics Infrastructure (DXGI) 1.2. The Desktop Window Manager now runs at all times (even on systems with unsupported graphics cards, where DWM now supports software rendering), and also includes support for stereoscopic 3D content.
Other major features include preemptive multitasking with finer granularity (DMA buffer, primitive, triangle, pixel, or instruction-level), reduced memory footprint, improved resource sharing, and improved timeout detection and recovery. 16-bit color surface formats (565, 5551, 4444) are mandatory in Windows 8, and Direct3D 11 Video supports YUV 4:4:4/4:2:2/4:2:0/4:1:1 video formats with 8, 10, and 16-bit precision, as well as 4 and 8-bit palettized formats. Display-only and render-only WDDM drivers are also supported. Display-only WDDM drivers allow basic 2D-only video adapters and virtual displays to function while contents are rendered by existing renderers or a software rasterizer. Render-only WDDM drivers render screen contents to specified display processors, commonly seen on laptops with dedicated GPUs. Otherwise, a full graphics WDDM driver handles both display and rendering.
Windows 8.1 introduces WDDM 1.3 and adds support for Miracast, which enables wireless or wired delivery of compressed standard- or high-definition video to or from desktops, tablets, mobile phones, and other devices.
Printing
Windows 8 adds support for printer driver architecture version 4, which introduces a Metro-friendly interface and changes the way the driver architecture is written.
Windows 8.1 adds support for Wi-Fi Direct printing, NFC printing, and native APIs for 3D printing through the XML-based 3D Manufacturing Format (3MF).
Windows PowerShell
Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems. Windows 8 includes Windows PowerShell v3.0. Windows 8.1 comes with Windows PowerShell v4.0 which features a host of new commands for managing the Start screen, Windows Defender, Windows components, hardware and network.
Windows To Go
Windows To Go is a feature exclusive to the Enterprise version of Windows 8 which allows an organization to provision bootable USB flash drives with a Windows installation on them, allowing users to access their managed environment on any compatible PC. Windows 8.1 updates this feature to enable booting from a USB composite device with a storage and a smart card function.
Maintenance
The Action Center introduced in Windows 7 is expanded to include controls and notifications for new categories, including: device software, drive health, File History, HomeGroup status, Microsoft account status, SmartScreen, and Windows activation. Additionally, there is a new Automatic Maintenance feature, which can periodically perform a number of maintenance tasks, such as diagnostics, malware scans, and updates to improve system performance.
The PC Settings app in Windows 8 can be used to interact with Windows Update, although the traditional interface from Control Panel is retained. Windows 8 is able to distribute firmware updates on compatible devices and can be configured not to automatically download Windows updates over metered networks. A new set of Windows PowerShell cmdlets enable adding or removing features of Windows, as the Programs and Features applet in Control Panel does. The Deployment Image Servicing and Management (DISM) utility in Windows 8 includes all features that were previously available in ImageX and is able to periodically check for component store corruption and repair it. It can report the amount of disk space in use by the WinSxS folder and can also determine if a cleanup should be performed.
Windows 8 can now detect when a system is experiencing issues that have been preventing the system from functioning correctly, and automatically launch the Advanced Startup menu to access diagnostic and repair functions.
For system recovery, Windows 8 introduced new functions known collectively as "Push-button reset", which allows a user to re-install Windows without needing to use installation media. The feature consists of "Reset" and "Refresh" functions, accessible from within the advanced boot options menu and PC Settings. Both of these options reboot the system into the Windows Recovery Environment to perform the requested operation; Refresh preserves user profiles, settings, and Windows Store apps, while Reset performs a clean installation of Windows. The reset function may also perform specialized disk wiping and formatting procedures for added security. Both operations will remove all installed desktop applications from the system. Users can also create a custom disk image for use with Refresh and Reset.
Security
Address space layout randomization improvements
Address space layout randomization (ASLR) introduced in Windows Vista was improved in Windows 8 and has been updated in Windows 8.1 to allow randomization to be unique across devices.
Biometrics
Windows 8 introduces virtual smart card support. A digital certificate of a smart card can be stored onto a user's machine and protected by the Trusted Platform Module, thereby eliminating the need for the user to physically insert a smart card, though entering a PIN is still required. Virtual smart card support enables new two-factor authentication scenarios. Windows 8.1 improves this functionality by simplifying the device enrollment process for virtual smart cards and introduces additional virtual smart card functionality such as certificate attestation for Metro-style applications, and enrollment and management features via WinRT APIs.
Windows 8.1 features pervasive support for biometric authentication throughout the operating system, includes a native fingerprint registration feature, and enables the use of a fingerprint for tasks such as signing into a device, purchasing apps from the Windows Store, and consenting to authentication prompts (e.g., User Account Control). Windows 8.1 also introduces new WinRT APIs for biometrics.
Device encryption
On Windows RT, logging in with a Microsoft account automatically activates passive device encryption, a feature-limited version of BitLocker which seamlessly encrypts the contents of mobile devices to protect their contents. On Windows 8.1, device encryption is similarly available for x86-based Windows devices, automatically encrypting user data as soon as the operating system is configured. When a user signs in with a Microsoft account or on a supported Active Directory network, a recovery key is generated and saved directly to the user's account. Unlike BitLocker, device encryption on x86-based devices requires that the device meet the Connected Standby specifications (which among other requirements, requires that the device use solid-state storage and have RAM soldered directly to the motherboard) and have a Trusted Platform Module (TPM) 2.0 chip.
Device lockdown
Windows 8.1 introduces Assigned Access, formerly called Kiosk mode, which restricts the Windows device to running a single predetermined Metro-style app.
Windows 8.1 was slated to include a Provable PC Health feature which would allow owners to subject devices connected to a network to remote PC analysis. Under Provable PC Health, connected devices would periodically send various configuration-related information to a cloud service, which would provide suggestions for remediation upon detection of an issue. However, the feature was dropped before the operating system's general availability.
Family Safety
Windows 8 integrates Windows Live Family Safety into the operating system, allowing parents to restrict user activity via web filtering, application restriction, and computer usage time limits. Certain parental controls functionality introduced in Windows Vista was made unavailable in Windows 7 in favor of Windows Live Family Safety. A notable change in Family Safety is that administrators can now specify time periods for computer usage. For example, an administrator can restrict a user account so that it can only remain signed in for a total time period of one hour. In previous versions of Windows, administrators could only restrict accounts based on the time of day.
Protected processes
Protected processes introduced in Windows Vista for digital rights management have been extended in Windows 8.1 to support additional scenarios along with a new Protected Process Light scheme. In Windows Vista, processes for digital rights management were either protected or unprotected. With the new scheme in Windows 8.1, processes can be assigned varying levels of protection, and core operating system components such as the Local Security Authority Subsystem Service can be protected by this scheme to prevent reading memory and code injection by non-protected processes.
Startup security
Windows 8 introduced four new features to offer security during the startup process: UEFI secure boot, Trusted Boot, Measured Boot and Early Launch Anti-Malware (ELAM).
Of the four, secure boot is not a native feature of Windows 8; it is part of UEFI. At startup, the UEFI firmware checks the validity of a digital signature present in the Windows Boot Loader (bootmgfw.efi), which is signed with Microsoft's public key. This signature check happens every time the computer is booted and prevents malware from infecting the system before the operating system loads. The UEFI firmware will only allow signatures from keys that have been enrolled into its database, and, prior to the release of Windows 8, Microsoft announced that certified computers had to ship with Microsoft's public key enrolled and with secure boot enabled by default. However, following the announcement, the company was accused by critics and free and open-source software advocates (including the Free Software Foundation) of trying to use secure boot to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the secure boot requirement was intended to serve as a form of lock-in, and clarified that x86 certified systems (but not ARM systems) must allow secure boot to enter custom mode or be disabled.
Trusted Boot is a feature of the Windows boot loader that ensures the integrity of all Microsoft components loaded into memory, including ELAM, which loads last. ELAM ensures that all third-party boot drivers are trustworthy; they are not loaded if the ELAM check fails. ELAM can use either Windows Defender or a compatible third-party antivirus. During the 2011 Build conference in Anaheim, California, Microsoft showed a Windows 8 machine that can prevent an infected USB flash drive from compromising the boot process.
Measured Boot can attest to the state of a client machine by sending details about its configuration to a remote machine. The feature relies on the attestation feature of the Trusted Platform Module and is designed to verify the boot integrity of the client.
Windows Platform Binary Table
Windows Platform Binary Table allows executable files to be stored within UEFI firmware for execution on startup. Microsoft states this feature is meant to "allow critical software to persist even when the operating system has changed or been reinstalled in a 'clean' configuration", specifically anti-theft security software; however, it has also been misused, including by Lenovo with their "Lenovo Service Engine" feature.
Windows Defender
In Windows 7, Windows Defender was an anti-spyware solution. Windows 8 introduced Windows Defender as an antivirus solution (and as the successor of Microsoft Security Essentials), which provides protection against a broader range of malware; it was the first time that a standard Windows installation included an antivirus solution. Windows 8.1 augments Windows Defender with network behavior monitoring, a feature that had been present in Microsoft Security Essentials since July 2010.
Keyboard shortcuts
Windows 8 includes various features that can be controlled through keyboard shortcuts.
Displays the Charms Bar.
Opens the Search charm to search for files.
Opens the Share charm.
Opens the Settings charm.
Switches between the active app and a snapped app.
Opens the Devices charm.
Locks the current display orientation.
Opens the Search charm to search for apps.
Shows available app commands.
and respectively activate and deactivate semantic zoom.
Switches the user's IME.
Reverts to a previous IME.
Cycles through open Metro-style apps.
Cycles through open Metro-style apps and snaps them as they are cycled.
Cycles through open Metro-style apps in reverse order.
In a multi-monitor configuration, moves the Start screen and open Metro-style apps to the display monitor on the left.
In a multi-monitor configuration, moves the Start screen and open Metro-style apps to the display monitor on the right.
Initiates the Peek feature introduced in Windows 7.
Snaps an open Metro-style app to the left side of the screen.
Snaps an open Metro-style app to the right side of the screen.
Takes a screenshot of the entire screen and saves it to a Screenshots folder within the Pictures directory. On a tablet, this feature can be accessed by simultaneously pressing a button with the Windows logo and a button that lowers the volume of the device.
Virtualization
Hyper-V, a native hypervisor previously offered only in Windows Server, is included in Windows 8 Pro, replacing Windows Virtual PC, a hosted hypervisor.
Storage
Storage Spaces
Storage Spaces is a storage virtualization technology which succeeds Logical Disk Manager and allows the organization of physical disks into logical volumes similar to Logical Volume Manager (Linux), RAID0, RAID1 or RAID5, but at a higher abstraction level.
A storage space behaves like a physical disk to the user, with thin provisioning of available disk space. The spaces are organized within a storage pool, i.e., a collection of physical disks that can span multiple disks of different sizes, performance characteristics or technologies (USB, SATA, SAS). The process of adding new disks or replacing failed or older disks is fully automatic, but can be controlled with PowerShell commands. The same storage pool can host multiple storage spaces. Storage Spaces have built-in resiliency from disk failures, which is achieved by either disk mirroring or striping with parity across the physical disks. Each storage pool on the ReFS filesystem is limited to 4 PB (4096 TB), but there are no limits on the total number of storage pools or the number of storage spaces within a pool.
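The striping-with-parity resiliency mentioned above works like conventional RAID 5: each stripe carries one parity block computed as the XOR of its data blocks, so any single failed disk can be rebuilt from the survivors. A minimal Python sketch of the idea (stripe width and block contents are invented for illustration; this is not Microsoft's on-disk format):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_stripe(data_blocks):
    """Append a parity block to one stripe of data blocks."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def rebuild(stripe, lost):
    """Recover block `lost` by XOR-ing every surviving block of the stripe."""
    return xor_blocks([b for i, b in enumerate(stripe) if i != lost])

stripe = make_stripe([b"disk", b"one!", b"two."])   # 3 data disks + 1 parity disk
assert rebuild(stripe, 1) == b"one!"                # any one lost block is recoverable
```

Mirroring is the simpler alternative: full copies instead of parity, costing more capacity but tolerating a failure without the XOR reconstruction step.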
A review in Ars Technica concluded that "Storage Spaces in Windows 8 is a good foundation, but its current iteration is simply too flawed to recommend in most circumstances." Microsoft MVP Helge Klein also criticized Storage Spaces as unsuitable for its touted market of SOHO users.
Storage Spaces was further enhanced in Windows Server 2012 R2 with tiering and caching support, which can be used for caching to SSD; these new features were not added to Windows 8.1. Instead Windows 8.1 gained support for specific features of SSHD drives, e.g. for host-hinted LBA caching (TP_042v14_SATA31_Hybrid Information).
NVM Express
Windows 8.1 gained support for NVM Express (NVMe), a new industry standard protocol for PCIe-attached storage, such as PCIe flash cards.
Windows 8.1 also supports the TRIM command for PCI Express SSDs based on NVMe; Windows 7 supported TRIM only for AHCI/SATA drives, and only those connected internally via the M.2 or SATA/IDE connectors. Windows 8.1 supports the SCSI unmap command, which is a full analog of the SATA TRIM command for devices that use the SCSI driver stack. If both an external SSD drive and the device firmware in its bridge chip support TRIM, Windows 8.1 can perform a TRIM operation on such external SATA and NVMe SSDs connected via USB, as long as they use the USB Attached SCSI Protocol (UASP).
Windows 8.1 also introduces a manual TRIM function via Microsoft Drive Optimizer which can perform an on-demand user-requested TRIM operation on internal and external SSDs. Windows 7 only had automatic TRIM for internal SATA SSDs built into system operations such as Delete, Format, Diskpart etc.
However, Windows 8.1 built-in NVMe driver does not support NVMe passthrough protocol. Support for NVMe passthrough protocol was added in Windows 10.
See also
Windows Server 2012
References
External links
Building Windows 8 Blog
Windows 8
Windows 8 | Features new to Windows 8 | [
"Technology"
] | 9,734 | [
"Software features"
] |
32,922,845 | https://en.wikipedia.org/wiki/RaptorX | RaptorX is a software and web server for protein structure and function prediction that is free for non-commercial use. RaptorX is among the most popular methods for protein structure prediction. Like other remote homology recognition and protein threading techniques, RaptorX is able to regularly generate reliable protein models when the widely used PSI-BLAST cannot. However, RaptorX is also significantly different from profile-based methods (e.g., HHPred and Phyre2) in that RaptorX excels at modeling of protein sequences without a large number of sequence homologs by exploiting structure information. RaptorX Server has been designed to ensure a user-friendly interface for users inexpert in protein structure prediction methods.
Description
The RaptorX project was started in 2008 and RaptorX Server was released to the public in 2011.
Standard usage
After pasting a protein sequence into the RaptorX submission form, a user will typically wait a couple of hours (depending on sequence length) for a prediction to complete. An email is sent to the user together with a link to a web page of results. RaptorX Server currently generates the following results: 3-state and 8-state secondary structure prediction, sequence-template alignment, 3D structure prediction, solvent accessibility prediction, disorder prediction and binding site prediction. The predicted results are displayed to support visual examination. The result files are also available for download.
RaptorX Server also produces some confidence scores indicating the quality of the predicted 3D models (in the absence of their corresponding native structures). For example, it produces P-value for relative global quality of a 3D model, global distance test (GDT) and uGDT (unnormalized-GDT) for absolute global quality of a 3D model and per-position root mean square deviation (RMSD) for absolute local quality at each residue of a 3D model.
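For readers unfamiliar with these scores, GDT is computed from a model–native superposition as the mean, over the standard distance cutoffs of 1, 2, 4 and 8 Å, of the fraction of residues whose model coordinates fall within each cutoff of the native coordinates; uGDT, by one common convention, is the same quantity left unnormalized by chain length. A rough Python sketch of the idea (assuming already-superposed coordinates; this is not RaptorX's own scoring code):

```python
import math

CUTOFFS = (1.0, 2.0, 4.0, 8.0)  # standard GDT-TS distance cutoffs, in angstroms

def gdt(model, native):
    """Mean over cutoffs of the fraction of residues within each cutoff (0-100)."""
    dists = [math.dist(a, b) for a, b in zip(model, native)]
    fractions = [sum(d <= c for d in dists) / len(dists) for c in CUTOFFS]
    return 100.0 * sum(fractions) / len(CUTOFFS)

def ugdt(model, native):
    """Unnormalized GDT: drops the division by residue count."""
    return gdt(model, native) * len(model) / 100.0

# Toy 4-residue chain with per-residue deviations of 0.5, 1.5, 3.0 and 20.0 angstroms
native = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (6.0, 0.0, 0.0), (9.0, 0.0, 0.0)]
model = [(0.5, 0.0, 0.0), (3.0, 1.5, 0.0), (6.0, 0.0, 3.0), (9.0, 0.0, 20.0)]
assert gdt(model, native) == 56.25   # (0.25 + 0.5 + 0.75 + 0.75) / 4 * 100
```

Because uGDT counts absolute numbers of well-modeled residues rather than a fraction, it can be the more informative of the two for long proteins.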
Applications and performance
Applications of RaptorX include protein structure prediction, function prediction, protein sequence-structure alignment, evolutionary classification of proteins, guiding site-directed mutagenesis and solving protein crystal structures by molecular replacement. In the Critical Assessment of Structure Prediction (CASP) CASP9 blind protein structure prediction experiment, RaptorX was ranked 2nd out of about 80 automatic structure prediction servers. RaptorX also generated the best alignments for the 50 hardest CASP9 template-based modeling (TBM) targets. In CASP10, RaptorX is the only server group among the top 10 human/server groups for the 15 most difficult CASP10 TBM targets.
History
RaptorX is the successor to the RAPTOR protein structure prediction system. RAPTOR was designed and developed by Dr. Jinbo Xu and Dr. Ming Li at the University of Waterloo. RaptorX was designed and developed by a research group led by Prof. Jinbo Xu at the Toyota Technological Institute branch at Chicago.
See also
Protein structure prediction
CASP
List of protein structure prediction software
References
External links
CASP website
other structural bioinformatics software
Structural bioinformatics software
Computational science | RaptorX | [
"Mathematics"
] | 626 | [
"Computational science",
"Applied mathematics"
] |
32,923,154 | https://en.wikipedia.org/wiki/Edotreotide | Edotreotide (USAN, also known as (DOTA0-Phe1-Tyr3) octreotide, DOTA-TOC, DOTATOC) is a substance which, when bound to various radionuclides, is used in the treatment and diagnosis of certain types of cancer. When used therapeutically it is an example of peptide receptor radionuclide therapy.
Yttrium-90
A phase I clinical trial of yttrium-90 labelled edotreotide concluded in 2011, investigating its effects in young cancer patients (up to 25 years of age). Specific cancers included in the trial were neuroblastoma, childhood brain tumours and gastrointestinal cancer.
A phase II trial for the use of 90Y DOTA-TOC for patients with metastatic carcinoid, where octreotide treatment was no longer effective, also reported results in 2010.
Lutetium-177
Lutetium-177 labelled edotreotide (177Lu-DOTA-TOC), with the trade name Solucin, is the subject of a phase 3 clinical trial for treatment of GEP-NETs. It was granted orphan drug designation by the European Medicines Agency in 2014.
See also
DOTA-TATE, a similar compound
References
Nuclear medicine procedures
Radiopharmaceuticals
Somatostatin inhibitors
Macrocycles
Cyclic peptides
X
DOTA (chelator) derivatives | Edotreotide | [
"Chemistry"
] | 295 | [
"Medicinal radiochemistry",
"Organic compounds",
"Macrocycles",
"Radiopharmaceuticals",
"Chemicals in medicine"
] |
22,765,181 | https://en.wikipedia.org/wiki/Anthony%20Ichiro%20Sanda | is a Japanese-American particle physicist. Along with Ikaros Bigi, he was awarded the 2004 Sakurai Prize for his work on CP violation and B meson decays.
Academic life
Sanda studied at the University of Illinois (B.S. 1965) and Princeton University (Ph.D. 1969). He was a researcher at Columbia University from 1971 to 1974 and at Fermi National Accelerator Laboratory. From 1974 to 1992 he was an assistant professor and then associate professor at Rockefeller University, and from 1992 a professor of physics at Nagoya University. Since 2006 he has been a Professor Emeritus at Nagoya University and a professor at Kanagawa University, and since 2007 he has also been a Program Officer of the Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo. His major works are the proposal of a renormalizable gauge-fixing method in broken gauge symmetric theory and the development of the theory of CP violation in B meson decays, which confirmed the Kobayashi–Maskawa theory, gave strong motivation for the Belle experiment at KEK, Japan, and the BaBar experiment at SLAC National Accelerator Laboratory, USA, and fixed the necessary parameters of the accelerators needed to perform the experiments.
Religious life
As a devout Roman Catholic, Sanda is an ordained permanent deacon at St. Mary's Cathedral in Tokyo. He is also the author of the book "As a Scientist, Why Do I Believe in God", which describes his view of the relationship between physics and Christianity.
Recognition
Inoue Prize for Science (1993)
Nishina Memorial Prize (1997)
Chunichi Shimbun Prize (2002)
Medal with Purple Ribbon (2002)
Sakurai Prize (2004)
Shuji Orito Prize (2015)
St. Albert Award
References
I. I. Bigi and A. I. Sanda, CP Violation (Cambridge University Press, 1999), .
External links
ArXiv papers
Scientific articles of Anthony I. Sanda (SLAC database)
Nagoya University Physics Department Homepage "History/Legacy"
Japanese physicists
1944 births
Living people
Princeton University alumni
University of Illinois alumni
Particle physicists
Columbia University staff
Rockefeller University faculty
Academic staff of Nagoya University
Academic staff of Kanagawa University
Theoretical physicists
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Japanese Roman Catholics
Catholic clergy scientists | Anthony Ichiro Sanda | [
"Physics"
] | 464 | [
"Theoretical physics",
"Theoretical physicists",
"Particle physics",
"Particle physicists"
] |
22,767,922 | https://en.wikipedia.org/wiki/FEHM | FEHM is a groundwater model that has been developed in the Earth and Environmental Sciences Division at Los Alamos National Laboratory over the past 30 years. The executable is available free at the FEHM Website. The capabilities of the code have expanded over the years to include multiphase flow of heat and mass with air, water, and CO2, methane hydrate, plus multi-component reactive chemistry and both thermal and mechanical stress. Applications of this code include simulations of: flow and transport in basin scale groundwater systems
, migration of environmental isotopes in the vadose zone, geologic carbon sequestration, oil shale extraction, geothermal energy, migration of both nuclear and chemical contaminants, methane hydrate formation, seafloor hydrothermal circulation, and formation of karst. The simulator has been used to generate results for more than 100 peer reviewed publications which can be found at FEHM Publications.
Abstract
The Subsurface Flow and Transport Team at the Los Alamos National Laboratory (LANL) has been involved in large-scale projects including performance assessment of Yucca Mountain, environmental remediation of the Nevada Test Site, the LANL Groundwater Protection Program and geologic CO2 sequestration. The subsurface physics has ranged from single-fluid/single-phase flow when simulating basin-scale groundwater aquifers to multi-fluid/multi-phase flow when simulating the movement of air and water (with boiling and condensing) in the unsaturated zone surrounding a potential nuclear waste storage facility. These and other projects have motivated the development of software to assist in both scientific discovery and technical evaluation. LANL's FEHM (Finite Element Heat and Mass) computer code simulates complex coupled subsurface processes as well as flow in large and geologically complex basins. Its development has spanned several decades, a time over which the art and science of subsurface flow and transport simulation has dramatically evolved. For most early researchers, models were used primarily as tools for understanding subsurface processes. Subsequently, in addition to addressing purely scientific questions, models were used in technical evaluation roles. Advanced model analysis requires a detailed understanding of model errors (numerical dispersion and truncation) as well as those associated with the application (conceptual and calibration). Application errors are evaluated through exploration of model and parameter sensitivities and uncertainties. The development of FEHM has been motivated by the subsurface physics of its applications and also by the requirements of model calibration, uncertainty quantification, and error analysis. FEHM possesses unique features and capabilities that are of general interest to the subsurface flow and transport community, and it is well suited to hydrology, geothermal, petroleum reservoir applications, and CO2 sequestration.
Commercialization
Recently FEHM has been embedded into SVOFFICE™5/WR from SoilVision Systems Ltd, a GUI driven water resources numerical modeling framework. This marriage of GUI functionality with powerful underlying solvers and complex physics is leading to a new generation of capabilities with applications to a range of hydrogeological problems. Details can be found at the SoilVision SVOFFICE™5/WR website
See also
Aquifer
Hydrogeology
Groundwater
Groundwater flow equation
Groundwater energy balance
Watertable control
Groundwater drainage by wells
Salinity model
References
External links
More information on this versatile model can be found at:
FEHM Website
FEHM Review Article
FEHM Flyer
SoilVision Systems Ltd. website
Hydraulic engineering
Scientific simulation software
Hydrology models | FEHM | [
"Physics",
"Engineering",
"Biology",
"Environmental_science"
] | 701 | [
"Hydrology",
"Biological models",
"Environmental modelling",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydrology models",
"Hydraulic engineering"
] |
22,771,327 | https://en.wikipedia.org/wiki/Mie%E2%80%93Gr%C3%BCneisen%20equation%20of%20state | The Mie–Grüneisen equation of state is an equation of state that relates the pressure and volume of a solid at a given temperature. It is used to determine the pressure in a shock-compressed solid. The Mie–Grüneisen relation is a special form of the Grüneisen model which describes the effect that changing the volume of a crystal lattice has on its vibrational properties. Several variations of the Mie–Grüneisen equation of state are in use.
The Grüneisen model can be expressed in the form

Γ = V (∂p/∂e)_V

where V is the volume, p is the pressure, e is the internal energy, and Γ is the Grüneisen parameter which represents the thermal pressure from a set of vibrating atoms. If we assume that Γ is independent of p and e, we can integrate Grüneisen's model to get

p − p0 = (Γ/V)(e − e0)

where p0 and e0 are the pressure and internal energy at a reference state, usually assumed to be the state at which the temperature is 0 K. In that case p0 and e0 are independent of temperature and the values of these quantities can be estimated from the Hugoniot equations. The Mie–Grüneisen equation of state is a special form of the above equation.
History
Gustav Mie, in 1903, developed an intermolecular potential for deriving high-temperature equations of state of solids. In 1912, Eduard Grüneisen extended Mie's model to temperatures below the Debye temperature at which quantum effects become important. Grüneisen's form of the equations is more convenient and has become the usual starting point for deriving Mie–Grüneisen equations of state.
Expressions for the Mie–Grüneisen equation of state
A temperature-corrected version that is used in computational mechanics has the form

p = ρ0 C0² χ [1 − (Γ0/2) χ] / (1 − s χ)² + Γ0 E ,  χ = 1 − ρ0/ρ

where C0 is the bulk speed of sound, ρ0 is the initial density, ρ is the current density, Γ0 is Grüneisen's gamma at the reference state, s is a linear Hugoniot slope coefficient relating the shock wave velocity Us and the particle velocity Up through Us = C0 + s Up, and E is the internal energy per unit reference volume. An alternative form, in terms of η = ρ/ρ0, is

p = ρ0 C0² (η − 1) [η − (Γ0/2)(η − 1)] / [η − s (η − 1)]² + Γ0 E

A rough estimate of the internal energy can be computed using

E ≈ Cv (T − T0)/V0 = ρ0 cv (T − T0)

where V0 is the reference volume at temperature T = T0, Cv is the heat capacity and cv is the specific heat capacity at constant volume. In many simulations, it is assumed that Cp and Cv are equal.
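As a numerical illustration of the temperature-corrected form described above, the sketch below evaluates the pressure from the current density and the thermal energy. The parameter values are invented round numbers of roughly metallic magnitude, not measured Hugoniot data:

```python
def mg_pressure(rho, E, rho0, C0, s, Gamma0):
    """Temperature-corrected Mie-Gruneisen pressure.

    rho:  current density          E:      internal energy per unit reference volume
    rho0: initial density          C0:     bulk speed of sound
    s:    Hugoniot slope           Gamma0: Grueneisen gamma at the reference state
    """
    chi = 1.0 - rho0 / rho  # compression measure; zero at the reference state
    cold = rho0 * C0**2 * chi * (1.0 - 0.5 * Gamma0 * chi) / (1.0 - s * chi)**2
    return cold + Gamma0 * E  # Hugoniot-referenced term plus thermal term

rho0, C0, s, Gamma0 = 8000.0, 4000.0, 1.5, 2.0  # illustrative round numbers (SI)
assert mg_pressure(rho0, 0.0, rho0, C0, s, Gamma0) == 0.0          # reference state
assert mg_pressure(rho0, 1.0e6, rho0, C0, s, Gamma0) == Gamma0 * 1.0e6  # thermal only
assert mg_pressure(8800.0, 0.0, rho0, C0, s, Gamma0) > 0.0         # compression
```

At the reference density the first term vanishes and the pressure reduces to Γ0·E, which makes the thermal contribution easy to check by hand.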
Parameters for various materials
Derivation of the equation of state
From Grüneisen's model we have

p − p0 = (Γ/V)(e − e0)

where p0 and e0 are the pressure and internal energy at a reference state. The Hugoniot equations for the conservation of mass, momentum, and energy are

ρ0 Us = ρ (Us − Up) ,  pH − pH0 = ρ0 Us Up ,  EH − EH0 = (1/2)(pH + pH0)(V0 − V)

where ρ0 is the reference density, ρ is the density due to shock compression, pH is the pressure on the Hugoniot, EH is the internal energy per unit mass on the Hugoniot, Us is the shock velocity, and Up is the particle velocity. From the conservation of mass, we have

Up/Us = 1 − ρ0/ρ = 1 − V/V0 = χ

where we defined V = 1/ρ, the specific volume (volume per unit mass).

For many materials Us and Up are linearly related, i.e., Us = C0 + s Up, where C0 and s depend on the material. In that case, we have

Us = C0/(1 − s χ) ,  Up = C0 χ/(1 − s χ)

The momentum equation can then be written (for the principal Hugoniot where pH0 is zero) as

pH = ρ0 Us Up = ρ0 C0² χ/(1 − s χ)²
Similarly, from the energy equation we have

EH − EH0 = (1/2)(pH + pH0)(V0 − V)

Solving for EH on the principal Hugoniot (pH0 = EH0 = 0), we have

EH = (1/2) pH (V0 − V) = (1/2) pH χ V0
With these expressions for pH and EH, the Grüneisen model on the Hugoniot becomes

pH − p0 = (Γ/V)(EH − e0)

If we assume that Γ/V = Γ0/V0 and note that p0 = −de0/dV = (1/V0) de0/dχ, we get

de0/dχ − Γ0 e0 = V0 pH (1 − Γ0 χ/2) = C0² χ (1 − Γ0 χ/2)/(1 − s χ)²
The above ordinary differential equation can be solved for e0 with the initial condition e0 = 0 when V = V0 (χ = 0). The exact solution is
where Ei[z] is the exponential integral. The expression for p0 is
For commonly encountered compression problems, an approximation to the exact solution is a power series solution of the form

e0(χ) = A + B χ + C χ² + D χ³

and

p0(χ) = (1/V0)(B + 2 C χ + 3 D χ²)
Substitution into the Grüneisen model gives us the Mie–Grüneisen equation of state
If we assume that the internal energy e0 = 0 when V = V0 (χ = 0) we have A = 0. Similarly, if we assume p0 = 0 when V = V0 we have B = 0. The Mie–Grüneisen equation of state can then be written as
where E is the internal energy per unit reference volume. Several forms of this equation of state are possible.
If we take the first-order term and substitute it into equation (), we can solve for C to get
Then we get the following expression for p:

p = ρ0 C0² χ (1 − Γ0 χ/2)/(1 − s χ)² + Γ0 E

This is the commonly used first-order Mie–Grüneisen equation of state.
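The first-order form has a built-in consistency property worth checking numerically: taking the Hugoniot pressure pH = ρ0 C0² χ/(1 − s χ)² implied by the linear Us–Up fit, and the Hugoniot energy per unit reference volume as EH = pH χ/2, evaluating the equation of state at E = EH returns exactly pH, because the Γ0 terms cancel. A short Python sketch (parameter values are illustrative, not material data):

```python
def p_hugoniot(chi, rho0, C0, s):
    """Hugoniot pressure from the linear Us = C0 + s*Up fit; chi = 1 - rho0/rho."""
    return rho0 * C0**2 * chi / (1.0 - s * chi)**2

def p_mie_gruneisen(chi, E, rho0, C0, s, Gamma0):
    """First-order Mie-Gruneisen pressure; E is per unit reference volume."""
    return p_hugoniot(chi, rho0, C0, s) * (1.0 - 0.5 * Gamma0 * chi) + Gamma0 * E

rho0, C0, s, Gamma0 = 8000.0, 4000.0, 1.5, 2.0   # illustrative round numbers
for chi in (0.02, 0.05, 0.10):
    pH = p_hugoniot(chi, rho0, C0, s)
    EH = 0.5 * pH * chi                           # Hugoniot energy per reference volume
    p = p_mie_gruneisen(chi, EH, rho0, C0, s, Gamma0)
    assert abs(p - pH) < 1e-9 * pH                # states on the Hugoniot are reproduced
```

Off the Hugoniot, the Γ0·E term then adds or removes the purely thermal part of the pressure.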
See also
Impact (mechanics)
Shock wave
Shock (mechanics)
Shock tube
Hydrostatic shock
Viscoplasticity
References
Continuum mechanics
Solid mechanics
Equations of state | Mie–Grüneisen equation of state | [
"Physics"
] | 929 | [
"Solid mechanics",
"Equations of physics",
"Continuum mechanics",
"Statistical mechanics",
"Classical mechanics",
"Mechanics",
"Equations of state"
] |
22,773,636 | https://en.wikipedia.org/wiki/Non-topological%20soliton | In quantum field theory, a non-topological soliton (NTS) is a soliton field configuration possessing, contrary to a topological one, a conserved Noether charge and stable against transformation into usual particles of this field for the following reason. For fixed charge Q, the mass sum of Q free particles exceeds the energy (mass) of the NTS so that the latter is energetically favorable to exist.
The interior region of an NTS is occupied by vacuum different from the ambient vacuum. The vacuums are separated by the surface of the NTS representing a domain wall configuration (topological defect), which also appears in field theories with broken discrete symmetry. Infinite domain walls contradict cosmology, but the surface of an NTS is closed and finite, so its existence would not be contradictory. If the topological domain wall is closed, it shrinks because of wall tension; however, due to the structure of the NTS surface, it does not shrink since the decrease of the NTS volume would increase its energy.
Introduction
Quantum field theory was developed to predict the scattering probabilities of elementary particles. However, in the mid 1970s it was found that this theory predicts one more class of stable compact objects: non-topological solitons (NTS). The NTS represents an unusual coherent state of matter, also called bulk matter. Models have been suggested for NTSs to exist in the form of stars, quasars, dark matter and nuclear matter.
An NTS configuration is the lowest-energy solution of the classical equations of motion possessing spherical symmetry. Such a solution has been found for a rich variety of field Lagrangians. One can associate the conserved charge with a global, local, Abelian or non-Abelian symmetry. The NTS configuration appears possible with bosons as well as with fermions. In different models either one and the same field carries the charge and binds the NTS, or there are two different fields: a charge carrier and a binding field.
The spatial size of an NTS configuration may be microscopically small or astronomically large, depending on the model fields and constants. The NTS size can increase with its energy until gravitation complicates its behavior and finally causes collapse. In some models the NTS charge is bounded by a stability (or metastability) condition.
Simple examples
One field
For a complex scalar field with the U(1) invariant Lagrange density
the NTS is a ball of radius R filled with the field . Here is constant inside the ball except for a thin surface layer where it sharply drops to the global U(1)-symmetric minimum of . The value is adjusted so that it minimises the energy of the configuration
Since the U(1) symmetry gives the conserved current
the ball possesses the conserved charge
The minimization of the energy (1) with R gives
The charge conservation allows the decay of the ball into exactly Q particles. This decay is energetically unprofitable if the total mass Qm exceeds the energy (2). Therefore, for the NTS to exist it is necessary to have
The thin-wall approximation, which was used above, allows one to omit the gradient term in the expression for the energy (1), since . This approximation is valid for and is justified by the exact solution of the equation of motion.
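The minimization over the ball size can be illustrated numerically. A minimal sketch, assuming the standard thin-wall parametrization E(V) = Q²/(2 φ0² V) + U(φ0)·V (charge term plus interior volume energy), with arbitrary illustrative units; the analytic minimum is E = Q·√(2U0)/φ0:

```python
import math

Q, phi0, U0 = 1.0e4, 1.0, 0.5   # illustrative charge, field value, potential depth

def energy(V):
    # charge (time-rotation) energy + volume energy of the interior vacuum
    return Q**2 / (2.0 * phi0**2 * V) + U0 * V

# crude grid minimization over the ball volume
vols = [10.0 * k for k in range(1, 3000)]
E_min = min(energy(V) for V in vols)

E_analytic = Q * math.sqrt(2.0 * U0) / phi0
assert abs(E_min - E_analytic) / E_analytic < 1e-3

# stability against decay into Q free quanta of mass m requires E_min < Q*m,
# i.e. m > sqrt(2*U0)/phi0 in these units
```

The energy grows only linearly in Q at fixed parameters, which is why a large enough free-particle mass m makes the ball the energetically favored configuration.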
Two fields
The NTS configuration for a couple of interacting scalar fields is sketched here.
The Lagrange density
is invariant under a U(1) transformation of the complex scalar field . Let this field depend on time and coordinates simply as . It carries the conserved charge . In order to check that the energy of the configuration is smaller than Qm, one should either calculate this energy numerically or use the variational method. For trial functions
and for r < R,
the energy in the large Q limit is approximately equal to
.
The minimization with R gives an upper bound
for the energy of the exact solution of motion equations
and .
It is indeed smaller than for Q exceeding the critical charge
Fermion plus scalar
If, instead of bosons, fermions carry the conserved charge, an NTS also exists. In this case one could take
N is the number of fermion species in the theory. Q can't exceed N due to the Pauli exclusion principle if the fermions are in the coherent state. This time the NTS energy E is bounded by
See Friedberg/Lee.
Stability
Classical stability
The condition only allows one to assert NTS stability against decay into free particles. The equation of motion gives only at the classical level. At least two things should be taken into account: (i) the decay into smaller pieces (fission) and (ii) the quantum correction for .
The condition of stability against the fission looks as follows:
It signifies that . This condition is satisfied for the NTS in examples 2.2 and 2.3. The NTS in example 2.1, also called a Q-ball, is stable against fission as well, even though the energy (2) does not satisfy (4): one has to recall the omitted gradient surface energy and add it to the Q-ball energy (1). Perturbatively, . Thus
The surface energy also sets a limit on the thin-wall description of the Q-ball: for small Q the surface becomes thicker, the surface energy grows and kills the energy gain. However, a formalism for the thick-wall approximation has been developed by Kusenko, who showed that NTSs also exist for small Q.
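The fission-stability condition amounts to subadditivity of E(Q): splitting the charge into two lumps must cost energy, E(Q1) + E(Q2) > E(Q1 + Q2), which holds whenever E is concave in Q. A toy check with assumed power-law scalings (the exponents are purely illustrative):

```python
def subadditive(E, q1, q2):
    # stable against fission if splitting the charge into q1 + q2 costs energy
    return E(q1 + q2) < E(q1) + E(q2)

concave = lambda q: q ** 0.75   # d2E/dQ2 < 0: fission-stable
convex  = lambda q: q ** 1.5    # d2E/dQ2 > 0: fission allowed

assert subadditive(concave, 3.0, 5.0)
assert not subadditive(convex, 3.0, 5.0)
```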
Quantum correction
As for the quantum correction, it also diminishes the binding energy per charge for small NTSs, making them unstable. Small NTSs are especially important for the fermion case, since it is natural to expect a rather small number of fermion species N in (3), and consequently a small Q. For Q = 2 the quantum correction decreases the binding energy by 23%.
For Q=1 a calculation based on the path integral method has been carried out by Baacke.
The quantum energy has been derived as a time derivative of the one-loop fermion effective action
This calculation gives the loop energy of the order of binding energy.
In order to find the quantum correction following the canonical method of quantization, one has to solve the Schrödinger equation for the Hamiltonian built with quantum expansion of field functions. For the boson field NTS it reads
Here and are the solutions of the classical equation of motion, represents the motion of the center of mass, is the overall phase, and are the vibration coordinates, by analogy with the oscillator decomposition of the photon field
For this calculation the smallness of the quartic interaction constant is essential, since the Hamiltonian is taken in the lowest order of that constant. The quantum decrease of the binding energy increases the minimal charge, making the NTS metastable between the old and new values of this charge.
NTSs in some models become unstable as Q exceeds some stable charge . For example, an NTS with fermions carrying a gauge charge has exceeding Qm for Q large enough as well as for Q small enough. Besides, the gauged NTS is probably unstable against a classical decay without conservation of its charge, due to the complicated vacuum structure of the theory.
Generally, the NTS charge is limited by the gravitational collapse:
.
Particle emission
If one adds to the Q-ball Lagrange density an interaction with a massless fermion
which is also U(1) invariant, assuming the global charge for the boson is twice that for the fermion, then a Q-ball, once created, begins to emit its charge with -pairs, predominantly from its surface. The evaporation rate per unit area is .
The ball of trapped right-handed Majorana neutrinos in symmetric electroweak theory loses its charge (the number of trapped particles) through neutrino–antineutrino annihilation, emitting photons from the whole volume.
The third example of an NTS metastable due to particle emission is the gauged non-Abelian NTS. The massive (outside the NTS) member of the fermionic multiplet decays into a massless one and a gauge boson, also massless in the NTS. The massless fermion then carries away the charge, since it does not interact at all with the Higgs field.
The last three examples represent a class of NTSs metastable due to the emission of particles which do not participate in the NTS construction. One more similar example: because of the Dirac mass term , right-handed neutrinos convert to left-handed ones. This happens at the surface of the neutrino ball mentioned above. Left-handed neutrinos are very heavy inside the ball and massless outside it, so they escape, carrying off energy and diminishing the number of particles inside. This "leakage" appears to be much slower than the annihilation into photons.
Soliton-stars
Q-star
As the charge Q grows, with E(Q) of the order of , gravitation becomes important for the NTS. A proper name for such an object is a star. A boson-field Q-star looks like a big Q-ball. The way gravity changes the E(Q) dependence is sketched here. It is gravity that stabilizes a Q-star against fission.
A Q-star with fermions has been described by Bahcall/Selipsky. Similarly to the NTS of Friedberg & Lee, the fermion field, carrying a global conserved charge, interacts with a real scalar field.
Inside the Q-star, the scalar field moves away from a global maximum of the potential, changing the mass of the fermions and making them bound.
But this time Q is not the number of different fermion species but the large number of particles of one and the same kind in the Fermi gas state. Then for the fermion field description one has to use instead of , and the condition of pressure equilibrium instead of the Dirac equation for . Another unknown function is the scalar field profile, which obeys the following equation of motion: . Here is the scalar density of fermions, averaged over the statistical ensemble:
Fermi energy of the fermion gas .
Neglecting the derivatives of for large Q, that equation together with the pressure equilibrium equation constitutes a simple system which gives and inside the NTS. They are constant since we have neglected the derivatives. The fermion pressure
For example, if and , then and . That means fermions appear to be massless in the NTS. Then the full fermion energy . For an NTS with the volume and the charge , its energy is proportional to the charge: .
The fermion Q-star described above has been considered as a model for neutron stars in effective hadron field theory.
Soliton star
If the scalar field potential has two degenerate or almost degenerate minima, one of them has to be the real (true) minimum in which we happen to live. Inside an NTS, the field occupies the other one. In such a model non-zero vacuum energy appears only at the NTS surface, not in its volume. This allows the NTS to be very big without undergoing gravitational collapse.
That is the case in the left-right symmetric electroweak theory. For a symmetry-breaking scale of about 1 TeV, a ball of trapped right-handed massless neutrinos might have a mass (energy) of about 10^8 solar masses and was considered as a possible model for quasars.
For the degenerate potential
both boson and fermion soliton stars were investigated.
A complex scalar field could alone form the state of gravitational equilibrium possessing the astronomically large conserved number of particles. Such objects are called minisoliton stars because of their microscopic size.
Non-topological soliton with standard fields
Could a system of the Higgs field and some fermion field of the Standard Model be in the Friedberg & Lee NTS state? This is more plausible for a heavy fermion field: for such a field the energy gain would be largest, because it loses its large mass in the NTS interior, where the Yukawa term vanishes due to . The more so if the vacuum energy in the NTS interior is large, which would mean a large Higgs mass . A large fermion mass implies a strong Yukawa coupling .
Calculation shows that the NTS solution is energetically favored over a plane wave (a free particle) only if for even very small . For
= 350 GeV (this is the point where for the experimentally known 250 GeV) the coupling must be more than five.
The next question is whether or not a multi-fermion NTS like a fermion Q-star is stable in the Standard Model. If we restrict ourselves to one fermion species, then the NTS has got a gauge charge. One can estimate the energy of a gauged NTS as follows:
Here and are its radius and charge, the first term is the kinetic energy of the Fermi gas, the second is the Coulomb energy, takes into account the charge distribution inside the NTS, and the last one gives the volume vacuum energy. Minimization with gives the NTS energy as a function of its charge:
An NTS is stable if is smaller than the sum of the masses for particles at infinite distance from each other. That is the case for some , but such a dependence allows fission for any .
Could quarks be bound in a hadron as in an NTS? Friedberg and Lee investigated such a possibility. They assumed that quarks get huge masses from their interaction with a scalar field . Thus free quarks are heavy and escape detection. The NTS built with quarks and fields reproduces the static properties of hadrons with 15% accuracy. That model demands an SU(3) symmetry additional to the color one, in order to keep the latter unbroken, so that QCD gluons get large masses by SU(3) symmetry breaking outside hadrons and also avoid detection.
Nuclei have been considered as NTS's in the effective theory of strong interaction which is easier to deal with than QCD.
Solitonogenesis
Trapped particles
How NTSs could be born depends on whether or not the Universe carries a net charge. If it does not, NTSs could form from random fluctuations of the charge. These fluctuations grow, disturb the vacuum and create NTS configurations.
If a net charge is present, i.e. a charge asymmetry exists with parameter , NTSs could simply be born as the space became divided into finite regions of true and false vacuum during the phase transition in the early Universe. The regions occupied by the NTS (false) vacuum are almost ready NTSs. The scenario of region formation depends on the order of the phase transition.
If a first-order phase transition occurs, then nucleating bubbles of true vacuum grow and percolate, shrinking the regions filled with false vacuum. The latter are preferable for charged particles to live in, due to their smaller masses there, so those regions become NTSs.
In case of a second-order phase transition, as the temperature drops below the critical value the space consists of interconnecting regions of both false and true vacua with characteristic size . This interconnection "freezes out" when its rate becomes smaller than the expansion rate of the Universe at the Ginzburg temperature ; then the regions of the two vacua percolate.
But if the false vacuum energy is large enough, on the plot, the false vacuum forms finite clusters (NTS's) surrounded by the percolated true vacuum.
The trapped charge stabilizes clusters against collapse.
In the second scenario of NTS formation, the number of born -charged NTSs per unit volume is simply the number density of clusters holding particles. Their number density is given
by , where b and c are constants of order unity and is the number of correlation volumes in a cluster of size . The number of particles in a cluster is
, where is the charge density in the universe at the Ginzburg temperature. Thus big clusters are born very rarely, and if a minimum stable charge exists, then the overwhelming majority of born NTSs carry .
For the following Lagrange density with biased discrete symmetry
with
and
it appears to be and
Field condensate
The net charge could also be placed in a condensate of the complex scalar field instead of free particles. This condensate could consist of a spatially homogeneous field whose potential is kept at its minimum as the universe cools down and the temperature correction changes the form of the potential. Such a model was used to explain the baryon asymmetry.
If the field potential allows Q-balls to exist, then they could be born from this condensate as the charge volume density drops in the course of the universe's expansion and becomes equal to the Q-ball charge density.
As follows from the equation of motion for , this density changes with the expansion as the minus third power of the scale factor for the expanding space-time with the differential length element .
Breaking the condensate into Q-balls appears to be favorable over further dilution of the homogeneous charge density by expansion. The total charge in a comoving volume of course stays fixed.
The condensation of could occur at a high temperature of the universe, due to the negative temperature correction to its mass: , which places the minimum of its potential at . Here the last term is induced by the interaction with an additional field that has to be introduced in order to satisfy the Q-ball existence condition . At the temperature relevant to Q-ball formation, appears only through virtual processes (loops) because it is heavy. An alternative way to satisfy the Q-ball existence condition is to appeal to non-Abelian symmetry.
Further evolution
Once formed, NTSs undergo a complicated evolution, losing and acquiring charge by interaction with each other and with surrounding particles. Depending on theory parameters, they could either disappear altogether, or reach statistical equilibrium and "freeze out" at some temperature of the universe, or be born "frozen out" if their interaction is slower than the expansion rate at . In the first and second cases, their present-day abundance (if any) has nothing to do with that at the moment of formation.
Since an NTS is a composite object, it has to demonstrate properties different from those of a single particle, e.g. evaporation emission, excitation levels and a scattering form factor. Cosmic observations of such phenomena could provide unique information about physics beyond the reach of accelerators.
See also
Fermi ball
Topological defect
References
Quantum field theory
Solitons | Non-topological soliton | [
"Physics"
] | 3,836 | [
"Quantum field theory",
"Quantum mechanics"
] |
22,775,557 | https://en.wikipedia.org/wiki/Liquid%20rheostat | A liquid rheostat or water rheostat or salt water rheostat is a type of variable resistor.
This may be used as a dummy load or as a starting resistor for large slip ring motors.
In the simplest form it consists of a tank containing brine or other electrolyte solution, in which electrodes are submerged to create an electrical load. The electrodes may be raised or lowered into the liquid to respectively increase or decrease the electrical resistance of the load. To stabilize the load, the mixture must not be allowed to boil.
Modern designs use stainless steel electrodes, and sodium carbonate, or other salts, and do not use the container as one electrode. In some designs the electrodes are fixed and the liquid is raised and lowered by an external cylinder or pump. Motor start systems used for frequent and rapid starts and re-starts, thus a high heat load to the rheostats, may include water circulation to external heat exchangers. In such cases anti-freeze and anti-corrosion additives must be carefully chosen to not change the resistance or support the growth of algae or bacteria.
The salt water rheostat operates at unity power factor and presents a resistance with negligible series inductance compared to a wire wound equivalent, and was widely used by generator assemblers, until 20 years ago, as a matter of course. They are still sometimes constructed on-site for the commissioning of large diesel generators in remote places, where discarded oil drums and scaffold tubes may form an improvised tank and electrodes.
Description
Typically a traditional liquid rheostat consists of a steel cylinder (the negative), about in size, standing on insulators, in which was suspended a hollow steel cylinder. This acted as the positive electrode and was supported by a steel rope and insulator from an adjustable pulley. The water pipe connection included an insulated section. The tank contained salt water, but not at the concentration that could be described as “brine”. The whole device was fenced off for safety.
Operation was very simple, as adding more salt or more water, or varying the height of the centre electrode, would vary the load. The load proved to be quite stable, varying only slightly as the water heated up; it never came to a boil. Power dissipation was about 1 megawatt, at a potential of about 700 volts and a current of about 1,500 amperes.
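The quoted figures are mutually consistent, as a quick check shows (treating the tank as a purely resistive DC load, per the unity-power-factor behaviour noted earlier):

```python
V, I = 700.0, 1500.0           # volts and amperes, figures from the text
P = V * I                      # dissipated power, watts
R = V / I                      # effective resistance of the electrolyte path, ohms

assert P == 1.05e6             # 1.05 MW, i.e. "about 1 megawatt"
assert 0.46 < R < 0.47         # just under half an ohm
```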
Advantages and disadvantages
An advantage is silent operation, with none of the fan noise of current resistive grid designs.
Disadvantages include:
corrosion to the copper connection cables and to the wire rope
lack of insulation from ground which may trip a ground detection system
Uses
Railways commonly used salt water load banks in the 1950s to test the output power of diesel-electric locomotives. They were subsequently replaced by specially designed resistive load banks. Some early three-phase AC electric locomotives also used liquid rheostats for starting up the motors and balancing load between multiple locomotives.
Liquid rheostats were sometimes used in large (thousands of kilowatts/horsepower) wound rotor motor drives, to control the rotor circuit resistance and so the speed of the motor. Electrode position could be adjusted with a small electrically operated winch or a pneumatic cylinder. A cooling pump and heat exchanger were provided to allow slip energy to be dissipated into process water or other water system.
Massive rheostats were once used for dimming theatrical lighting, but solid-state components have taken their place in most high-wattage applications.
Current use
High voltage distribution networks use fixed electrolyte resistors to ground the neutral, providing a current-limiting action so that the voltage across the ground during a fault is kept to a safe level. Unlike a solid resistor, the liquid resistor is self-healing in the event of overload. Normally the resistance is set up during commissioning and then left fixed.
Modern motor starters are totally enclosed and the electrode movement is servo motor controlled. Typically a 1 tonne tank will start a 1 megawatt slip ring type motor, but there is considerable variation in start time depending on application.
Safety issues with older designs
The fully salt-water load bank dates from an earlier, less regulated and less litigious era. To pass current safety legislation, a more enclosed design is required.
They are no more dangerous than electrode heaters, which work on the same principle but with plain water, or electric immersion heaters, provided the correct precautions are taken. These include connecting the container to both ground and neutral and breaking all poles with a linked over-current circuit breaker. If in the open, safety barriers are required.
See also
Liquid resistor
Electrode boiler
BS 7671
References
Electric power
Resistive components
Nondestructive testing
Electrochemistry | Liquid rheostat | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,044 | [
"Physical quantities",
"Resistive components",
"Power (physics)",
"Electrochemistry",
"Materials testing",
"Electric power",
"Nondestructive testing",
"Electrical engineering",
"Electrical resistance and conductance"
] |
27,567,102 | https://en.wikipedia.org/wiki/Matrix%20difference%20equation | A matrix difference equation is a difference equation in which the value of a vector (or sometimes, a matrix) of variables at one point in time is related to its own value at one or more previous points in time, using matrices. The order of the equation is the maximum time gap between any two indicated values of the variable vector. For example,
is an example of a second-order matrix difference equation, in which is an vector of variables and and are matrices. This equation is homogeneous because there is no vector constant term added to the end of the equation. The same equation might also be written as
or as
The most commonly encountered matrix difference equations are first-order.
Nonhomogeneous first-order case and the steady state
An example of a nonhomogeneous first-order matrix difference equation is

x_t = A x_{t−1} + b

with additive constant vector b. The steady state of this system is a value x* of the vector which, if reached, would not be deviated from subsequently. x* is found by setting x_t = x_{t−1} = x* in the difference equation and solving for x* to obtain

x* = [I − A]^{−1} b

where I is the n × n identity matrix, and where it is assumed that [I − A] is invertible. Then the nonhomogeneous equation can be rewritten in homogeneous form in terms of deviations from the steady state:

[x_t − x*] = A [x_{t−1} − x*]
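The steady-state formula x* = [I − A]^{−1} b and the deviation form can be checked numerically; a NumPy sketch with an illustrative 2 × 2 matrix A (spectral radius below 1) and vector b:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.3]])          # transition matrix, eigenvalues inside the unit circle
b = np.array([1.0, 2.0])            # additive constant vector

x_star = np.linalg.solve(np.eye(2) - A, b)   # x* = [I - A]^(-1) b

# iterating x_t = A x_{t-1} + b converges to x*
x = np.zeros(2)
for _ in range(200):
    x = A @ x + b
assert np.allclose(x, x_star)

# deviation form: x_t - x* = A (x_{t-1} - x*)
x_prev = np.array([5.0, -3.0])
lhs = (A @ x_prev + b) - x_star
rhs = A @ (x_prev - x_star)
assert np.allclose(lhs, rhs)
```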
Stability of the first-order case
The first-order matrix difference equation [x_t − x*] = A[x_{t−1} − x*] is stable—that is, x_t converges asymptotically to the steady state x*—if and only if all eigenvalues of the transition matrix A (whether real or complex) have an absolute value which is less than 1.
Solution of the first-order case
Assume that the equation has been put in the homogeneous form x_t = A x_{t−1}. Then we can iterate and substitute repeatedly from the initial condition x_0, which is the initial value of the vector x and which must be known in order to find the solution:

x_1 = A x_0
x_2 = A x_1 = A^2 x_0
x_3 = A x_2 = A^3 x_0

and so forth, so that by mathematical induction the solution in terms of t is

x_t = A^t x_0

Further, if A is diagonalizable, we can rewrite A in terms of its eigenvalues and eigenvectors, giving the solution as

x_t = P D^t P^{−1} x_0

where P is an n × n matrix whose columns are the eigenvectors of A (assuming the eigenvalues are all distinct) and D is an n × n diagonal matrix whose diagonal elements are the eigenvalues of A. This solution motivates the above stability result: A^t shrinks to the zero matrix over time if and only if the eigenvalues of A are all less than unity in absolute value.
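The closed-form solution x_t = P D^t P^{−1} x_0, with P the eigenvector matrix and D the diagonal eigenvalue matrix, can be checked against direct iteration; the matrix A and initial vector below are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.9, 0.3],
              [0.0, 0.4]])     # distinct real eigenvalues 0.9 and 0.4
x0 = np.array([1.0, 1.0])
t = 25

# direct iteration x_t = A x_{t-1}
x = x0.copy()
for _ in range(t):
    x = A @ x

# eigendecomposition route: x_t = P D^t P^{-1} x0
eigvals, P = np.linalg.eig(A)
x_closed = P @ np.diag(eigvals ** t) @ np.linalg.inv(P) @ x0

assert np.allclose(x, x_closed)
# both eigenvalues lie inside the unit circle, so the state decays toward zero
assert np.linalg.norm(x) < np.linalg.norm(x0)
```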
Extracting the dynamics of a single scalar variable from a first-order matrix system
Starting from the n-dimensional system x_t = A x_{t−1}, we can extract the dynamics of one of the state variables, say x_1. The above solution equation for x_t shows that the solution for x_{1,t} is in terms of the n eigenvalues of A. Therefore the equation describing the evolution of x_1 by itself must have a solution involving those same eigenvalues. This description intuitively motivates the equation of evolution of x_1, which is

x_{1,t} = a_1 x_{1,t−1} + a_2 x_{1,t−2} + ⋯ + a_n x_{1,t−n}

where the parameters a_i are from the characteristic equation of the matrix A:

λ^n = a_1 λ^{n−1} + a_2 λ^{n−2} + ⋯ + a_n

Thus each individual scalar variable of an n-dimensional first-order linear system evolves according to a univariate nth-degree difference equation, which has the same stability property (stable or unstable) as does the matrix difference equation.
Solution and stability of higher-order cases
Matrix difference equations of higher order—that is, with a time lag longer than one period—can be solved, and their stability analyzed, by converting them into first-order form using a block matrix (matrix of matrices). For example, suppose we have the second-order equation

x_t = A x_{t−1} + B x_{t−2}

with the variable vector x being n × 1 and A and B being n × n. This can be stacked in the form

[x_t ; x_{t−1}] = [A B ; I 0] [x_{t−1} ; x_{t−2}]

where I is the n × n identity matrix and 0 is the n × n zero matrix. Then denoting the 2n × 1 stacked vector of current and once-lagged variables as z_t = [x_t ; x_{t−1}] and the 2n × 2n block matrix as L, we have as before the solution

z_t = L^t z_0

Also as before, this stacked equation, and thus the original second-order equation, are stable if and only if all eigenvalues of the matrix L are smaller than unity in absolute value.
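The stacking construction can be sketched numerically: one step of the block system z_t = L z_{t−1}, with L = [[A, B], [I, 0]], reproduces the second-order recursion x_t = A x_{t−1} + B x_{t−2}, and stability is read off the eigenvalues of L. The matrices below are illustrative:

```python
import numpy as np

n = 2
A = np.array([[0.4, 0.1],
              [0.0, 0.3]])
B = np.array([[0.2, 0.0],
              [0.1, 0.2]])

# block matrix propagating the stacked state z_t = [x_t; x_{t-1}]
L = np.block([[A, B],
              [np.eye(n), np.zeros((n, n))]])

x0 = np.array([1.0, -1.0])
x1 = np.array([0.5, 2.0])

# second-order recursion vs one step of the stacked first-order system
x2_direct = A @ x1 + B @ x0
z2 = L @ np.concatenate([x1, x0])
assert np.allclose(z2[:n], x2_direct)   # top block is x_2
assert np.allclose(z2[n:], x1)          # bottom block just shifts x_1 down

# stability of the original equation = all |eigenvalues of L| < 1
assert np.max(np.abs(np.linalg.eigvals(L))) < 1.0
```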
Nonlinear matrix difference equations: Riccati equations
In linear-quadratic-Gaussian control, there arises a nonlinear matrix equation for the reverse evolution of a current-and-future-cost matrix, denoted below as H. This equation is called a discrete dynamic Riccati equation, and it arises when a variable vector evolving according to a linear matrix difference equation is controlled by manipulating an exogenous vector in order to optimize a quadratic cost function. This Riccati equation assumes the following, or a similar, form:

H_{t−1} = K + A'H_t A − A'H_t C [C'H_t C + R]^{−1} C'H_t A

where H, K, and A are n × n, C is n × k, R is k × k, n is the number of elements in the vector to be controlled, and k is the number of elements in the control vector. The parameter matrices A and C are from the linear equation, and the parameter matrices K and R are from the quadratic cost function. See here for details.
In general this equation cannot be solved analytically for in terms of ; rather, the sequence of values for is found by iterating the Riccati equation. However, it has been shown that this Riccati equation can be solved analytically if and , by reducing it to a scalar rational difference equation; moreover, for any and if the transition matrix is nonsingular then the Riccati equation can be solved analytically in terms of the eigenvalues of a matrix, although these may need to be found numerically.
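The backward iteration can be sketched directly. The recursion implemented below is H_{t−1} = K + A'H_tA − A'H_tC[C'H_tC + R]^{−1}C'H_tA, one common form of the discrete dynamic Riccati equation; the parameter matrices are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.95, 0.2],
              [0.0,  0.9]])    # state transition (n x n)
C = np.array([[0.0],
              [1.0]])          # control loading (n x k)
K = np.eye(2)                  # state cost (n x n)
R = np.array([[1.0]])          # control cost (k x k)

def riccati_step(H):
    # H_{t-1} = K + A'HA - A'HC [C'HC + R]^(-1) C'HA
    M = np.linalg.inv(C.T @ H @ C + R)
    return K + A.T @ H @ A - A.T @ H @ C @ M @ C.T @ H @ A

H = np.eye(2)
for _ in range(500):
    H_next = riccati_step(H)
    if np.allclose(H_next, H, atol=1e-12):
        break
    H = H_next

# the backward iteration has converged to a fixed (symmetric) matrix
assert np.allclose(riccati_step(H), H, atol=1e-8)
assert np.allclose(H, H.T, atol=1e-8)
```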
In most contexts the evolution of H backwards through time is stable, meaning that H converges to a particular fixed matrix which may be irrational even if all the other matrices are rational.
A related Riccati equation is

X_{t+1} = −[E + B X_t][C + A X_t]^{−1}

in which the matrices X, A, B, C, and E are all n × n. This equation can be solved explicitly. Suppose X_t = N_t D_t^{−1}, which certainly holds for t = 0 with N_0 = X_0 and D_0 = I. Then using this in the difference equation yields
so by induction the form holds for all . Then the evolution of and can be written as
Thus by induction
See also
Matrix differential equation
Difference equation
Linear difference equation
Dynamical system
Algebraic Riccati equation
References
Linear algebra
Matrices
Recurrence relations
Dynamical systems | Matrix difference equation | [
"Physics",
"Mathematics"
] | 1,200 | [
"Recurrence relations",
"Mathematical objects",
"Matrices (mathematics)",
"Mathematical relations",
"Mechanics",
"Linear algebra",
"Algebra",
"Dynamical systems"
] |
27,569,419 | https://en.wikipedia.org/wiki/Stream%20competency | In hydrology stream competency, also known as stream competence, is a measure of the maximum size of particles a stream can transport. The particles are made up of grain sizes ranging from large to small and include boulders, rocks, pebbles, sand, silt, and clay. These particles make up the bed load of the stream. Stream competence was originally simplified by the “sixth-power-law,” which states the mass of a particle that can be moved is proportional to the velocity of the river raised to the sixth power. This refers to the stream bed velocity which is difficult to measure or estimate due to the many factors that cause slight variances in stream velocities.
Stream capacity, while linked to stream competency through velocity, is the total quantity of sediment a stream can carry. Total quantity includes dissolved, suspended, saltation and bed loads.
The movement of sediment is called sediment transport. Initiation of motion involves mass, force, friction and stress. Gravity and friction are the two primary forces in play as water flows through a channel. Gravity acts upon water to move it down slope. Friction exerted on the water by the bed and banks of the channel works to slow the movement of the water. When the force of gravity is equal and opposite to the force of friction the water flows through the channel at a constant velocity. When the force of gravity is greater than the force of friction the water accelerates.
This sediment transport sorts grain sizes based on the velocity. As stream competence increases, the D50 (median grain size) of the stream also increases and can be used to estimate the magnitude of flow which would begin particle transport. Stream competence tends to decrease in the downstream direction, meaning the D50 will increase from mouth to head of the stream.
Importance of Velocity
Stream Power
Stream power is the rate of potential energy loss per unit of channel length. This potential energy is lost moving particles along the stream bed.
Ω = ρ g Q S

where Ω is the stream power, ρ is the density of water, g is the gravitational acceleration, S is the channel slope, and Q is the discharge of the stream.

The discharge of a stream, Q, is the velocity of the stream, v, multiplied by the cross-sectional area, A, of the stream channel at that point:

Q = v A

in which Q is the discharge of the stream, v is the average stream velocity, and A is the cross-sectional area of the stream.
As velocity increases, so does stream power, and a larger stream power corresponds to an increased ability to move bed load particles.
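A quick numerical illustration of Ω = ρgQS and Q = vA; the slope, velocity and cross-section are illustrative values, not from the text:

```python
rho, g = 1000.0, 9.81        # water density (kg/m^3) and gravity (m/s^2)
slope = 0.002                # channel slope (dimensionless), illustrative
v, area = 1.5, 20.0          # mean velocity (m/s) and cross-section (m^2), illustrative

Q = v * area                 # discharge (m^3/s)
omega = rho * g * Q * slope  # stream power per unit channel length (W/m)

assert Q == 30.0
assert abs(omega - 588.6) < 1e-9
```

Doubling the velocity at fixed cross-section doubles Q and hence doubles the stream power, consistent with the statement above.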
Shear Stress and Critical Shear Stress
In order for sediment transport to occur in gravel bed channels, flow strength must exceed a critical threshold, called the critical threshold of entrainment, or threshold of mobility. Flow over the surface of a channel and floodplain creates a boundary shear stress field. As discharge increases, shear stress increases above a threshold and starts the process of sediment transport. A comparison of the flow strength available during a given discharge to the critical shear strength needed to mobilize the sediment on the bed of the channel helps us predict whether or not sediment transport is likely to occur, and to some degree, the sediment size likely to move. Although sediment transport in natural rivers varies wildly, relatively simple approximations based on simple flume experiments are commonly used to predict transport. Another way to estimate stream competency is to use the following equation for critical shear stress, which is the amount of shear stress required to move a particle of a certain diameter.
τc = τ*(ρs − ρ)gd

where:
τ* = the Shields parameter, a dimensionless value which describes the resistance of the bed material to motion, also described as roughness or friction,
(ρs − ρ) = the effective density of the particle when submerged in water (Archimedes' principle), where ρs is the particle density and ρ is the density of water,
g = the gravitational acceleration,
d = the grain diameter, usually measured as d50, which is the median particle diameter when sampling particle diameters in a stream transect.
The shear stress of a stream is represented by the following equation:

τ = ρghS

where:
h = the average depth,
S = the stream slope.
If we combine the two equations, setting the bed shear stress equal to the critical shear stress, we get:

ρghS = τ*(ρs − ρ)gd

Solving for particle diameter d we get:

d = ρhS / (τ*(ρs − ρ))
The equation shows that particle diameter, d, is directly proportional to both the depth of water and the slope of the stream bed (which control flow and velocity), and inversely proportional to the Shields parameter and the effective density of the particle.
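A minimal Python sketch of the competence calculation above. The sediment density (2650 kg/m³, quartz) and Shields parameter (0.06) are common but assumed values, and the channel depth and slope are illustrative:

```python
RHO_W = 1000.0   # density of water, kg/m^3
RHO_S = 2650.0   # sediment density, kg/m^3 (assumed quartz)
G = 9.81         # gravitational acceleration, m/s^2

def bed_shear_stress(depth_m, slope):
    """tau = rho * g * h * S, in Pa."""
    return RHO_W * G * depth_m * slope

def critical_shear_stress(d_m, shields=0.06):
    """tau_c = tau* * (rho_s - rho) * g * d, in Pa."""
    return shields * (RHO_S - RHO_W) * G * d_m

def max_mobile_diameter(depth_m, slope, shields=0.06):
    """Set tau = tau_c and solve for d: d = rho * h * S / (tau* * (rho_s - rho))."""
    return RHO_W * depth_m * slope / (shields * (RHO_S - RHO_W))

tau = bed_shear_stress(1.0, 0.005)    # 49.05 Pa for a 1 m deep, 0.5% slope channel
d = max_mobile_diameter(1.0, 0.005)   # ~0.051 m, i.e. coarse gravel
print(tau, d)
```

The largest mobile grain grows with depth and slope and shrinks with the Shields parameter and effective density, matching the proportionality stated above.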
Lift
Velocity differences between the bottom and top of a particle can lead to lift. Water flows over the top of the particle but not beneath it, resulting in zero velocity at the bottom of the particle and a non-zero velocity at the top. The difference in velocities produces a pressure gradient that imparts a lifting force on the particle. If this force is greater than the particle's weight, the particle will begin to be transported.
Turbulence
Flows are characterized as either laminar or turbulent. Low-velocity and high-viscosity fluids are associated with laminar flow, while high velocity and low viscosity are associated with turbulent flow. Turbulent flows result in velocities that vary in both magnitude and direction. These erratic flows help keep particles suspended for longer periods of time. Most natural channels are considered to have turbulent flow.
Other influencing factors
Cohesion
Another important property comes into play when discussing stream competency, and that is the intrinsic quality of the material. In 1935 Filip Hjulström published his curve, which takes into account the cohesiveness of clay and some silt. This diagram illustrates stream competency as a function of velocity.
By observing the size of boulders, rocks, pebbles, sand, silt, and clay in and around streams, one can understand the forces at work shaping the landscape. Ultimately these forces are determined by the amount of precipitation, the drainage density, relief ratio and sediment parent material. They shape depth and slope of the stream, velocity and discharge, channel and floodplain, and determine the amount and kind of sediment observed. This is how the power of water moves and shapes the landscape through erosion, transport, and deposition, and it can be understood by observing stream competency.
Bedrock
Stream competence does not rely solely on velocity. The bedrock of the stream influences the stream competence. Differences in bedrock will affect the general slope and particle sizes in the channel. Stream beds that have sandstone bedrock tend to have steeper slopes and larger bed material, while shale and limestone stream beds tend to be shallower with smaller grain size. Slight variations in underlying material will affect erosion rates, cohesion, and soil composition.
Vegetation
Vegetation has a known impact on a stream's flow, but its influence is hard to isolate. A disruption in flow will result in lower velocities, leading to lower stream competence. Vegetation has a fourfold effect on stream flow: resistance to flow, bank strength, acting as a nucleus for bar sedimentation, and the construction and breaching of log-jams.
Resistance to flow
The Cowan method is commonly used for estimating Manning's n, the roughness coefficient that quantifies flow resistance, and it includes a vegetation correction factor. Even stream beds with minimal vegetation will have some flow resistance.
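As a sketch, the Cowan estimate is a sum of correction terms multiplied by a meandering factor, n = (n0 + n1 + n2 + n3 + n4) · m5. The coefficient values below are illustrative assumptions, not values taken from the published Cowan tables:

```python
def cowan_n(n0, n1=0.0, n2=0.0, n3=0.0, n4=0.0, m5=1.0):
    """Cowan estimate of Manning's n: (n0 + n1 + n2 + n3 + n4) * m5.

    n0: base value for channel material    n3: obstructions
    n1: surface irregularity               n4: vegetation
    n2: cross-section variation            m5: meandering correction factor
    """
    return (n0 + n1 + n2 + n3 + n4) * m5

# Illustrative only: earthen channel, minor irregularity, medium
# vegetation, moderate meandering.
print(cowan_n(0.020, n1=0.005, n4=0.010, m5=1.15))
```

A larger vegetation term n4 raises n, i.e. more flow resistance and lower velocity, which is the effect on stream competence described above.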
Bank strength
Vegetation growing in the stream bed and channel helps bind sediment and reduce erosion. A high root density will result in a reinforced stream channel.
Nucleus for Bar Sedimentation
Vegetation-sediment interaction. Vegetation that gets caught in the middle of a stream will disrupt flow and lead to sedimentation in the resulting low velocity eddies. As the sedimentation continues, the island grows, and flow is further impacted.
Construction and Breaching of Log-jams
Vegetation-vegetation interaction. The build-up of vegetation carried by streams eventually cuts off flow completely to side or main channels of a stream. When these channels are closed, or opened in the case of a breach, the flow characteristics of the stream are disrupted.
References
Hydrology | Stream competency | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,547 | [
"Hydrology",
"Environmental engineering"
] |
35,591,174 | https://en.wikipedia.org/wiki/Bernhard%20Schrader | Bernhard Schrader (15 March 1931 – 8 January 2012) was a German professor of theoretical and physical chemistry who taught until his retirement in 1996 at the University of Essen, where he died. Schrader was an internationally acclaimed pioneer of experimental molecular spectroscopy in Germany, especially of Raman and infrared spectroscopy and its routine application in chemical analysis. Among his numerous achievements was his historic landmark paper with Bergmann of 1967 about the first successful use of transmission Raman spectroscopy for chemical analysis of organic solids, e.g. pharmaceutical powders, which has become routine industry practice since that approach was "rediscovered" in 2006.
Life
Schrader was born in Quedlinburg and studied chemistry at Technische Universität Berlin. He completed his studies in 1960 with his dissertation, supervised by Friedrich Nerdel (whose own Ph.D. advisor had been Walter Hückel) and his assistant Günther Kresze, who later became professor of organic chemistry at the Technical University of Munich. During that time he wrote, in cooperation with his Ph.D. advisor Friedrich Nerdel, the first edition of his bestselling text book "Lehrbuch der Organischen Chemie" ("Textbook of Organic Chemistry"), which was later known as Bernhard Schrader - "Kurzes Lehrbuch der Organischen Chemie" ("Brief Textbook of Organic Chemistry" 1.-3. edition 1979-1985-2009).
In 1962 Schrader joined the "Institute for Spectrochemistry and Applied Spectroscopy" (ISAS) in Dortmund, at that time led by the physicist Heinrich Kaiser, where he built up and led the molecular spectroscopy department. In 1966 Schrader worked as a post-doc at Florida State University in Tallahassee, in the research group of Earle K. Plyler, at that time one of the leading molecular spectroscopists in the USA.
From 1971 until 1976, Schrader was Professor for Theoretical Organic Chemistry at the University of Dortmund, in 1976 he accepted a tenure as professor for physical and theoretical chemistry at the University of Essen, which he held until his retirement in 1996.
In 1981 Schrader was "visiting scientist" at IBM Research Lab in San Jose, California, in 1984/85 he was a guest professor at Weizmann Institute of Science in Rehovot, Israel.
Inheritance
Schrader published over 300 scientific papers in peer-reviewed journals and, besides the classic works listed below, two stencils for drawing stereochemical formulae in 2D and 3D and mathematical formulae. In 1975 four movies were produced with the Institute for Scientific Motion Pictures (IWF) in Göttingen, which demonstrated various types of molecular vibrations.
Schrader was supervisor of 63 doctoral theses, and besides numerous Diploma and Ph.D. candidates from all over the world he hosted five scholars of Alexander von Humboldt Foundation.
For his scientific achievements, and also for his intense personal involvement on behalf of international scientific cooperation, especially with colleagues from Eastern and Southeastern Europe and Turkey, Schrader received various awards and honors in Germany and abroad. Among others, he was an honorary member of the Turkish Chemical Society and a member of the scientific-technical class of the Norwegian Academy of Science and Letters.
Books
Kurzes Lehrbuch der Organischen Chemie, 3. Auflage 2010, De Gruyter,
Raman-IR-Atlas of Organic Compounds, 1974, Wiley-VCH,
Infrared and Raman Spectroscopy. Methods and Applications, 1995, Wiley-VCH,
Movies
published by: IWF Göttingen, 1975 ( see: filmarchives-online.eu Search for = “Bernhard Schrader” )
Vibrations of Free Molecules - 1. Stretching and Bending Vibrations in Ethylene
Vibrations of Free Molecules - 2. Vibrational Modes of the Methyl Group in Propene
Vibrations of Free Molecules - 3. Vibrational Modes of Aromatic Rings in Melamine
Oscillations of Molecules in Melamine Crystal Lattices with Hydrogen Bonds
References
Korte, H., Takahashi, H., 2003, Biography of Bernhard Schrader, Journal of Molecular Structure, Volume 661–662, pp. 1–2
In Memoriam: Bernhard Schrader (1931-2012), Applied Spectroscopy News, Applied Spectroscopy, Volume 66, Number 5, 2012, p. 143A
German organic chemists
People from Quedlinburg
Scientists from the Province of Saxony
German physical chemists
Theoretical chemists
Technische Universität Berlin alumni
1931 births
2012 deaths
Members of the Norwegian Academy of Science and Letters | Bernhard Schrader | [
"Chemistry"
] | 949 | [
"Quantum chemistry",
"Physical chemists",
"Organic chemists",
"Theoretical chemistry",
"German organic chemists",
"Theoretical chemists"
] |
35,592,512 | https://en.wikipedia.org/wiki/Integrated%20vehicle%20health%20management | Integrated vehicle health management (IVHM) or integrated system health management (ISHM) is the unified capability of systems to assess the current or future state of the member system health and integrate that picture of system health within a framework of available resources and operational demand.
Aims of IVHM
The aims of IVHM are to enable better management of vehicle and vehicle fleet health.
Improve safety through use of diagnostics and prognostics to fix faults before they are an issue.
Improve availability through better maintenance scheduling
Improve reliability through a more thorough understanding of the current health of the system and prognosis based maintenance
Reduce total cost of maintenance through reduction of unnecessary maintenance and avoidance of unscheduled maintenance
This is achieved through correct use of reliable sensing and prognosis systems to monitor part health and also using usage data to assist in understanding the load experienced and likely future vehicle load.
History
Origins
It has been suggested that IVHM as a named concept has been around since the 1970s.
However, there does not seem to be much in the way of written evidence of this. IVHM as a concept grew out of popular aviation maintenance methods. It was a natural next step from condition-based maintenance. As sensors improved and understanding of the systems concerned grew, it became possible not just to detect failure but also to predict it. The high unit cost and high maintenance cost of aircraft and spacecraft made any advance in maintenance methods very attractive.
NASA was one of the first organisations to use the name IVHM to describe how they wanted to approach maintenance of spacecraft in the future. They created NASA-CR-192656 in 1992 with the assistance of the General Research Corporation and the Orbital Technologies Corporation. This was a goals and objectives document in which they discussed the technology and maintenance concepts that they believed would be necessary to enhance safety while reducing maintenance costs in their next generation vehicles. Many companies since then have become interested in IVHM and the body of literature has increased substantially. There are now IVHM solutions for many different types of vehicle, from the JSF to commercial haulage vehicles.
First space prognostics
The first published instance of predicting spacecraft equipment failures occurred on the 12 Rockwell/U.S. Air Force Global Positioning System Block I (Phase 1) satellites, using non-repeatable transient events (NRTEs) and GPS Kalman filter data from the GPS Master Control Station, between 1978 and 1984, by the GPS Space and Ground Segment Manager. NRTEs were isolated to the GPS satellites after mission operations support personnel replayed the real-time satellite telemetry, ruling out RF and land-line noise caused by poor Eb/No or S/N and data acquisition and display system processing problems. The GPS satellites' subsystem equipment vendors diagnosed the NRTEs as systemic noise that preceded the equipment failures, because at the time it was believed that all equipment failures occurred instantaneously and randomly and so could not be predicted (i.e., equipment failures exhibited memoryless behavior). The Rockwell International GPS systems engineering manager ordered a stop to predicting GPS satellite equipment failures in 1983, claiming it was not possible and that the company was not on contract to do so. The prognostic analysis completed on the GPS satellite telemetry was published quarterly, as a contractual CDRL deliverable, to GPS Program Office personnel and a wide variety of Air Force subcontractors working on the GPS program.
Further development
One of the key milestones in the creation of IVHM for aircraft was the series of ARINC standards that enabled different manufacturers to create equipment that would work together and be able to send diagnostic data from the aircraft to the maintenance organisation on the ground. ACARS is frequently used to communicate maintenance and operational data between the flight crew and the ground crew. This has led to concepts which have been adopted in IVHM.
Another milestone was the creation of health and usage monitoring systems (HUMS) for helicopters operating in support of the oil rigs in the North Sea. This illustrates the key concept that usage data can be used to assist maintenance planning.
FOQA or Flight Data systems are similar to HUMS as they monitor the vehicle usage. They are useful for IVHM in the same way as they allow the usage of the vehicle to be thoroughly understood which aids in the design of future vehicles. It also allows excessive loads and usage to be identified and corrected. For example, if an aircraft was experiencing frequent heavy landings the maintenance schedule for the undercarriage could be changed to ensure that they are not wearing too fast under the increased load. The load carried by the aircraft could be lessened in future or operators could be given additional training to improve the quality of the landings.
The growing nature of this field led Boeing to set up an IVHM centre with Cranfield University in 2008 to act as a world-leading research hub. The IVHM centre has since then offered the world's first IVHM MSc course and hosts several PhD students researching the application of IVHM to different fields.
Philosophy
IVHM is concerned not just with the current condition of the vehicle but also with health across its whole life cycle. IVHM examines the vehicle health against the vehicle usage data and within the context of similar information for other vehicles within the fleet. In use vehicles display unique usage characteristics and also some characteristics common across the fleet. Where usage data and system health data are available these can be analysed to identify these characteristics. This is useful in the identification of problems unique to one vehicle as well as in identifying trends in vehicle degradation across the entire fleet.
IVHM is a concept for the complete maintenance life cycle of a vehicle (or machine plant installation). It makes extensive use of embedded sensors and self-monitoring equipment combined with prognostics and diagnostic reasoning. In the case of vehicles it is typical for there to be a data acquisition module on-board and a diagnostic unit. Some vehicles can transfer selected data back to base while in use through various RF systems. Whenever the vehicle is at base the data is also transferred to a set of maintenance computers that process that data for a deeper understanding of the true health of the vehicle. The usage of the vehicle can also be matched to the degradation of parts to improve the prognostic prediction accuracy.
The remaining useful life is used to plan replacement or repair of the part at some convenient time prior to failure. The inconvenience of taking the vehicle out of service is balanced against the cost of unscheduled maintenance to ensure that the part is replaced at the optimum point prior to failure. This process has been compared to the process of choosing when to buy financial options as the cost of scheduled maintenance must be balanced against the risk of failure and the cost of unscheduled maintenance.
This differs from condition-based maintenance (CBM), where the part is replaced once it has failed or once a threshold is passed. This often involves taking the vehicle out of service at an inconvenient time when it could be generating revenue. It is preferable to use an IVHM approach to replace the part at the most convenient time. This allows the reduction in wasted component life caused by replacing the part too early and also reduces the cost incurred by unscheduled maintenance. This is possible due to the increased prognostic distance provided by an IVHM solution. There are many technologies that are used in IVHM. The field itself is still growing and many techniques are still being added to the body of knowledge.
Architecture
Health monitoring sensors are designed into the vehicle and report to a data processing unit. Some of the data may be manipulated on board for immediate system diagnosis and prognosis. Less time critical data is processed off board. All the historical data for the vehicle can be compared with current performance to identify degradation trends at a more detailed level than could be done on board the vehicle. This is all used to improve reliability and availability and the data is also fed back to the manufacturer for them to improve their product.
A standard architecture for IVHM has been proposed as the OSA-CBM standard which gives a structure for data gathering, analysis and action. This is intended to facilitate interoperability between IVHM systems of different suppliers.
The key parts within OSA-CBM are
Data acquisition (DA)
Data manipulation (DM)
State detection (SD)
Health assessment (HA)
Prognosis assessment (PA)
Advisory generation (AG)
These are laid out within ISO 13374
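A toy Python sketch of how the six OSA-CBM layers named above might chain together. The sensor values, thresholds, and health logic here are invented for illustration and are not part of OSA-CBM or ISO 13374:

```python
# Each function stands in for one OSA-CBM layer, chained DA -> AG.

def data_acquisition():            # DA: raw sensor sample (assumed vibration, g)
    return {"vibration": 1.8}

def data_manipulation(raw):        # DM: filtering / feature extraction
    return {"vibration_rms": raw["vibration"] * 0.9}

def state_detection(features):     # SD: compare against an alert threshold
    return features["vibration_rms"] > 1.5

def health_assessment(alert):      # HA: map detections to a health grade
    return "degraded" if alert else "healthy"

def prognosis_assessment(health):  # PA: crude remaining-useful-life estimate
    return 120 if health == "degraded" else 1000  # flight hours (assumed)

def advisory_generation(rul):      # AG: maintenance recommendation
    return f"schedule maintenance within {rul} h" if rul < 500 else "no action"

rul = prognosis_assessment(health_assessment(state_detection(
    data_manipulation(data_acquisition()))))
print(advisory_generation(rul))  # schedule maintenance within 120 h
```

The point of the layered standard is that each stage has a defined interface, so modules from different suppliers can be swapped in at any layer.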
The system is not intended to replace safety critical warnings such as an aircraft's flight management system but instead to complement them and perhaps also leverage existing sensors for assistance with system health monitoring. Ideal systems to monitor are those systems, subsystems and structural elements which are likely to show graceful degradation so that they can be repaired or replaced at a convenient time prior to failure. This gives a saving over condition-based maintenance as, once a part has failed, a vehicle often cannot be used until it is repaired. This often results in scheduling difficulties if the vehicle fails when it was needed for revenue generation and cannot be used. In contrast, IVHM can be used to replace the part during vehicle downtime prior to failure. This ensures that it can continue to generate revenue as scheduled.
Communications between the vehicle and the maintenance organisation are crucial to fixing faults in a timely manner. The balance of how much data should be sent to the maintainer during use and how much should be downloaded while in maintenance is one that must be judged carefully. One example of this is what is known as fault forwarding. When an aircraft experiences a fault, the flight management system reports it to the flight crew but also sends a message through ACARS to the maintenance team so that they can start their maintenance planning before the aircraft has landed. This yields a time advantage as they know some of the parts and personnel required to fix the fault before the aircraft has landed. However, the communication link does cost money and has a limited bandwidth, so the worth of this health and usage data must be judged carefully, with consideration given as to whether it should be transmitted or merely downloaded during the next maintenance or as part of the operator shutdown process.
References
See also
Digital twin
health and usage monitoring systems
Maintenance
Logistics | Integrated vehicle health management | [
"Engineering"
] | 2,037 | [
"Maintenance",
"Mechanical engineering"
] |
35,597,206 | https://en.wikipedia.org/wiki/Mesoionic%20carbene | In chemistry, mesoionic carbenes (MICs) are a type of reactive intermediate that are related to N-heterocyclic carbenes (NHCs); thus, MICs are also referred to as abnormal N-heterocyclic carbenes (aNHCs) or remote N-heterocyclic carbenes (rNHCs). Unlike simple NHCs, the canonical resonance structures of these carbenes are mesoionic: an MIC cannot be drawn without adding additional charges to some of the atoms.
A variety of free carbenes can be isolated and are stable at room temperature. Other free carbenes are not stable and are susceptible to intermolecular decomposition pathways. MICs do not dimerize according to Wanzlick equilibrium as do normal NHCs. This results in relaxed steric requirements for mesoionic carbenes as compared to NHCs.
There are several mesoionic carbenes that cannot be generated as free compounds, but can be synthesized as a ligand in a transition metal complex. Most MIC transition metal complexes are less sensitive to air and moisture than phosphine or normal NHC complexes. They are also resistant to oxidation. The robust nature of MIC complexes is due to the ligand’s strong σ-donating ability. They are stronger σ-donors than phosphines, as well as normal N-heterocyclic carbenes due to decreased heteroatom stabilization. The strength of carbene ligands is attributed to the electropositive carbon center that forms strong bonds of a covalent nature with the metal. They have been shown to lower the frequency of CO stretching vibrations in metal complexes and exhibit large trans effects.
Classes
Imidazolin-4-ylidenes
The most studied mesoionic carbenes are based on imidazole and are referred to as imidazolin-4-ylidenes. These complexes were first reported by Crabtree in 2001. The formation of imidazolin-4-ylidenes (MIC) instead of imidazolin-2-ylidenes (NHC) is typically a matter of blocking the C2 position. Most imidazolin-4-ylidenes are trisubstituted in the N1, C2, and N3 positions or tetrasubstituted. Electron-withdrawing groups in the N3 and C5 positions stabilize the carbenes more than electron-donating groups. Free carbenes as well as numerous transition metal complexes have been synthesized.
1,2,3-triazolylidenes
Also well studied are the mesoionic carbenes based on 1,2,3-triazole, referred to as 1,2,3-triazol-4(or 5)-ylidenes. The first triazolylidenes were reported by Albrecht in 2008. These carbenes are typically trisubstituted with alkyl groups in the N1 and N3 positions and an aryl group in the C4 or C5 position. Free carbenes as well as numerous transition metal complexes have been reported. Free carbenes that are alkylated at N3 tend to undergo decomposition reactions in which the alkyl group participates in a nucleophilic attack at the carbene position. If N3 is substituted with a bulky alkyl group or an aryl group, the stability of the carbene increases significantly.
Pyrazolinylidenes
The first mesoionic carbenes based on pyrazole were reported by Huynh in 2007. These carbenes are referred to as pyrazolin-3(or 4)-ylidenes. Pyrazolin-4-ylidenes are often tetrasubstituted with alkyl or aryl groups; however, the C3 and C5 positions can be substituted with nitrogen- or oxygen-based groups. The electronic properties of the groups in the C3 and C5 positions affect the overall electronic properties of the ligand and influence catalytic activity. Free carbenes have been produced as well as transition metal complexes.
Others
Examples of tetrazol-5-ylidenes based on tetrazole have been prepared by Araki. The N1 and N3 positions are substituted with alkyl or aryl groups. Transition metal complexes of these carbenes have been generated in situ. Mesoionic carbenes based on isoxazole and thiazole have been reported by Albrecht and Bertrand respectively. The isoxazol-4-ylidenes are trisubstituted in the N2, C3, and C5 positions with alkyl groups. The thiazol-5-ylidenes are trisubstituted in the C2, N3, and C4 positions with aryl groups. Transition metal complexes of both types of carbenes have been generated in situ. Bertrand also reported a 1,3-dithiol-5-ylidene based on 1,3-dithiolane, but it can only be isolated as a transition metal complex.
Synthesis of free carbenes
Many free mesoionic carbenes are synthesized from their protonated salt form by deprotonation using strong potassium bases, such as potassium bis(trimethylsilyl)amide (KHMDS) or potassium tert-butoxide (KOt-Bu). Potassium bases are used because they do not form stable carbene-alkali metal adducts.
Imidazolin-4-ylidenes (MIC) would form rather than imidazolin-2-ylidenes (NHC) due to blocking the C2 position. The C2 carbenes are thermodynamically more stable than their C4 counterparts due to resonance and inductive carbon-nitrogen interactions. Also, calculations show that the C4 hydrogen is less acidic than the C2 hydrogen of imidazole. This data suggests that the C2 position should be activated preferentially to the C4 position unless the C2 position is blocked. Aryl and bulky alkyl groups (such as isopropyl) are good at blocking the C2 position from being activated.
Carbene metal complexes
Many mesoionic carbenes may not be able to be isolated as a free carbene; however, these MICs can be generated as a ligand for transition metal complexes. Numerous mesoionic carbene transition metal complexes are known with metals including Fe, Os, Rh, Ir, Ni, Pd, Pt, Cu, and Ag. Metal complexes with Sm and Y are also known. MIC complexes are formed by a variety of mechanisms.
Mesoionic carbenes may be generated in situ with addition of a strong base to their salt forms. The carbenes immediately form complexes with metals present in the reaction mixture through ligand exchange.
Direct metalation through C-H bond activation or C-H oxidative addition is one method often utilized. Activation of a C‒H bond leads to oxidative addition of the carbene ligand to the metal center. Typically, direct metalation requires the blockage of sites that would lead to normal NHC complexes — phenyl and isopropyl groups are good blocking substituents, as discussed earlier. Smaller substituents may be cleaved. Direct metalation by silver(I) with imidazolium salts can cause cleavage at the C2 position if methyl is used as the blocking group. The result is formation of normal NHC carbenes. n-alkyl and benzyl groups may undergo the same fate as the methyl group. Steric bulk may also influence the formation of MIC complexes over NHC complexes. For imidazolium salts, the C2 position may not need to be blocked if the nitrogen substituents (N1 or N3) are sterically-demanding. Interactions between the nitrogen substituents and the metal center prevent normal NHC complexes from forming. If the carbene is part of a bidentate ligand with a forced geometry, the MIC complex may form preferentially as well. The counteranion of imidazolium salts participates in NHC vs. MIC formation. NHC formation typically occurs by heterolytic bond cleavage, so small, coordinating anions favor this pathway. MIC formation typically occurs by an oxidative addition pathway, so non-coordinating and apolar anions are preferred, such as BF4− or SbF6−. Other techniques focus on the activation of the desired carbon rather than blocking undesired carbons. A carbon may be activated by a halogen. A C-X bond (X = halide) is more favorable for activation than a C-H bond. This pathway results in the oxidative addition of the MIC carbene halide to a low valent metal center.
Transmetalation is another method commonly utilized. Typically, a silver carbene complex is produced by direct metalation. This silver complex is reacted via transmetalation with a salt of the desired metal. The metal MIC complex is produced and silver salts generally precipitate.
Applications in catalysis
Since mesoionic carbene ligands are very strong σ-donors and make it easier for a metal center to undergo oxidative addition, MIC ligands have the potential to be useful in catalysis. MIC transition metal complexes have been tested as catalysts in olefin metathesis, ring closure metathesis, and ring opening polymerization metathesis. The MIC complexes work very well, and in many cases, they outperform their NHC counterparts. MIC complexes have been successful as catalysts for Suzuki-Miyaura and Heck-Mizoroki cross-coupling reactions. Again, in many cases, MIC catalysts are superior to their NHC counterparts. For example, in olefin metathesis, MIC catalysts are active at room temperature after simple addition of a Brønsted acid, such as hydrochloric acid or trifluoroacetic acid, compared to the large amount of thermal activation required for NHC catalysts. MIC complexes have found use as catalysts in olefin hydrogenation. They have been shown to hydrogenate terminal and cis-alkenes. They work better than their NHC counterparts due to the MIC ligand's stronger electron-donating properties. They are better able to provide electron density to promote hydrogen gas oxidative addition to the metal. MIC complexes have been used in transfer hydrogenation reactions. For example, they have been used to hydrogenate a diaryl ketone using isopropanol as a hydrogen source. MIC complexes are being considered as green chemistry catalysts. They act as catalysts for base- and oxidant-free oxidation of alcohols and amines. Some complexes have also been shown to synthesize certain aryl amides. Other MIC complexes have been used in hydroarylation, involving the addition of an electron-rich aryl group and a hydrogen across a multiple bond. The reactions that mesoionic carbene complexes catalyze will continue to expand as more research is done.
References
Carbenes
Organometallic chemistry
Ligands | Mesoionic carbene | [
"Chemistry"
] | 2,343 | [
"Ligands",
"Inorganic compounds",
"Coordination chemistry",
"Organic compounds",
"Carbenes",
"Organometallic chemistry"
] |
28,844,729 | https://en.wikipedia.org/wiki/Plague%20doctor%20costume | The clothing worn by plague doctors was intended to protect them from airborne diseases during outbreaks of bubonic plague in Europe. It is often seen as a symbol of death and disease. Contrary to popular belief, no evidence suggests that the beak mask costume was worn during the Black Death or the Middle Ages. The costume started to appear in the 17th century when physicians studied and treated plague patients.
Description
The costume consists of a leather hat, mask with glass eyes and a beak, stick to remove clothes of a plague victim, gloves, waxed linen robe, and boots.
The typical mask had glass openings for the eyes and a curved beak, shaped like that of a bird, with straps that held the beak in front of the doctor's nose. The mask had two small nose holes and was a type of respirator that contained aromatic items. The beak could hold dried flowers (commonly roses and carnations), herbs (commonly lavender and peppermint), camphor, or a vinegar sponge, as well as juniper berry, ambergris, cloves, labdanum, myrrh, and storax. The purpose of the mask was to keep away bad smells, such as the smell of decaying bodies. The smell taken with the most caution was known as miasma, a noxious form of "bad air". This was thought to be the principal cause of the disease. Doctors believed the herbs would counter the "evil" smells of the plague and prevent them from becoming infected. Though these particular theories about the plague's nature were incorrect, it is likely that the costume actually did afford the wearer some protection. The garments covered the body, shielding against splattered blood, lymph, and cough droplets, and the waxed robe prevented fleas (the true carriers of the plague) from touching the body or clinging to the linen.
The wide-brimmed leather hat indicated their profession. Doctors used wooden canes in order to point out areas needing attention and to examine patients without touching them. The canes were also used to keep people away and to remove clothing from plague victims.
History
The exact origins of the costume are unclear, as most depictions come from satirical writings and political cartoons. An early reference to plague doctors wearing masks dates to 1373, when Johannes Jacobi recommended their use, though he offered no physical description of what these masks looked like. The beaked plague doctor inspired costumes in Italian theater as a symbol of general horror and death, though some historians insist that the plague doctor was originally fictional and inspired the real plague doctors later. Depictions of the beaked plague doctor rose in response to superstition and fear about the unknown source of the plague. Often, these plague doctors were the last thing a patient would see before death; therefore, the doctors were seen as a foreboding of death.
The garments were first mentioned by a physician to King Louis XIII of France, Charles de Lorme, who wrote that during a 1619 plague outbreak in Paris he developed an outfit made of Moroccan goat leather, including boots, breeches, a long coat, hat, and gloves modeled after a soldier's canvas gown that went from the neck to the ankle. The garment was impregnated with the same fragrant items as the mask. De Lorme wrote that the mask had a "nose half a foot long, shaped like a beak, filled with perfume with only two holes, one on each side near the nostrils, but that can suffice to breathe and to carry along with the air one breathes the impression of the drugs enclosed further along in the beak." Recent research has revealed that strong caveats must be applied with regard to De Lorme's assertions, however.
The Genevan physician, Jean-Jacques Manget, in his 1721 work Treatise on the Plague written just after the Great Plague of Marseille, describes the costume worn by plague doctors at Nijmegen in 1636–1637. The costume forms the frontispiece of Manget's 1721 work. Their robes, leggings, hats, and gloves were also made of Morocco leather. This costume was also worn by plague doctors during the Naples Plague of 1656, which killed 145,000 people in Rome and 300,000 in Naples. In his work , published at Toulouse in May 1629, Irish physician Niall Ó Glacáin references the protective clothing worn by plague doctors, which included leather coats, gauntlets and long beak-like masks filled with fumigants.
Carnival
The costume is also associated with a character called ('The Plague Doctor'), who wears a distinctive plague doctor's mask. The Venetian mask was normally white, consisting of a hollow beak and round eye-holes covered with clear glass, and is one of the distinctive masks worn during the Carnival of Venice.
COVID-19
During the COVID-19 pandemic beginning in 2020, the plague doctor costume grew in popularity due to its relevance to the pandemic, with news reports of plague doctor-costumed individuals in public places and photos of people wearing plague doctor costumes appearing in social media.
See also
References
Footnotes
Works cited
Bauer, S. Wise, The Story of the World Activity Book Two: The Middle Ages : From the Fall of Rome to the Rise of the Renaissance, Peace Hill Press, 2003,
Boeckl, Christine M., Images of plague and pestilence: iconography and iconology, Truman State Univ Press, 2000,
Byfield, Ted, Renaissance: God in Man, A.D. 1300 to 1500: But Amid Its Splendors, Night Falls on Medieval Christianity, Christian History Project, 2010,
Byrne, Joseph Patrick, Encyclopedia of Pestilence, Pandemics, and Plagues, ABC-CLIO, 2008,
Carmichael, Ann G., "SARS and Plagues Past", in SARS in Context: Memory, history, policy, ed. by Jacalyn Duffin and Arthur Sweetman McGill-Queen's University Press, 2006,
Center for Advanced Study in Theatre Arts, Western European stages, Volume 14, CASTA, 2002,
Dolan, Josephine, Goodnow's History of Nursing, W. B. Saunders 1963 (Philadelphia and London), ,
Ellis, Oliver Coligny de Champfleur, A History of Fire and Flame, London: Simkin, Marshall, 1932; repr. Kessinger, 2004,
Goodnow, Minnie, Goodnow's history of nursing, W.B. Saunders Co., 1968, OCLC Number: 7085173
Glaser, Gabrielle, The Nose: A Profile of Sex, Beauty, and Survival, Simon & Schuster, 2003,
Grolier Incorporated, The Encyclopedia Americana, Volume 8; Volume 24, Grolier Incorporated, 1998,
Hall, Manly Palmer, Horizon, Philosophical Research Society, Inc., 1949
Hirst, Leonard Fabian, The conquest of plague: a study of the evolution of epidemiology, Clarendon Press, 1953,
Infectious Diseases Society of America, Reviews of Infectious Diseases, Volume 11, University of Chicago Press, 1989
Kenda, Barbara, Aeolian winds and the spirit in Renaissance architecture: Academia Eolia revisited, Taylor & Francis, 2006,
Killinger, Charles L., Culture and customs of Italy, Greenwood Publishing Group, 2005,
Nohl, Johannes, The Black Death: A Chronicle of the Plague, J. & J. Harper Edition 1969, ,
Manget, Jean-Jacques, Traité de la peste recueilli des meilleurs auteurs anciens et modernes, Geneva, 1721, online as PDF, 28Mb download
Martin, Sean, The Black Death, Book Sales, 2009,
Mentzel, Peter, A traveller's history of Venice, Interlink Books, 2006,
O'Donnell, Terence, History of life insurance in its formative years, American Conservation Company, 1936
Paton, Alex, "Cover image", QJM: An International Journal of Medicine, 100.4, 4 April 2007. (A commentary on the issue's cover photograph of The Posy Tree, Mapperton, Dorset.)
Pommerville, Jeffrey, Alcamo's Fundamentals of Microbiology: Body Systems, Jones & Bartlett Learning, 2009,
Pommerville, Jeffrey, Alcamo's Fundamentals of Microbiology, Jones & Bartlett Learning, 2010,
Reynolds, Richard C., On doctor[i]ng: stories, poems, essays, Simon & Schuster, 2001,
Sandler, Merton, Wine: a scientific exploration, CRC Press, 2003,
Sherman, Irwin W., The power of plagues, Wiley-Blackwell, 2006,
Stuart, David C., Dangerous garden: the quest for plants to change our lives, frances lincoln ltd, 2004,
Timbs, John, The Mirror of literature, amusement, and instruction, Volume 37, J. Limbird, 1841
Time-Life Books, What life was like in the age of chivalry: medieval Europe, AD 800-1500, 1997
Turner, Jack, Spice: The History of a Temptation, Random House, 2005,
Walker, Kenneth, The story of medicine, Oxford University Press, 1955
External links
Debunking Popular Misconceptions about Plague Doctor Costumes and How They Were Used
Doctor Schnabel's Plague Museum
zh-yue:瘟疫醫生
Medical equipment
Medieval European costume
Costume
Gas masks
1630s introductions
Safety clothing
Carnival costumes | Plague doctor costume | [
"Chemistry",
"Biology"
] | 1,941 | [
"Gas masks",
"Medical equipment",
"Medical technology"
] |
21,333,190 | https://en.wikipedia.org/wiki/Bogoliubov%20inner%20product | The Bogoliubov inner product (also known as the Duhamel two-point function, Bogolyubov inner product, Bogoliubov scalar product, or Kubo–Mori–Bogoliubov inner product) is a special inner product in the space of operators. The Bogoliubov inner product appears in quantum statistical mechanics and is named after theoretical physicist Nikolay Bogoliubov.
Definition
Let A be a self-adjoint operator. The Bogoliubov inner product of any two operators X and Y is defined as
⟨X, Y⟩ = ∫₀¹ Tr[e^{sA} X† e^{(1−s)A} Y] ds / Tr[e^A].
The Bogoliubov inner product satisfies all the axioms of the inner product: it is sesquilinear, positive semidefinite (i.e., ⟨X, X⟩ ≥ 0), and satisfies the symmetry property ⟨X, Y⟩ = ⟨Y, X⟩*, where ⟨Y, X⟩* is the complex conjugate of ⟨Y, X⟩.
In applications to quantum statistical mechanics, the operator A has the form A = −βH, where H is the Hamiltonian of the quantum system and β is the inverse temperature. With these notations, the Bogoliubov inner product takes the form
⟨X, Y⟩ = ∫₀¹ ⟨e^{xβH} X† e^{−xβH} Y⟩ dx,
where ⟨⋯⟩ denotes the thermal average with respect to the Hamiltonian H and inverse temperature β.
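As a concrete finite-dimensional check, the defining integral can be evaluated numerically. The sketch below assumes the normalized definition ⟨X, Y⟩ = ∫₀¹ Tr[e^{sA} X† e^{(1−s)A} Y] ds / Tr e^A; the function name, matrix size, and quadrature step are illustrative choices, not part of the source:

```python
import numpy as np

def bogoliubov_inner(A, X, Y, n=2000):
    """Evaluate <X, Y>_A = (1/Tr e^A) * integral_0^1 Tr[e^{sA} X† e^{(1-s)A} Y] ds
    by trapezoidal quadrature, working in the eigenbasis of the Hermitian
    matrix A (a finite-dimensional stand-in for the abstract operator)."""
    w, U = np.linalg.eigh(A)                    # A = U diag(w) U†
    Xb = U.conj().T @ X @ U                     # operators in the eigenbasis of A
    Yb = U.conj().T @ Y @ U
    s = np.linspace(0.0, 1.0, n + 1)
    f = np.array([np.trace((np.exp(si * w)[:, None] * Xb.conj().T)
                           @ (np.exp((1.0 - si) * w)[:, None] * Yb))
                  for si in s])                 # integrand Tr[e^{sA} X† e^{(1-s)A} Y]
    integral = np.sum((f[:-1] + f[1:]) / 2) * (s[1] - s[0])   # trapezoid rule
    return integral / np.exp(w).sum()           # normalize by Tr e^A
```

With this normalization ⟨I, I⟩ = 1, and the positivity and conjugate-symmetry axioms quoted above can be verified numerically for a random Hermitian A.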
In quantum statistical mechanics, the Bogoliubov inner product appears as the second order term in the expansion of the statistical sum: for a self-adjoint perturbation X,
Tr e^{A + λX} = Tr e^A · (1 + λ ⟨I, X⟩ + (λ²/2) ⟨X, X⟩ + O(λ³)).
References
Quantum mechanics
Statistical mechanics | Bogoliubov inner product | [
"Physics"
] | 259 | [
"Statistical mechanics",
"Theoretical physics",
"Quantum mechanics"
] |
21,333,258 | https://en.wikipedia.org/wiki/Bogoliubov%E2%80%93Parasyuk%20theorem | The Bogoliubov–Parasyuk theorem in quantum field theory states that renormalized Green's functions and matrix elements of the scattering matrix (S-matrix) are free of ultraviolet divergencies. Green's functions and scattering matrix are the fundamental objects in quantum field theory which determine basic physically measurable quantities. Formal expressions for Green's functions and S-matrix in any physical quantum field theory contain divergent integrals (i.e., integrals which take infinite values) and therefore formally these expressions are meaningless. The renormalization procedure is a specific procedure to make these divergent integrals finite and obtain (and predict) finite values for physically measurable quantities. The Bogoliubov–Parasyuk theorem states that for a wide class of quantum field theories, called renormalizable field theories, these divergent integrals can be made finite in a regular way using a finite (and small) set of certain elementary subtractions of divergencies.
The theorem guarantees that Green's functions and matrix elements of the scattering matrix, computed within the perturbation expansion, are finite for any renormalized quantum field theory. The theorem specifies a concrete procedure (the Bogoliubov–Parasyuk R-operation) for subtraction of divergences in any order of perturbation theory, establishes correctness of this procedure, and guarantees the uniqueness of the obtained results.
The theorem was proved by Nikolay Bogoliubov and Ostap Parasyuk in 1955. The proof of the Bogoliubov–Parasyuk theorem was simplified later.
See also
Renormalization
Krylov-Bogolyubov theorem on the existence of invariant measures in dynamics.
References
O. I. Zav'yalov (1994). "Bogolyubov's R-operation and the Bogolyubov–Parasyuk theorem", Russian Math. Surveys, 49(5): 67—76 (in English).
D. V. Shirkov (1994): "The Bogoliubov renormalization group", Russian Math. Surveys 49(5): 155—176.
Quantum field theory
Theorems in quantum mechanics | Bogoliubov–Parasyuk theorem | [
"Physics",
"Mathematics"
] | 453 | [
"Theorems in quantum mechanics",
"Quantum field theory",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
21,333,387 | https://en.wikipedia.org/wiki/List%20of%20mass%20spectrometry%20acronyms | This is a compilation of initialisms and acronyms commonly used in mass spectrometry.
A
ADI – Ambient desorption ionization
AE – Appearance energy
AFADESI – Air flow-assisted desorption electrospray ionization
AFAI – Air flow-assisted ionization
AFAPA – Aerosol flowing atmospheric-pressure afterglow
AGHIS – All-glass heated inlet system
AIRLAB – Ambient infrared laser ablation
AMS – Accelerator mass spectrometry
AMS – Aerosol mass spectrometer
AMU – Atomic mass unit
AP – Appearance potential
AP MALDI – Atmospheric pressure matrix-assisted laser desorption/ionization
APCI – Atmospheric pressure chemical ionization
API – Atmospheric pressure ionization
APPI – Atmospheric pressure photoionization
ASAP – Atmospheric Sample Analysis Probe
ASMS – American Society for Mass Spectrometry
B
BP – Base peak
BIRD – Blackbody infrared radiative dissociation
C
CRF – Charge remote fragmentation
CSR – Charge stripping reaction
CI – Chemical ionization
CA – Collisional activation
CAD – Collisionally activated dissociation
CID – Collision-induced dissociation
CRM – Consecutive reaction monitoring
CF-FAB – Continuous flow fast atom bombardment
CRIMS – Chemical reaction interface mass spectrometry
CTD – Charge transfer dissociation
D
DE – Delayed extraction
DADI – Direct analysis of daughter ions
DAPPI – Desorption atmospheric pressure photoionization
DEP – Direct exposure probe
DESI – Desorption electrospray ionization
DIOS – Desorption/ionization on silicon
DIP – Direct insertion probe
DART – Direct analysis in real time
DLI – Direct liquid introduction
DIA – Data independent acquisition
E
EA – Electron affinity
EAD – Electron-activated dissociation
ECD – Electron-capture dissociation
ECI – Electron capture ionization
EDD – Electron-detachment dissociation
EI – Electron ionization (or electron impact)
EJMS – European Journal of Mass Spectrometry
ESA – Electrostatic energy analyzer
ES/ESI – Electrospray ionisation
ETD – Electron-transfer dissociation
eV – Electronvolt
F
FAIMS – High-field asymmetric waveform ion mobility spectrometry
FAB – Fast atom bombardment
FIB – Fast ion bombardment
FD – Field desorption
FFR – Field-free region
FI – Field ionization
FT-ICR MS – Fourier transform ion cyclotron resonance mass spectrometer
FTMS – Fourier transform mass spectrometer
G
GDMS – Glow discharge mass spectrometry
H
HDX – Hydrogen/deuterium exchange
HCD – Higher-energy C-trap dissociation
I
ICAT – Isotope-coded affinity tag
ICP – Inductively coupled plasma
ICRMS – Ion cyclotron resonance mass spectrometer
IDMS – Isotope dilution mass spectrometry
IJMS – International Journal of Mass Spectrometry
IRMPD – Infrared multiphoton dissociation
IKES – Ion kinetic energy spectrometry
IMS – Ion mobility spectrometry
IMSC – International Mass Spectrometry Conference
IMSF – International Mass Spectrometry Foundation
IRMS – Isotope ratio mass spectrometry
IT – Ion trap
ITMS – Ion trap mass spectrometry
ITMS – Ion trap mobility spectrometry
iTRAQ – Isobaric tag for relative and absolute quantitation
J
JASMS – Journal of the American Society for Mass Spectrometry
JEOL – Japan Electro-Optics Laboratory
JMS – Journal of Mass Spectrometry
K
KER – Kinetic energy release
KERD – Kinetic energy release distribution
L
LCMS – Liquid chromatography–mass spectrometry
LD – Laser desorption
LDI – Laser desorption ionization
LI – Laser ionization
LMMS – Laser microprobe mass spectrometry
LIT – Linear ion trap
LSI – Liquid secondary ionization
LSII – Laserspray ionization inlet
M
MIKES – Mass-analyzed ion kinetic energy spectrometry
MS – Mass spectrometer
MS – Mass spectrometry
MS2 – Mass spectrometry/mass spectrometry, i.e. tandem mass spectrometry
MS/MS – Mass spectrometry/mass spectrometry, i.e. tandem mass spectrometry
MALDESI – Matrix-assisted laser desorption electrospray ionization
MALDI – Matrix-assisted laser desorption/ionization
MAII – Matrix-assisted inlet ionization
MAIV – Matrix-assisted ionization vacuum
MIMS – Membrane introduction mass spectrometry, membrane inlet mass spectrometry, membrane interface mass spectrometry
MCP – Microchannel plate
MSn – Multiple-stage mass spectrometry
MPI – Multiphoton ionization
MRM – Multiple reaction monitoring
N
NEMS-MS – Nanoelectromechanical systems mass spectrometry
NETD – Negative electron-transfer dissociation
NICI – Negative ion chemical ionization
NRMS – Neutralization reionization mass spectrometry
O
oa-TOF – Orthogonal acceleration time of flight
OMS – Organic Mass Spectrometry (journal)
P
PDI – Plasma desorption/ionization
PDMS – Plasma desorption mass spectrometry
PAD – Post-acceleration detector
PSD – Post-source decay
PyMS – Pyrolysis mass spectrometry
Q
QUISTOR – Quadrupole ion storage trap
QIT – Quadrupole ion trap
QMS – Quadrupole mass spectrometer
QTOF – Quadrupole time of flight
R
RCM – Rapid Communications in Mass Spectrometry
REIMS – Rapid evaporative ionization mass spectrometry
REMPI – Resonance enhanced multiphoton ionization
RGA – Residual gas analyzer
RI – Resonance ionization
S
SAII – Solvent-assisted ionization inlet
SELDI – Surface-enhanced laser desorption/ionization
SESI – Secondary electrospray ionization
SHRIMP – Sensitive high-resolution ion microprobe
SIFT – Selected ion flow tube
SILAC – Stable isotope labelling by amino acids in cell culture
SIM – Selected ion monitoring
SIMS – Secondary ion mass spectrometry
SIR – Selected ion recording
SNMS – Secondary neutral mass spectrometry
SRM – Selected reaction monitoring
SWIFT – Stored waveform inverse Fourier transform
SID – Surface-induced dissociation
SIR – Surface-induced reaction
SI – Surface ionization
SORI – Sustained off-resonance irradiation
T
TI – Thermal ionization
TIC – Total ion current
TICC – Total ion current chromatogram
TLF – Time-lag focusing
TMT – Tandem mass tags
TOF-MS – Time-of-flight mass spectrometer
V
VG – Vacuum Generators (company)
References
External links
Mass Spectroscopy Acronym Page at MIT
Mass spectrometry
Mass spectrometry | List of mass spectrometry acronyms | [
"Physics",
"Chemistry"
] | 1,425 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
21,333,500 | https://en.wikipedia.org/wiki/Electron%20capture%20ionization | Electron capture ionization is the ionization of a gas phase atom or molecule by attachment of an electron to create an ion of the form A^-. The reaction is
A + e^- ->[M]A^-
where the M over the arrow denotes that to conserve energy and momentum a third body is required (the molecularity of the reaction is three).
Electron capture can be used in conjunction with chemical ionization.
Electron-capture mass spectrometry
Electron-capture mass spectrometry (EC-MS) is a type of mass spectrometry that uses electron capture ionization to form negative ions from chemical compounds with positive electron affinities. The approach is particularly effective for electrophiles. In contrast to electron ionization, EC-MS uses low energy electrons in a gas discharge. EC-MS will cause less fragmentation of molecules compared to electron ionization.
Negative ion formation
Resonance electron capture
Resonance electron capture is also known as nondissociative EC. The compound captures an electron to form a radical anion. The energy of the electrons is about 0 eV. The electrons can be created in the electron ionization source with a moderating gas such as H2, CH4, i-C4H10, NH3, N2, or Ar. After the molecule captures the electron, the complex formed can stabilize during collisions and produce a stable anion that can be detected in a mass spectrometer.
AB + e− → AB−•
Dissociative resonance capture
In dissociative resonance capture, the compound fragments, resulting in electron capture dissociation (ECD). ECD forms an anion fragment and a radical fragment. The energy of the electrons ranges from 0 to 15 eV, but the optimum energy can vary depending on the compound.
AB + e− → A− + B•
Ion-pair formation
With electrons of energy greater than 10 eV, negative ions can also be formed through ion-pair formation.
AB + e− → A− + B+ + e−
Calibration of the mass spectrometer is important in electron capture ionization mode. A calibration compound is needed to ensure reproducibility in EC-MS. It is used to ensure that the mass scale used is correct and that the groups of ions are constant on a regular basis.
Fragmentation in ECI has been studied by tandem mass spectrometry.
The technique can be used with gas chromatography-mass spectrometry.
Electron capture detector
An electron capture detector most often uses a radioactive source to generate electrons used for ionization. Some examples of radioactive isotopes used are 3H, 63Ni, 85Kr, and 90Sr. The gas in the detector chamber is ionized by the radiation particles. Nitrogen, argon and helium are common carrier gases used in the ECD. Argon and helium need to be combined with another gas, such as methane, in order to prevent immediate conversion into metastable ions. The combination will extend the lifetime of the metastable ions (10⁻⁶ seconds). The methane will cool the electrons during the collisions. The addition of methane will enhance the ability to form negative ions under high pressure because it will adjust the thermal energy to be similar to the energy distribution of the ions. Methane is the most common gas used because it can produce many positive ions when it collides with electrons. These positive ions will then form low energy electrons used for ionization:
2CH4 + 2e− → CH4+ + CH3+ + H + 2e− (secondary) + 2e− (primary)
An ECD is used in some gas chromatography systems.
Applications
EC-MS (Electron-capture mass spectrometry) has been used for identifying trace levels of chlorinated contaminants in the environment such as polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs) and dibenzofurans (PCDFs), and other polychlorinated compounds. Pesticide derivatives, nitrogen containing herbicides, and phosphorus-containing insecticides have also been detected in EC-MS.
Bile acids can be detected in various body fluids by using GC-EC-MS. Oxidative damage can also be monitored in trace amounts by analyzing oxidized phenylalanine using GC-EC-MS.
Advantages
EC-MS is a sensitive ionization method. Forming negative ions through electron capture ionization is more sensitive than forming positive ions through chemical ionization.
It is a selective ionization technique that can prevent the formation of common matrices found in environmental contaminants during ionization. Electron capture ionization will have less interference from these matrices compared to electron ionization.
Electron capture mass spectra can distinguish between certain isomers that EI-MS cannot.
Limitations
Different energies in the ion source can cause variations in negative ion formation and make the mass spectra difficult to duplicate. Results shown in the mass spectrum can vary from instrument to instrument.
The temperature of the ion source needs to be monitored. An increase in fragment ions occurs at higher temperatures. Lower temperatures will lower the energy of electrons. Set temperatures can vary, but it is important for electron energy to approach thermal levels for resonance electron capture to occur.
Pressure of the added enhancement gas needs to be determined. Increasing the pressure will help stabilize the anions and extend the lifetimes of the negative ions. If the pressure is too high, not as many ions can exit the ion source.
Analysis should be done using low sample loads for GC-EC-MS. The amount of sample will affect the ion abundance and cause variations in data.
See also
Electron capture dissociation
References
Ion source | Electron capture ionization | [
"Physics"
] | 1,212 | [
"Ion source",
"Mass spectrometry",
"Spectrum (physical sciences)"
] |
21,337,035 | https://en.wikipedia.org/wiki/Balayage | In potential theory, a mathematical discipline, balayage (from French: balayage "scanning, sweeping") is a method devised by Henri Poincaré for reconstructing an harmonic function in a domain from its values on the boundary of the domain.
In modern terms, the balayage operator maps a measure μ on a closed domain D to a measure ν on the boundary ∂D, so that the Newtonian potentials of μ and ν coincide outside D. The procedure is called balayage since the mass is "swept out" from D onto the boundary.
For x in D, the balayage of δx yields the harmonic measure νx corresponding to x. Then the value of a harmonic function f at x is equal to the integral of its boundary values against this measure:
f(x) = ∫_{∂D} f(y) dνx(y).
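For the unit disk this reconstruction can be checked numerically, since there the harmonic measure νx has an explicit density, the Poisson kernel. The sketch below assumes that standard identity; the interior point, test function, and grid size are illustrative choices:

```python
import numpy as np

# On the unit disk D the harmonic measure ν_x is explicit: dν_x(θ) = P(x, θ) dθ
# with the Poisson kernel P(x, θ) = (1 - |x|²) / (2π |e^{iθ} - x|²).
# Balayage of δ_x onto ∂D then says f(x) = ∫ f(e^{iθ}) P(x, θ) dθ for harmonic f.

x = 0.3 + 0.2j                          # interior point (illustrative choice)
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
z = np.exp(1j * theta)                  # boundary points on ∂D
P = (1.0 - abs(x) ** 2) / (2.0 * np.pi * np.abs(z - x) ** 2)

f = lambda w: (w ** 2).real             # Re(z²), harmonic on the disk
reconstructed = np.sum(f(z) * P) * (2.0 * np.pi / theta.size)

print(reconstructed, f(x))              # both ≈ Re(x²) = 0.05
```

The kernel also integrates to 1 over the circle, reflecting that νx is a probability measure: the total mass of δx is preserved when it is swept onto the boundary.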
References
Potential theory | Balayage | [
"Mathematics"
] | 156 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Potential theory",
"Mathematical relations"
] |
21,337,146 | https://en.wikipedia.org/wiki/Electron-beam%20furnace | An electron-beam furnace (EB furnace) is a type of vacuum furnace employing high-energy electron beam in vacuum as the means for delivery of heat to the material being melted. It is one of the electron-beam technologies.
Use
Electron-beam furnaces are used for production and refining of high-purity metals (especially titanium, vanadium, tantalum, niobium, hafnium, etc.) and some exotic alloys. The EB furnaces use a hot cathode for production of electrons and high voltage for accelerating them towards the target to be melted.
Alternatives
An alternative for an electron-beam furnace can be an electric arc furnace in vacuum. Somewhat similar technologies are electron-beam melting and electron-beam welding.
References
Industrial furnaces
Electron beams in manufacturing | Electron-beam furnace | [
"Chemistry"
] | 162 | [
"Metallurgical processes",
"Industrial furnaces"
] |
6,724,915 | https://en.wikipedia.org/wiki/Hybrot | A hybrot (short for "hybrid robot") is a cybernetic organism in the form of a robot controlled by a computer consisting of both electronic and biological elements. The biological elements are typically rat neurons connected to a computer chip.
This feat was first accomplished in 2003 by Dr. Steve M. Potter, a professor of biomedical engineering at the Georgia Institute of Technology.
What separates a hybrot from a cyborg is that the latter term is commonly used to refer to a cybernetically enhanced human or animal; while a hybrot is an entirely new type of creature constructed from organic and artificial materials. It is perhaps helpful to think of the hybrot as "semi-living", a term also used by the hybrot's inventors.
Another interesting feature of the hybrot is its longevity. Neurons separated from a living brain usually die after only a couple of months. However, a specially designed incubator built around a gas-tight culture chamber selectively permeable to carbon dioxide, but impermeable to water vapor, reduces the risk of contamination and evaporation, and may extend the life of the hybrot to one to two years.
See also
Animat
Artificial intelligence
Biorobotics
Brain–computer interface
Neurorobotics
Semi-biotic systems
Xenobot
References
Sources
Shkolnik, A. C. Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle and Approach/Avoidance Task in Real Time, and a Framework for Studying Learning In Vitro. In: Potter, S. M. & Lu, J.: Dept. of Mathematics and Computer Science. Emory University, Atlanta (2003).
External links
Georgia Tech Researchers Use Lab Cultures to Control Robotic Device
Georgia Tech researchers use lab cultures to control robotic device
A hybrot, the Rat-Brained Robot
Multielectrode Array Art – A hybrot artist.
Rise of the rat-brained robots
FuturePundit: Hybrot Robot Operated By Rat Brain Neurons
How to Culture, Record and Stimulate Neuronal Networks on Micro-electrode Arrays (MEAs)
Biocybernetics
Cybernetics | Hybrot | [
"Physics",
"Technology"
] | 453 | [
"Physical systems",
"Machines",
"Robots"
] |
6,725,153 | https://en.wikipedia.org/wiki/KT88 | The KT88 is a beam tetrode/kinkless tetrode (hence "KT") vacuum tube for audio amplification.
Features
The KT88 fits a standard eight-pin octal socket and has similar pinout and applications as the 6L6 and EL34. Specifically designed for audio amplification, the KT88 has higher plate power and voltage ratings than the American 6550. It is one of the largest tubes in its class and can handle significantly higher plate voltages than similar tubes, up to 800 volts. A KT88 push-pull pair in class AB1 fixed bias is capable of 100 watts of output with 2.5% total harmonic distortion or up to about 50W at low distortion in hi-fi applications. The transmitting tubes TT21 and TT22 have almost identical transfer characteristics to KT88 but a different pinout, and by virtue of their anode being connected to the top cap have a higher plate voltage rating (1.25 kilovolt) and a higher power output capability of 200 watts in class AB1 push–pull.
The screen grid is sometimes tied to the anode so that it becomes effectively a triode with a lower maximum power output.
History
The KT88 was introduced by GEC in 1956 as a larger variant of the KT66. It was manufactured in the U.K. by the MOV (Marconi-Osram Valve) subsidiary of GEC, and was also labelled as IEC/Mullard and, in the U.S., Genalex Gold Lion.
As of 2022, KT88 valves are produced by New Sensor Corporation (Genalex Gold Lion and Electro-Harmonix brands) in Saratov, Russia, JJ Electronic in Čadca, Slovakia and Hengyang Electronics (Psvane brand) at former Guiguang factory in Foshan, China.
NOS examples in good condition are extremely rare. Due to its availability and characteristics, the KT88 is popular in hi-fi production amplifiers.
Historically, it has been far more popular with high fidelity stereo manufacturers than guitar amplifier builders, given its characteristics of high-power and low-distortion. Due to these characteristics, it is regularly used to replace 6550 tubes by end users seeking a guitar amplifier tone with less distortion. Some of the amplifiers which shipped with the KT88 power tube include the Hiwatt, Marshall Major, and some Ampeg models.
Characteristics
See also
KT66
KT90
6L6
6CA7 / EL34
6V6
807
References
External links
Reviews of KT88 tubes.
Tube Data Archive, thousands of tube data sheets
Vacuum tubes
Guitar amplification tubes
Audiovisual introductions in 1956 | KT88 | [
"Physics"
] | 571 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
6,725,298 | https://en.wikipedia.org/wiki/ORDATA | ORDATA is a United States government database of landmines and other unexploded ordnance, developed to assist humanitarian demining work. The original version of ORDATA released in 1997 was CD-ROM based, and incorporated material from the earlier Minefacts program. ORDATA 2.0 was distributed on a CD-ROM and on the Internet. The database is hosted on the Center for International Stabilization and Recovery website, a part of James Madison University. In 2014-15 the interface underwent a revision and the data partially updated. The new site is known as the Collaborative ORDnance Data Repository (CORD) and is available online. An offline version is in development.
References
External links
Collaborative ORDnance Data Repository
Center for International Stabilization and Recovery
James Madison University
Mine warfare
Mine action | ORDATA | [
"Engineering"
] | 160 | [
"Military engineering",
"Mine warfare"
] |
6,725,558 | https://en.wikipedia.org/wiki/Nature%20Medicine | Nature Medicine is a monthly peer-reviewed medical journal published by Nature Portfolio covering all aspects of medicine. It was established in 1995. The journal seeks to publish research papers that "demonstrate novel insight into disease processes, with direct evidence of the physiological relevance of the results". As with other Nature journals, there is no external editorial board, with editorial decisions being made by an in-house team, although peer review by external expert referees forms a part of the review process. The editor-in-chief is João Monteiro.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 58.7, ranking it 1st out of 296 journals in the category "Biochemistry & Molecular Biology".
Abstracting and indexing
Nature Medicine is abstracted and indexed in:
Science Citation Index Expanded
Web of Science
Scopus
UGC
References
External links
Academic journals established in 1995
Nature Research academic journals
General medical journals
Monthly journals
English-language journals | Nature Medicine | [
"Chemistry",
"Biology"
] | 189 | [
"Biochemistry",
"Biochemistry journals",
"Biochemistry literature",
"Molecular biology"
] |
6,727,704 | https://en.wikipedia.org/wiki/Metering%20pump | A metering pump moves a precise volume of liquid in a specified time period providing an accurate volumetric flow rate.
Delivery of fluids in precise adjustable flow rates is sometimes called metering. The term "metering pump" is based on the application or use rather than the exact kind of pump used, although a couple types of pumps are far more suitable than most other types of pumps.
Although metering pumps can pump water, they are often used to pump chemicals, solutions, or other liquids. Many metering pumps are rated to be able to pump against a high discharge pressure. They are typically made to meter at flow rates which are practically constant (when averaged over time) within a wide range of discharge (outlet) pressure. Manufacturers provide each of their models of metering pumps with a maximum discharge pressure rating against which each model is guaranteed to be able to pump. An engineer, designer, or user should ensure that the pressure and temperature ratings and wetted pump materials are compatible for the application and the type of liquid being pumped.
Most metering pumps have a pump head and a motor. The liquid being pumped goes through the pump head, entering through an inlet line and leaving through an outlet line. The motor is commonly an electric motor which drives the pump head.
Dispensing pump
Some metering pumps can be used for dispensing. A metering pump is designed to deliver a continuous rate of flow, whereas a dispensing pump is designed to deliver a precise total amount.
Piston pumps
Many metering pumps are piston-driven. Piston pumps are positive displacement pumps which can be designed to pump at practically constant flow rates (averaged over time) against a wide range of discharge pressure, including high discharge pressures of thousands of psi.
Piston-driven metering pumps commonly work as follows: There is a piston (sometimes called a plunger), typically cylindrical, which can go in and out of a correspondingly shaped chamber in the pump head. The inlet and outlet lines are joined to the piston chamber. There are two check valves, often ball check valves, attached to the pump head, one at the inlet line and the other at the outlet line. The inlet valve allows flow from the inlet line to the piston chamber, but not in the reverse direction. The outlet valve allows flow from the chamber to the outlet line, but not in reverse. The motor repeatedly moves the piston into and out of the piston chamber, causing the volume of the chamber to repeatedly become smaller and larger. When the piston moves out, a vacuum is created. Low pressure in the chamber causes liquid to enter and fill the chamber through the inlet check valve, but higher pressure at the outlet causes the outlet valve to shut. Then when the piston moves in, it pressurizes the liquid in the chamber. High pressure in the chamber causes the inlet valve to shut and forces the outlet valve to open, forcing liquid out at the outlet. These alternating suction and discharge strokes are repeated over and over to meter the liquid. In the back of the chamber, there is packing around the piston or a doughnut-shaped seal with a toroid-shaped sphincter-like spring inside compressing the seal around the piston. This holds the fluid pressure when the piston slides in and out and makes the pump leak-tight. The packing or seals can wear out after prolonged use and can be replaced. The metering rate can be adjusted by varying the stroke length by which the piston moves back and forth or by varying the speed of the piston motion.
A single-piston pump delivers liquid to the outlet only during the discharge stroke. If the piston's suction and discharge strokes occur at the same speed and liquid is metered out half the time the pump is working, then the overall metering rate averaged over time equals half the average flow rate during the discharge stroke. Some single-piston pumps may have a constant slow piston motion for discharge and a quick retract motion for refilling the pump head. In such cases, the overall metering rate is practically equal to the pumping rate during the discharge stroke.
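The relationship between the time-averaged metering rate and the flow during the discharge stroke can be sketched numerically (a minimal illustration; the function name and the figures are hypothetical, not taken from any pump datasheet):

```python
def instantaneous_discharge_rate(avg_rate_ml_min, discharge_fraction):
    """Flow rate while the pump is actually discharging.

    avg_rate_ml_min:    time-averaged metering rate (mL/min)
    discharge_fraction: fraction of each cycle spent on the discharge stroke

    The pump must deliver each cycle's whole volume during the discharge
    stroke, so the flow while discharging is the average rate divided by
    the fraction of the cycle spent discharging.
    """
    return avg_rate_ml_min / discharge_fraction

# Equal-speed suction and discharge strokes: liquid flows only half the
# time, so the flow during discharge is twice the average metering rate.
flow = instantaneous_discharge_rate(30.0, 0.5)

# A slow discharge with a quick retract stroke approaches constant output:
near_constant = instantaneous_discharge_rate(30.0, 0.95)
```

With a 50% discharge fraction the pump delivers 60 mL/min while discharging to average 30 mL/min overall, matching the halving described above.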
Pumps used in high-pressure chromatography
Pumps used in high-pressure chromatography such as HPLC and ion chromatography are much like small piston metering pumps. For wear resistance and chemical resistance to solvents, etc., typically the pistons are made of artificial sapphire and the ball check valves have ruby balls and sapphire seats. To produce good chromatograms, it is desirable to have a pumping flow rate as constant as possible. Either a single piston pump with a quick refill is used or a double pump head with coordinated piston strokes is used to provide as constant a pumping rate as possible.
Diaphragm and peristaltic pumps
In order to avoid leakage at the packing or seal particularly when a liquid is dangerous, toxic, or noxious, diaphragm pumps are used for metering. Diaphragm pumps have a diaphragm through which repeated compression/decompression motion is transmitted. The liquid does not penetrate through the diaphragm, so the liquid inside the pump is sealed off from the outside. Such motion changes the volume of a chamber in the pump head so that liquid enters through an inlet check valve during decompression and exits through an outlet check valve during compression, in a manner similar to piston pumps. Diaphragm pumps can also be made which discharge at fairly high pressure. Diaphragm metering pumps are commonly hydraulically driven.
Peristaltic pumps use motor-driven rollers to roll along flexible tubing, compressing it to push forward a liquid inside. Although peristaltic pumps can be used to meter at lower pressures, the flexible tubing is limited in the level of pressure it can withstand.
Possible problems
The maximum pressure rating of a metering pump is actually the top of the discharge pressure range the pump is guaranteed to pump against at a reasonably controllable flow rate. The pump itself is a pressurizing device often capable of exceeding its pressure rating, although not guaranteed to. For this reason, if there is any stop valve downstream of the pump, a pressure relief valve should be placed in between to prevent overpressuring of the tubing or piping line in case the stop valve is inadvertently shut while the pump is running. The relief valve setting should be below the maximum pressure rating that the piping, tubing, or any other components there could withstand.
Liquids are only very slightly compressible. This property of liquids lets metering pumps discharge liquids at high pressure. Since a liquid can be only slightly compressed during a discharge stroke, it is forced out of the pump head. Gases are much more compressible. Metering pumps are not good at pumping gases. Sometimes, a metering or similar pump has to be primed before operation, i.e., the pump head must be filled with the liquid to be pumped. When gas bubbles enter a pump head, the compression motion compresses the gas but has a hard time forcing it out of the pump head. The pump may stop pumping liquid with gas bubbles in the pump head even though mechanically the pump is going through the motions, repeatedly compressing and decompressing the bubbles. To prevent this type of "vapor lock", chromatography solvents are often degassed before pumping.
If the pressure at the outlet is lower than the pressure at the inlet and remains that way in spite of the pumping, then this pressure difference opens both check valves simultaneously and the liquid flows through the pump head uncontrollably from inlet to outlet. This can happen whether the pump is working or not. This situation can be avoided by placing a correctly rated positive pressure differential check valve downstream of the pump. Such a valve will only open if a minimum rated pressure differential across the valve is exceeded, something which most high-pressure metering pumps can easily exceed.
References
External links
The Ins and Outs of Metering Pumps – Metering pumps offer a high degree of accuracy and are suited for water treatment, chemical processing and laboratory dispensing
Fluid dynamics | Metering pump | [
"Physics",
"Chemistry",
"Engineering"
] | 1,652 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Piping",
"Fluid dynamics"
] |
6,729,866 | https://en.wikipedia.org/wiki/Phase%20offset%20modulation | Phase offset modulation works by overlaying two instances of a periodic waveform on top of each other. (In software synthesis, the waveform is usually generated by using a lookup table.) The two instances of the waveform are kept slightly out of sync with each other, as one is further ahead or further behind in its cycle. The values of both of the waveforms are either multiplied together, or the value of one is subtracted from the other.
This generates an entirely new waveform with a drastically different shape. For example, one sawtooth (ramp) wave subtracted from another will create a pulse wave, with the amount of offset (i.e. the difference between the two waveforms' starting points) dictating the duty cycle. Slowly changing the offset amount creates pulse-width modulation.
Using this technique, not only can a ramp wave create pulse-width modulation, but any other waveform can achieve a comparable effect.
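The sawtooth-subtraction case can be sketched in a few lines of Python (a minimal, non-bandlimited illustration; function names and the sample count are assumptions for the example):

```python
def saw(phase):
    # Naive sawtooth: ramps from 0 to 1 once per cycle.
    return phase % 1.0

def phase_offset_pulse(phase, offset):
    # Subtracting a phase-shifted copy of the ramp leaves only two output
    # levels, exactly 1.0 apart -> a pulse wave. The offset sets the duty
    # cycle (fraction of the cycle spent on the upper level).
    return saw(phase) - saw(phase + offset)

# One cycle sampled 100 times with a 0.25 offset:
samples = [phase_offset_pulse(n / 100.0, 0.25) for n in range(100)]
levels = sorted(set(round(s, 6) for s in samples))
duty = sum(1 for s in samples if s > 0) / len(samples)
```

Sweeping `offset` slowly over time turns this into pulse-width modulation, since the duty cycle tracks the offset directly.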
Wave mechanics | Phase offset modulation | [
"Physics"
] | 202 | [
"Waves",
"Wave mechanics",
"Physical phenomena",
"Classical mechanics"
] |
6,729,896 | https://en.wikipedia.org/wiki/Surface%20reconstruction | Surface reconstruction refers to the process by which atoms at the surface of a crystal assume a different structure than that of the bulk. Surface reconstructions are important in that they help in the understanding of surface chemistry for various materials, especially in the case where another material is adsorbed onto the surface.
Basic principles
In an ideal infinite crystal, the equilibrium position of each individual atom is determined by the forces exerted by all the other atoms in the crystal, resulting in a periodic structure. If a surface is introduced to the surroundings by terminating the crystal along a given plane, then these forces are altered, changing the equilibrium positions of the remaining atoms. This is most noticeable for the atoms at or near the surface plane, as they now only experience inter-atomic forces from one direction. This imbalance results in the atoms near the surface assuming positions with different spacing and/or symmetry from the bulk atoms, creating a different surface structure. This change in equilibrium positions near the surface can be categorized as either a relaxation or a reconstruction.
Relaxation refers to a change in the position of surface atoms relative to the bulk positions, while the bulk unit cell is preserved at the surface. Often this is a purely normal relaxation: that is, the surface atoms move in a direction normal to the surface plane, usually resulting in a smaller-than-usual inter-layer spacing. This makes intuitive sense, as a surface layer that experiences no forces from the open region can be expected to contract towards the bulk. Most metals experience this type of relaxation. Some surfaces also experience relaxations in the lateral direction as well as the normal, so that the upper layers become shifted relative to layers further in, in order to minimize the positional energy.
Reconstruction refers to a change in the two-dimensional structure of the surface layers, in addition to changes in the position of the entire layer. For example, in a cubic material the surface layer might re-structure itself to assume a smaller two-dimensional spacing between the atoms, as lateral forces from adjacent layers are reduced. The general symmetry of a layer might also change, as in the case of the Pt (100) surface, which reconstructs from a cubic to a hexagonal structure. A reconstruction can affect one or more layers at the surface and can either conserve the total number of atoms in a layer (a conservative reconstruction) or have a greater or lesser number than in the bulk (a non-conservative reconstruction).
Reconstruction due to adsorption
The relaxations and reconstructions considered above would describe the ideal case of atomically clean surfaces in vacuum, in which the interaction with another medium is not considered. However, reconstructions can also be induced or affected by the adsorption of other atoms onto the surface, as the interatomic forces are changed. These reconstructions can assume a variety of forms when the detailed interactions between different types of atoms are taken into account, but some general principles can be identified.
The reconstruction of a surface with adsorption will depend on the following factors:
The composition of the substrate and of the adsorbate.
The coverage of the substrate surface layers and of the adsorbate, measured in monolayers.
The ambient conditions (i.e. temperature, gas pressure, etc.).
Composition plays an important role in that it determines the form that the adsorption process takes, whether by relatively weak physisorption through van der Waals interactions or stronger chemisorption through the formation of chemical bonds between the substrate and adsorbate atoms. Surfaces that undergo chemisorption generally result in more extensive reconstructions than those that undergo physisorption, as the breaking and formation of bonds between the surface atoms alter the interaction of the substrate atoms as well as the adsorbate.
Different reconstructions can also occur depending on the substrate and adsorbate coverages and the ambient conditions, as the equilibrium positions of the atoms are changed depending on the forces exerted. One example of this occurs in the case of In adsorbed on the Si(111) surface, in which the two differently reconstructed phases of Si(111)-In and Si(111)-In (in Wood's notation, see below) can actually coexist under certain conditions. These phases are distinguished by the In coverage in the different regions and occur for certain ranges of the average In coverage.
Notation of reconstructions
In general, the change in a surface layer's structure due to a reconstruction can be completely specified by a matrix notation proposed by Park and Madden. If a and b are the basic translation vectors of the two-dimensional structure in the bulk, and a′ and b′ are the basic translation vectors of the superstructure or reconstructed plane, then the relationship between the two sets of vectors can be described by the following equations:

a′ = G11 a + G12 b
b′ = G21 a + G22 b

so that the two-dimensional reconstruction can be described by the matrix

G = | G11  G12 |
    | G21  G22 |
Note that this system does not describe any relaxation of the surface layers relative to the bulk inter-layer spacing, but only describes the change in the individual layer's structure.
Surface reconstructions are more commonly given in Wood's notation, which reduces the matrix above into a more compact notation
X(hkl) m × n - Rφ,
which describes the reconstruction of the (hkl) plane (given by its Miller indices). In this notation, the surface unit cell is given as multiples of the nonreconstructed surface unit cell with the unit cell vectors a and b. For example, a calcite(104) (2×1) reconstruction means that the unit cell is twice as long in direction a and has the same length in direction b. If the unit cell is rotated with respect to the unit cell of the nonreconstructed surface, the angle φ is given in addition (usually in degrees). This notation is often used to describe reconstructions concisely, but does not directly indicate changes in the layer symmetry (for example, square to hexagonal).
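The Park–Madden matrix notation can be illustrated with a short sketch (the helper names are hypothetical; the (2×1) example mirrors the Si(100)-2×1 and calcite(104)-(2×1) cases discussed in this article):

```python
def superstructure_cell(M, a, b):
    # a' = M[0][0]*a + M[0][1]*b ;  b' = M[1][0]*a + M[1][1]*b
    a_s = (M[0][0] * a[0] + M[0][1] * b[0],
           M[0][0] * a[1] + M[0][1] * b[1])
    b_s = (M[1][0] * a[0] + M[1][1] * b[0],
           M[1][0] * a[1] + M[1][1] * b[1])
    return a_s, b_s

def area_ratio(M):
    # det(M) = superstructure cell area / bulk cell area, i.e. how many
    # bulk surface unit cells one reconstructed cell covers.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# A (2x1) reconstruction in matrix form, applied to unit bulk vectors:
M_2x1 = [[2, 0], [0, 1]]
a_s, b_s = superstructure_cell(M_2x1, (1.0, 0.0), (0.0, 1.0))
```

Here a′ is doubled while b′ is unchanged, and det(M) = 2 confirms that the reconstructed cell spans two bulk cells; a rotated reconstruction would simply have off-diagonal matrix entries.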
Measurement of reconstructions
Determination of a material's surface reconstruction requires a measurement of the positions of the surface atoms that can be compared to a measurement of the bulk structure. While the bulk structure of crystalline materials can usually be determined by using a diffraction experiment to determine the Bragg peaks, any signal from a reconstructed surface is obscured due to the relatively tiny number of atoms involved.
Special techniques are thus required to measure the positions of the surface atoms, and these generally fall into two categories: diffraction-based methods adapted for surface science, such as low-energy electron diffraction (LEED) or Rutherford backscattering spectroscopy, and atomic-scale probe techniques such as scanning tunneling microscopy (STM) or atomic force microscopy. Of these, STM has been most commonly used in recent history due to its very high resolution and ability to resolve aperiodic features.
Examples of reconstructions
To allow a better understanding of the variety of reconstructions in different systems, consider the following examples of reconstructions in metallic, semiconducting and insulating materials.
Silicon
A very well known example of surface reconstruction occurs in silicon, a semiconductor commonly used in a variety of computing and microelectronics applications. With a diamond-like face-centered cubic (fcc) lattice, it exhibits several different well-ordered reconstructions depending on temperature and on which crystal face is exposed.
When Si is cleaved along the (100) surface, the ideal diamond-like structure is interrupted and results in a 1×1 square array of surface Si atoms. Each of these has two dangling bonds remaining from the diamond structure, creating a surface that can obviously be reconstructed into a lower-energy structure. The observed reconstruction is a 2×1 periodicity, explained by the formation of dimers, which consist of paired surface atoms, decreasing the number of dangling bonds by a factor of two. These dimers reconstruct in rows with a high long-range order, resulting in a surface of filled and empty rows. LEED studies and calculations also indicate that relaxations as deep as five layers into the bulk are also likely to occur.
The Si (111) structure, by comparison, exhibits a much more complex reconstruction. Cleavage along the (111) surface at low temperatures results in another 2×1 reconstruction, differing from the (100) surface by forming long π-bonded chains in the first and second surface layers. However, when heated above 400 °C, this structure converts irreversibly to the more complicated 7×7 reconstruction. In addition, a disordered 1×1 structure is regained at temperatures above 850 °C, which can be converted back to the 7×7 reconstruction by slow cooling.
The 7×7 reconstruction is modeled according to a dimer-adatom-stacking fault (DAS) model constructed by many research groups over a period of 25 years. Extending through the five top layers of the surface, the unit cell of the reconstruction contains 12 adatoms and 2 triangular subunits, 9 dimers, and a deep corner hole that extends to the fourth and fifth layers. This structure was gradually inferred from LEED and RHEED measurements and calculation, and was finally resolved in real space by Gerd Binnig, Heinrich Rohrer, Ch. Gerber and E. Weibel as a demonstration of the STM, which was developed by Binnig and Rohrer at IBM's Zurich Research Laboratory. The full structure with positions of all reconstructed atoms has also been confirmed by massively parallel computation.
A number of similar DAS reconstructions have also been observed on Si (111) in non-equilibrium conditions in a (2n + 1)×(2n + 1) pattern and include 3×3, 5×5 and 9×9 reconstructions. The preference for the 7×7 reconstruction is attributed to an optimal balance of charge transfer and stress, but the other DAS-type reconstructions can be obtained under conditions such as rapid quenching from the disordered 1×1 structure.
Gold
The structure of the Au (100) surface is an interesting example of how a cubic structure can be reconstructed into a different symmetry, as well as of the temperature dependence of a reconstruction. In the bulk, gold is a face-centered cubic (fcc) metal, but its (100) surface reconstructs into a distorted hexagonal phase. This hexagonal phase is often referred to as a (28×5) structure, distorted and rotated by about 0.81° relative to the [011] crystal direction. Molecular-dynamics simulations indicate that this rotation occurs to partly relieve a compressive strain developed in the formation of this hexagonal reconstruction, which is nevertheless favored thermodynamically over the unreconstructed structure. However, this rotation disappears in a phase transition at approximately T = 970 K, above which an un-rotated hexagonal structure is observed.
A second phase transition is observed at T = 1170 K, in which an order–disorder transition occurs, as entropic effects dominate at high temperature. The high-temperature disordered phase is explained as a quasi-melted phase in which only the surface becomes disordered between 1170 K and the bulk melting temperature of 1337 K. This phase is not completely disordered, however, as this melting process allows the effects of the substrate interactions to become important again in determining the surface structure. This results in a recovery of the square (1×1) structure within the disordered phase and makes sense as at high temperatures the energy reduction allowed by the hexagonal reconstruction can be presumed to be less significant.
Footnotes
Bibliography
Oura, K.; Lifshits, V. G.; Saranin, A. A.; Zotov, A. V.; and Katayama, M. (2003) Surface Science: An Introduction. Berlin: Springer-Verlag.
Condensed matter physics | Surface reconstruction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,401 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
6,730,049 | https://en.wikipedia.org/wiki/Walloon%20forge | A Walloon forge (or Walloon process) is a type of finery forge that decarbonizes pig iron into wrought iron.
The process was conceived in the Liège region, and from there extended to France, then England around the end of the 15th century. Louis de Geer brought it to Roslagen in Sweden at the beginning of the 17th century, with Walloon blacksmiths.
The Walloon process spread to Sweden in the Uppland province north of Stockholm, where it was used to produce a specific kind of wrought iron called oregrounds iron.
In Sweden
The source material was pig iron produced by a blast furnace using charcoal and the manganese rich iron ore from the Dannemora mine. A V-shaped hearth using charcoal was used to heat up the pig iron bar that was presented to a tuyere that decarbonized it and made it melt and fall in drops that solidified in a pool of slag where the decarburization continued. The iron drops were picked up with an iron bar and presented again in front of the tuyere and one by one agglomerated into a ball. That heterogeneous iron was full of slag and the carbon content ranged from pure iron to nearly pig iron. It was therefore reheated in a chafery and hammered and folded using a waterwheel powered trip hammer.
The ore from Dannemora was very low in sulphur and high in manganese. It is possible the manganese bonded with the impurities during the oxidation, creating a pretty pure wrought iron. The use of charcoal prevented the contamination with impurities usually associated with the usage of coal or coke, of which Sweden has very little (although coal was mined in Höganäs, Scania County from 1797). In England, the chafery might use coal or coke, as in this stage the iron is solidified and the contamination remain low.
The iron was sold to England, where it was recarbonized into blister steel using the cementation process. This steel still contained some slag, and if the carbon was around 1% at the surface, it was lower in the center. The blister steel was then purchased by Benjamin Huntsman, who melted it in crucibles heated in coke-fired ovens and poured it. This modern crucible steel was different from the medieval wootz from India, but was homogeneous and without slag.
References
Steelmaking
Metallurgical processes | Walloon forge | [
"Chemistry",
"Materials_science"
] | 504 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
6,732,326 | https://en.wikipedia.org/wiki/Cn3D | Cn3D is a Windows, Macintosh and Unix-based software from the United States National Library of Medicine that acts as a helper application for web browsers to view three-dimensional structures from The National Center for Biotechnology Information's Entrez retrieval service. It "simultaneously displays structure, sequence, and alignment, and now has powerful annotation and alignment editing features", according to its official site. Cn3D is in public domain with source code available.
The latest version of the software, 4.3.1, was released on 6 December 2013. This version has the ability to view superpositions of 3D structures with similar biological units and an enhanced version of the Vector Alignment Search Tool (VAST).
See also
List of molecular graphics systems
Molecular graphics
List of software for molecular mechanics modeling
References
External links
Cn3D Home Page
source code tarball of NCBI C++ toolkit which includes Cn3D
Bioinformatics software
Molecular modelling software
Free software programmed in C++
Free science software
Windows multimedia software
MacOS multimedia software
Science software for Linux
Unix Internet software
Software that uses wxWidgets | Cn3D | [
"Chemistry",
"Biology"
] | 221 | [
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Bioinformatics software",
"Bioinformatics",
"Molecular modelling",
"Molecular physics stubs"
] |
6,732,328 | https://en.wikipedia.org/wiki/Target-site%20overlap | In a zinc finger protein, certain sequences of amino acid residues are able to recognise and bind to an extended target-site of four or even five nucleotides When this occurs in a ZFP in which the three-nucleotide subsites are contiguous, one zinc finger interferes with the target-site of the zinc finger adjacent to it, a situation known as target-site overlap. For example, a zinc finger containing arginine at position -1 and aspartic acid at position 2 along its alpha-helix will recognise an extended sequence of four nucleotides of the sequence 5'-NNG(G/T)-3'. The hydrogen bond between Asp2 and the N4 of either a cytosine or adenine base paired to the guanine or thymine, respectively defines these two nucleotides at the 3' position, defining a sequence that overlaps into the subsite of any zinc finger that may be attached N-terminally.
Target-site overlap limits the modularity of those zinc fingers which exhibit it, by restricting the number of situations to which they can be applied. If some of the zinc fingers are restricted in this way, then a larger repertoire is required to address the situations in which those zinc fingers cannot be used. Target-site overlap may also affect the selection of zinc fingers during phage display, in cases where amino acids on a non-randomised finger, and the bases of its associated subsite, influence the binding of residues on the adjacent finger which contains the randomised residues. Indeed, attempts to derive zinc finger proteins targeting the 5'-(A/T)NN-3' family of sequences by site-directed mutagenesis of finger two of the C7 protein were unsuccessful due to the Asp2 of the third finger of said protein.
The extent to which target-site overlap occurs is largely unknown, with a variety of amino acids having shown involvement in such interactions. When interpreting the zinc finger repertoires presented by investigations using ZFP phage display, it is important to appreciate the effects that the rest of the zinc finger framework may have had in these selections. Since the problem only appears to occur in a limited number of cases, the issue is nullified in most situations in which there are a variety of suitable targets to choose from and only becomes a real issue if binding to a specific DNA sequence is required (e.g. blocking binding by endogenous DNA-binding proteins).
References
See also
Zinc finger chimera
Zinc finger protein
Molecular biology | Target-site overlap | [
"Chemistry",
"Biology"
] | 524 | [
"Biochemistry",
"Molecular biology"
] |
6,732,448 | https://en.wikipedia.org/wiki/Endoglin | Endoglin (ENG) is a type I membrane glycoprotein located on cell surfaces and is part of the TGF beta receptor complex. It is also commonly referred to as CD105, END, FLJ41744, HHT1, ORW and ORW1. It has a crucial role in angiogenesis, therefore, making it an important protein for tumor growth, survival and metastasis of cancer cells to other locations in the body.
Gene and expression
The human endoglin gene is located on human chromosome 9, at cytogenetic band 9q34.11. The gene spans 39,757 bp and encodes a protein of 658 amino acids.
The expression of the endoglin gene is usually low in resting endothelial cells. This, however, changes once neoangiogenesis begins and endothelial cells become active, as in tumor vessels, inflamed tissues, skin with psoriasis, sites of vascular injury and during embryogenesis. Expression in the vascular system begins at about 4 weeks and continues thereafter. Other cells in which endoglin is expressed include monocytes, especially those transitioning into macrophages, normal smooth muscle cells (low expression), vascular smooth muscle cells (high expression), and kidney and liver tissues undergoing fibrosis.
Structure
The glycoprotein consists of a homodimer of 180 kDa stabilized by intermolecular disulfide bonds. It has a large extracellular domain of about 561 amino acids, a hydrophobic transmembrane domain and a short cytoplasmic tail domain composed of 45 amino acids. The 260 amino acid region closest to the extracellular membrane is referred to as the ZP domain (or, more correctly, ZP module). The outermost extracellular region is termed the orphan domain (or, more correctly, orphan region (OR)) and it is the part that binds ligands such as BMP-9.
There are two isoforms of endoglin created by alternative splicing: the long isoform (L-endoglin) and the short isoform (S-endoglin). However, the L-isoform is expressed to a greater extent than the S-isoform. A soluble form of endoglin can be produced by the proteolytic cleaving action of metalloproteinase MMP-14 in the extracellular domain near the membrane.
It has been found on endothelial cells in all tissues, activated macrophages, activated monocytes, lymphoblasts, fibroblasts, and smooth muscle cells. Endoglin was first identified using monoclonal antibody (mAb) 44G4, but more mAbs against endoglin have since been discovered, giving more ways to identify it in tissues.
It is suggested that endoglin has 5 potential N-linked glycosylation sites in the N-terminal domain (of which N102 was experimentally observed in the crystal structure of the orphan region) and an O-glycan domain near the membrane domain that is rich in serine and threonine. The cytoplasmic tail contains a PDZ-binding motif that allows it to bind to PDZ-containing proteins and interact with them. It contains an Arginine-Glycine-Aspartic Acid (RGD) tripeptide sequence that enables cellular adhesion through the binding of integrins or other RGD-binding receptors that are present in the extracellular matrix (ECM). This RGD sequence on endoglin is the first RGD sequence identified on endothelial tissue.
X-ray crystallographic structures of human endoglin and its complex with ligand BMP-9 revealed that the orphan region of the protein (residues E26-S337) consists of two domains (OR1 and OR2, corresponding to residues E36-T46 + T200-C330 and residues S47-R199, respectively) with a new fold resulting from gene duplication and circular permutation. The ZP module (residues P338-G581), whose ZP-N and ZP-C moieties (residues T349-L443 and N444-S576, respectively) are closely packed against each other, mediates the homodimerization of endoglin by forming an intermolecular disulfide bond that involves cysteine 516. Together with a second intermolecular disulfide, involving cysteine 582, this generates a molecular clamp that secures the ligand via interaction of two copies of OR1 with the knuckle regions of homodimeric BMP-9. In addition to rationalizing a large number of HHT1 mutations, the crystal structure of endoglin shows that the epitope of anti-ENG monoclonal antibody TRC105 overlaps with the binding site for BMP-9.
Interactions
Endoglin has been shown to bind with high affinity to TGF beta receptor 3 and TGF beta receptor 1, and with lower affinity to TGF beta receptor 2. It has high sequence similarity to another TGF beta binding protein, betaglycan, which was one of the first cues indicating that endoglin is a TGF beta binding protein. However, it has been shown that TGF beta binds with high affinity to only a small amount of the available endoglin, which suggests that another factor regulates this binding.
Endoglin itself doesn't bind the TGF beta ligands, but is present with the TGF beta receptors when the ligand is bound, indicating an important role for endoglin. The full-length endoglin will bind to the TGF beta receptor complex whether TGF beta is bound or not, but the truncated forms of endoglin have more specific binding. The amino acid (aa) region 437–558 in the extracellular domain of endoglin binds to TGF beta receptor II. TGF beta receptor I binds to the 437–588 aa region and to the aa region between residue 437 and the N-terminus. Unlike TGF beta receptor I, which can only bind the cytoplasmic tail when its kinase domain is inactive, TGF beta receptor II can bind endoglin whether its kinase domain is inactive or active. The kinase is active when it is phosphorylated. Furthermore, TGF beta receptor I will dissociate from endoglin soon after it phosphorylates its cytoplasmic tail, leaving TGF beta receptor I inactive. Endoglin is constitutively phosphorylated at the serine and threonine residues in the cytoplasmic domain. The extensive interaction of endoglin's cytoplasmic and extracellular domains with the TGF beta receptor complexes indicates an important role for endoglin in the modulation of TGF beta responses, such as cellular localization and cellular migration.
Endoglin can also mediate F-actin dynamics, focal adhesions, microtubular structures, and endocytic vesicular transport through its interactions with zyxin, ZRP-1, beta-arrestin, Tctex2beta, ALK1, ALK5, TGF beta receptor II, and GIPC. In one study with mouse fibroblasts, overexpression of endoglin resulted in a reduction of some extracellular matrix (ECM) components, decreased cellular migration, a change in cellular morphology, and intercellular cluster formation.
Function
Endoglin has been found to be an auxiliary receptor for the TGF-beta receptor complex. It is thus involved in modulating the response to the binding of TGF-beta1, TGF-beta3, activin-A, BMP-2, BMP-7 and BMP-9. Besides TGF-beta signaling, endoglin may have other functions: it has been postulated that endoglin is involved in cytoskeletal organization, affecting cell morphology and migration.
Endoglin has a role in the development of the cardiovascular system and in vascular remodeling. Its expression is regulated during heart development. Experimental mice without the endoglin gene die from cardiovascular abnormalities.
Clinical significance
In humans endoglin may be involved in the autosomal dominant disorder known as hereditary hemorrhagic telangiectasia (HHT) type 1. HHT is actually the first human disease linked to the TGF beta receptor complex. This condition leads to frequent nose bleeds, telangiectases on skin and mucosa and may cause arteriovenous malformations in different organs including brain, lung, and liver.
Mutations causing HHT
Some mutations that lead to this disorder are:
a cytosine (C) to guanine (G) substitution, which converts a tyrosine codon to a stop codon
a 39 base pair deletion
a 2 base pair deletion which creates an early stop codon
Endoglin levels have been found to be elevated in pregnant women who subsequently develop preeclampsia.
Role in cancer
The role endoglin plays in angiogenesis and in the modulation of TGF beta receptor signaling, which mediates cellular localization, migration, morphology, proliferation and cluster formation, makes endoglin an important player in tumor growth and metastasis. Being able to target and efficiently reduce or halt neoangiogenesis in tumors would prevent metastasis of primary cancer cells to other areas of the body. It has also been suggested that endoglin can be used for tumor imaging and prognosis.
The role of endoglin in cancer can appear contradictory: it is needed for tumor neoangiogenesis, which supports tumor growth and survival, yet reduced endoglin expression correlates with a worse outcome in many cancers. In breast cancer, for example, reduction of the full-length form of endoglin and an increase in the soluble form correlate with metastasis of cancer cells. The TGF beta receptor-endoglin complex relays contradictory signals from TGF beta as well. TGF beta can act as a tumor suppressor in the premalignant stage of a benign neoplasm by inhibiting its growth and inducing apoptosis. However, once cancer cells have acquired the hallmarks of cancer and lost their inhibitory growth responses, TGF beta mediates cell invasion, angiogenesis (with the help of endoglin), immune system evasion, and changes in ECM composition, allowing the cells to become malignant.
Prostate cancer and endoglin expression
It has been shown that endoglin expression and TGF-beta secretion are attenuated in bone marrow stromal cells when they are cocultured with prostate cancer cells. The downstream TGF-beta/bone morphogenetic protein (BMP) signaling pathway, which includes Smad1 and Smad2/3, was likewise attenuated, along with Smad-dependent gene transcription. The study also found reduced Smad1/5/8-dependent inhibitor of DNA binding 1 expression, reduced Smad2/3-dependent plasminogen activator inhibitor I expression, and reduced cell proliferation. Ultimately, the cocultured prostate cancer cells altered TGF-beta signaling in the bone stromal cells, suggesting that this modulation is a mechanism by which prostate cancer metastases facilitate their growth and survival in the reactive bone stroma. This study emphasizes the importance of endoglin in TGF-beta signaling pathways in cell types other than endothelial cells.
As a drug target
TRC105 is an experimental antibody targeted at endoglin as an anti-angiogenesis treatment for soft-tissue sarcoma.
See also
Cluster of differentiation
References
External links
GeneReviews/NCBI/NIH/UW entry on Hereditary Hemorrhagic Telangiectasia
Clusters of differentiation
Growth factors
Signal transduction | Endoglin | ["Chemistry", "Biology"] | 2,533 | ["Neurochemistry", "Growth factors", "Biochemistry", "Signal transduction"] |
6,732,652 | https://en.wikipedia.org/wiki/Gibbs%E2%80%93Donnan%20effect | The Gibbs–Donnan effect (also known as the Donnan's effect, Donnan law, Donnan equilibrium, or Gibbs–Donnan equilibrium) is a name for the behaviour of charged particles near a semi-permeable membrane that sometimes fail to distribute evenly across the two sides of the membrane. The usual cause is the presence of a different charged substance that is unable to pass through the membrane and thus creates an uneven electrical charge. For example, the large anionic proteins in blood plasma cannot pass through capillary walls. Because small cations are attracted to, but not bound by, the proteins, small anions will cross capillary walls away from the anionic proteins more readily than small cations.
Thus, some ionic species can pass through the barrier while others cannot. The solutions may be gels or colloids as well as solutions of electrolytes, and as such the phase boundary between gels, or a gel and a liquid, can also act as a selective barrier. The electric potential arising between two such solutions is called the Donnan potential.
The effect is named after the American physicist Josiah Willard Gibbs, who proposed it in 1878, and the British chemist Frederick G. Donnan, who studied it experimentally in 1911.
The Donnan equilibrium is prominent in the triphasic model for articular cartilage proposed by Mow and Lai, as well as in electrochemical fuel cells and dialysis.
The Donnan effect is the osmotic pressure attributable to cations (Na+ and K+) attached to dissolved plasma proteins.
Example
The presence of a charged impermeant ion (for example, a protein) on one side of a membrane will result in an asymmetric distribution of permeant charged ions. The Gibbs–Donnan equation at equilibrium states (assuming the permeant ions are Na+ and Cl−):

[Na+]1 × [Cl−]1 = [Na+]2 × [Cl−]2

Equivalently,

[Na+]1 / [Na+]2 = [Cl−]2 / [Cl−]1
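For a simple two-compartment setup, the equilibrium in this example can be computed in closed form. The sketch below is illustrative, not from the source: the function name and the equal-volume, monovalent-ion setup are assumptions made for the example. Side 1 holds an impermeant anion P− (balanced initially by Na+), side 2 holds NaCl; electroneutrality plus the Donnan condition fix how much salt migrates.

```python
def donnan_equilibrium(p, c):
    """Ideal Gibbs-Donnan equilibrium for two equal-volume compartments.

    Side 1 initially holds an impermeant monovalent anion P- at
    concentration p (balanced by Na+); side 2 holds NaCl at
    concentration c.  Letting x of NaCl migrate to side 1,
    electroneutrality plus the Donnan condition
        [Na+]1 [Cl-]1 = [Na+]2 [Cl-]2
    give (p + x) * x = (c - x)**2, hence x = c**2 / (p + 2*c).
    """
    x = c**2 / (p + 2*c)
    side1 = {"Na+": p + x, "Cl-": x, "P-": p}
    side2 = {"Na+": c - x, "Cl-": c - x}
    return side1, side2

s1, s2 = donnan_equilibrium(p=1.0, c=1.0)
# The Donnan product is equal on both sides...
assert abs(s1["Na+"] * s1["Cl-"] - s2["Na+"] * s2["Cl-"]) < 1e-12
# ...but the total osmolytes are not: the protein side holds more particles.
print(sum(s1.values()), sum(s2.values()))
```

The unequal particle totals printed at the end are the osmotic imbalance discussed in the "Double Donnan" section.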
Double Donnan
Note that Sides 1 and 2 are no longer in osmotic equilibrium (i.e. the total osmolytes on each side are not the same)
In vivo, ion balance does not equilibrate at the proportions that would be predicted by the Gibbs–Donnan model, because the cell cannot tolerate the attendant large influx of water. This is balanced by instating a functionally impermeant cation, Na+, extracellularly to counter the anionic protein. Na+ does cross the membrane via leak channels (its permeability is approximately 1/10 that of K+, the most permeant ion) but, as per the pump-leak model, it is extruded by the Na+/K+-ATPase.
pH change
Because there is a difference in concentration of ions on either side of the membrane, the pH (defined using the relative activity) may also differ when protons are involved. In many instances, from ultrafiltration of proteins to ion exchange chromatography, the pH of the buffer adjacent to the charged groups of the membrane is different from the pH of the rest of the buffer solution. When the charged groups are negative (basic), then they will attract protons so that the pH will be lower than the surrounding buffer. When the charged groups are positive (acidic), then they will repel protons so that the pH will be higher than the surrounding buffer.
Physiological applications
Red blood cells
When tissue cells are in a protein-containing fluid, the Donnan effect of the cytoplasmic proteins is equal and opposite to the Donnan effect of the extracellular proteins. The opposing Donnan effects cause chloride ions to migrate inside the cell, increasing the intracellular chloride concentration. The Donnan effect may explain why some red blood cells do not have active sodium pumps; the effect relieves the osmotic pressure of plasma proteins, which is why sodium pumping is less important for maintaining cell volume.
Neurology
Brain tissue swelling, known as cerebral oedema, results from brain injury and other traumatic head injuries that can increase intracranial pressure (ICP). Negatively charged molecules within cells create a fixed charge density, which increases intracranial pressure through the Donnan effect. ATP pumps maintain a negative membrane potential even though negative charges leak across the membrane; this action establishes a chemical and electrical gradient.
The negative charge in the cell and the ions outside the cell create a thermodynamic potential; if damage occurs to the brain and cells lose their membrane integrity, ions will rush into the cell to balance the chemical and electrical gradients that were previously established. The membrane voltage will become zero, but the chemical gradient will still exist. To neutralize the negative charges within the cell, cations flow in, which increases the osmotic pressure inside relative to the outside of the cell. The increased osmotic pressure forces water to flow into the cell, and tissue swelling occurs.
See also
Chemical equilibrium
Nernst equation
Double layer (biology)
Osmotic pressure
Diffusion equilibrium
References
IUPAC Compendium of Chemical Terminology 2nd Edition (1997)
Van C. Mow Basic orthopaedic biomechanics and mechano-biology, 2nd Ed. Lippincott Williams & Wilkins, Philadelphia, 2005
Mapleson W. W. "Computation of the effect of Donnan equilibrium on pH in equilibrium dialysis". Journal of Pharmacological Methods, May 1987.
External links
Gibbs–Donnan effect simulator
Difference between observed and expected oncotic pressure values
Physical chemistry
Colloidal chemistry | Gibbs–Donnan effect | ["Physics", "Chemistry"] | 1,118 | ["Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "nan", "Physical chemistry"] |
25,707,847 | https://en.wikipedia.org/wiki/List%20of%20fluvial%20landforms | Landforms related to rivers and other watercourses include:
(watershed)
Fluvial landforms of streams
(Gorge)
See also
Glossary_of_landforms
Glossary_of_landforms#Fluvial_landforms
F
F | List of fluvial landforms | ["Environmental_science"] | 51 | ["Hydrology", "Hydrology lists"] |
5,117,528 | https://en.wikipedia.org/wiki/Recombineering | Recombineering (recombination-mediated genetic engineering) is a genetic and molecular biology technique based on homologous recombination systems, as opposed to the older/more common method of using restriction enzymes and ligases to combine DNA sequences in a specified order. Recombineering is widely used for bacterial genetics, in the generation of target vectors for making a conditional mouse knockout, and for modifying DNA of any source often contained on a bacterial artificial chromosome (BAC), among other applications.
Development
Although developed in bacteria, much of the inspiration for recombineering techniques came from methods first developed in Saccharomyces cerevisiae where a linear plasmid was used to target genes or clone genes off the chromosome. In addition, recombination with single-strand oligonucleotides (oligos) was first shown in Saccharomyces cerevisiae. Recombination was observed to take place with oligonucleotides as short as 20 bases.
Recombineering is based on homologous recombination in Escherichia coli mediated by bacteriophage proteins, either RecE/RecT from Rac prophage or Redαβδ from bacteriophage lambda. The lambda Red recombination system is now most commonly used and the first demonstrations of Red in vivo genetic engineering were independently made by Kenan Murphy and Francis Stewart. However, Murphy's experiments required expression of RecA and also employed long homology arms. Consequently, the implications for a new DNA engineering technology were not obvious. The Stewart lab showed that these homologous recombination systems mediate efficient recombination of linear DNA molecules flanked by homology sequences as short as 30 base pairs (40-50 base pairs are more efficient) into target DNA sequences in the absence of RecA. Now the homology could be provided by oligonucleotides made to order, and standard recA cloning hosts could be used, greatly expanding the utility of recombineering.
Recombineering with dsDNA
Recombineering utilizes linear DNA substrates that are either double-stranded (dsDNA) or single-stranded (ssDNA). Most commonly, dsDNA recombineering has been used to create gene replacements, deletions, insertions, and inversions. Gene cloning and gene/protein tagging (His tags etc.) is also common. For gene replacements or deletions, usually a cassette encoding a drug-resistance gene is made by PCR using bi-partite primers. These primers consist of (from 5'→3') 50 bases of homology to the target region, where the cassette is to be inserted, followed by 20 bases to prime the drug-resistance cassette. The exact junction sequence of the final construct is determined by primer design. These events typically occur at a frequency of approximately 10⁴ per 10⁸ cells that survive electroporation. Electroporation is the method used to transform the linear substrate into the recombining cell.
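The bi-partite primer layout described above can be sketched as a small helper. This is an illustrative sketch, not a protocol from the source: the function names are made up, the sequences below are synthetic placeholders, and a real design would also check melting temperature, uniqueness of the homology arms, and secondary structure.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq.upper()))

def bipartite_primers(up_flank, down_flank, cassette, homology=50, priming=20):
    """Build 5'->3' recombineering primers: a homology arm to the target
    locus followed by a sequence priming the drug-resistance cassette."""
    fwd = up_flank[-homology:] + cassette[:priming]
    rev = revcomp(down_flank[:homology]) + revcomp(cassette[-priming:])
    return fwd, rev

# Synthetic placeholder sequences (not real loci or cassettes):
up = "ATGC" * 20                          # 80 nt upstream of the insertion site
down = "GGCC" * 20                        # 80 nt downstream of the insertion site
cassette = "A" * 30 + "TTTT" + "C" * 30   # stand-in resistance cassette
fwd, rev = bipartite_primers(up, down, cassette)
assert len(fwd) == len(rev) == 70         # 50 nt homology + 20 nt priming
```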
Selection/counterselection technique
In some cases, one desires a deletion with no marker left behind, to make a gene fusion, or to make a point mutant in a gene. This can be done with two rounds of recombination. In the first stage of recombineering, a selection marker on a cassette is introduced to replace the region to be modified. In the second stage, a second counterselection marker (e.g. sacB) on the cassette is selected against following introduction of a target fragment containing the desired modification. Alternatively, the target fragment could be flanked by loxP or FRT sites, which could be removed later simply by the expression of the Cre or FLP recombinases, respectively.
A novel selection marker "mFabI" was also developed to increase recombineering efficiency.
Recombineering with ssDNA
Recombineering with ssDNA provided a breakthrough both in the efficiency of the reaction and the ease of making point mutations. This technique was further enhanced by the discovery that by avoiding the methyl-directed mismatch repair system, the frequency of obtaining recombinants can be increased to over 10⁷ per 10⁸ viable cells. This frequency is high enough that alterations can now be made without selection. With optimized protocols, over 50% of the cells that survive electroporation contain the desired change. Recombineering with ssDNA only requires the Red Beta protein; Exo, Gamma and the host recombination proteins are not required. As proteins homologous to Beta and RecT are found in many bacteria and bacteriophages (>100 as of February 2010), recombineering is likely to work in many different bacteria. Thus, recombineering with ssDNA is expanding the genetic tools available for research in a variety of organisms. To date, recombineering has been performed in E. coli, S. enterica, Y. pseudotuberculosis, S. cerevisiae and M. tuberculosis.
Red-Independent recombination
In 2010, it was demonstrated that ssDNA recombination can occur in the absence of known recombination functions. Recombinants were found at frequencies of up to 10⁴ per 10⁸ viable cells. This Red-independent activity has been demonstrated in P. syringae, E. coli, S. enterica serovar Typhimurium and S. flexneri.
Applications and benefits of recombineering
The biggest advantage of recombineering is that it obviates the need for conveniently positioned restriction sites, whereas in conventional genetic engineering, DNA modification is often compromised by the availability of unique restriction sites. In engineering large constructs of >100 kb, such as the Bacterial Artificial Chromosomes (BACs), or chromosomes, recombineering has become a necessity. Recombineering can generate the desired modifications without leaving any 'footprints' behind. It also forgoes multiple cloning stages for generating intermediate vectors and therefore is used to modify DNA constructs in a relatively short time-frame. The homology required is short enough that it can be generated in synthetic oligonucleotides and recombination with short oligonucleotides themselves is incredibly efficient. Recently, recombineering has been developed for high throughput DNA engineering applications termed 'recombineering pipelines'. Recombineering pipelines support the large scale production of BAC transgenes and gene targeting constructs for functional genomics programs such as EUCOMM (European Conditional Mouse Mutagenesis Consortium) and KOMP (Knock-Out Mouse Program). Recombineering has also been automated, a process called "MAGE" -Multiplex Automated Genome Engineering, in the Church lab. With the development of CRISPR technologies, construction of CRISPR interference strains in E. coli requires only one-step oligo recombineering, providing a simple and easy-to-implement tool for gene expression control.
"Recombineering tools" and laboratory protocols have also been implemented for a number of plant species. These tools and procedures are customizable, scalable, and freely available to all researchers.
References
External links
redrecombineering.ncifcrf.gov - Details about recombineering as well as protocols, FAQ's and can be used to request strains and plasmids needed for recombineering.
Genetic engineering
Genetics techniques | Recombineering | ["Chemistry", "Engineering", "Biology"] | 1,558 | ["Genetics techniques", "Biological engineering", "Genetic engineering", "Molecular biology"] |
5,118,041 | https://en.wikipedia.org/wiki/Constraint%20algebra | In theoretical physics, a constraint algebra is a linear space of all constraints and all of their polynomial functions or functionals whose action on the physical vectors of the Hilbert space should be equal to zero.
For example, in electromagnetism, Gauss's law

∇ · E − ρ = 0

is an equation of motion that does not include any time derivatives. This is why it is counted as a constraint, not a dynamical equation of motion. In quantum electrodynamics, one first constructs a Hilbert space in which Gauss's law does not hold automatically. The true Hilbert space of physical states is constructed as a subspace of the original Hilbert space, consisting of vectors |ψ⟩ that satisfy

(∇ · E − ρ)|ψ⟩ = 0.
In more general theories, the constraint algebra may be a noncommutative algebra.
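By contrast, the constraint algebra of electromagnetism is abelian: smearing Gauss's law with test functions gives constraints whose commutators vanish, since the electric field components commute with one another and, for an abelian gauge group, so do the matter charge densities. A sketch in LaTeX (the smearing and sign conventions here are a conventional choice, not taken from the source):

```latex
% Smeared Gauss constraints of electromagnetism and their (vanishing) algebra
G[\lambda] = \int \mathrm{d}^3x\, \lambda(x)\,\bigl(\nabla\cdot\mathbf{E}(x) - \rho(x)\bigr),
\qquad
\bigl[\,G[\lambda],\, G[\mu]\,\bigr] = 0 .
```

Because the brackets vanish, imposing G[λ]|ψ⟩ = 0 for every λ is self-consistent; in theories such as general relativity, the analogous brackets close only with field-dependent structure functions, which is the noncommutative situation referred to above.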
See also
First class constraints
References
Quantum mechanics
Quantum field theory
String theory | Constraint algebra | ["Physics", "Astronomy"] | 166 | ["Quantum field theory", "Astronomical hypotheses", "Theoretical physics", "Quantum mechanics", "String theory", "Quantum physics stubs"] |
5,120,902 | https://en.wikipedia.org/wiki/Perindopril | Perindopril is a medication used to treat high blood pressure, heart failure, or stable coronary artery disease. As a long-acting ACE inhibitor, it works by relaxing blood vessels and decreasing blood volume. As a prodrug, perindopril is hydrolyzed in the liver to its active metabolite, perindoprilat. It was patented in 1980 and approved for medical use in 1988.
Perindopril is taken in the form of perindopril arginine (with arginine, brand names include Coversyl, Coversum) or perindopril erbumine (with erbumine (tert-Butylamine), brand name Aceon). Both forms are therapeutically equivalent and interchangeable, but the dose prescribed to achieve the same effect differs between the two forms.
In Australia, it was one of the top 10 most prescribed medications between 2017 and 2023.
Medical uses
Perindopril shares the indications of ACE inhibitors as a class, including essential hypertension, stable coronary artery disease (reduction of risk of cardiac events in patients with a history of myocardial infarction and/or revascularization), treatment of symptomatic coronary artery disease or heart failure, and diabetic nephropathy.
Combination therapy
With indapamide
In combination with indapamide, perindopril has been shown to significantly reduce the progression of chronic kidney disease and renal complications in patients with type 2 diabetes. In addition, the Perindopril pROtection aGainst REcurrent Stroke Study (PROGRESS) found that whilst perindopril monotherapy demonstrated no significant benefit in reducing recurrent strokes when compared to placebo, the addition of low dose indapamide to perindopril therapy was associated with larger reductions in both blood pressure lowering and recurrent stroke risk in patients with pre-existing cerebrovascular disease, irrespective of their blood pressure. There is evidence to support the use of perindopril and indapamide combination over perindopril monotherapy to prevent strokes and improve mortality in patients with a history of stroke, transient ischaemic attack or other cardiovascular disease.
With amlodipine
The Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm (ASCOT-BPLA) was a landmark 2005 trial that compared the established combination of atenolol and bendroflumethiazide with the newer combination of amlodipine and perindopril (trade names Viacoram, AceryCal etc.). The study of more than 19,000 patients worldwide was terminated earlier than anticipated because it clearly demonstrated a statistically significant improvement in mortality and cardiovascular outcomes with the newer treatment. The combination of amlodipine and perindopril remains in current treatment guidelines for hypertension, and the outcomes of the ASCOT-BPLA trial paved the way for further research into combination therapy and newer agents.
Contraindications
Children
Pregnancy
Lactation
Situations where a patient has a history of hypersensitivity
Kidney failure
Precautions
Assess kidney function before and during treatment where appropriate.
Renovascular hypertension
Surgery/anesthesia
An analysis of the PROGRESS trial showed that perindopril reduced cardiovascular events by 30% in patients with chronic kidney disease, defined as a creatinine clearance (CrCl) below 60 mL/min. Meta-analyses published in 2016 and 2017 found that ACE inhibitors not only reduced cardiovascular events but also slowed the progression of renal failure by 39% compared with placebo. These studies included patients with moderate to severe kidney disease and those on dialysis.
Its renoprotective benefits of decreasing blood pressure and relieving filtration pressure are highlighted in a 2016 review. ACE inhibitors can cause an initial increase in serum creatinine, which returns to baseline within a few weeks in the majority of patients. It has been suggested that increased monitoring, especially in advanced kidney failure, will minimise any related risk and improve long-term benefits.
Use cautiously in patients with sodium or volume depletion due to potential excessive hypotensive effects of renin-angiotensin blockade causing symptomatic hypotension. Careful monitoring or short-term dose reduction of diuretics prior to commencing perindopril is recommended to prevent this potential effect. A diuretic may later be given in combination if necessary; potassium-sparing diuretics are not recommended in combination with perindopril due to the risk of hyperkalaemia.
Combination with neuroleptics or imipramine-type drugs may increase the blood pressure lowering effect. Serum lithium concentrations may rise during lithium therapy.
Side effects
Side effects are mild, usually at the start of treatment; they include:
Cough
Fatigue
Weakness/Asthenia
Headache
Disturbances of mood and/or sleep
Less often
Taste impairment
Epigastric discomfort
Nausea
Abdominal pain
Rash
Reversible increases in blood urea and creatinine may be observed. Proteinuria has occurred in some patients. Rarely, angioneurotic edema and decreases in hemoglobin, red cells, and platelets have been reported.
Composition
Each tablet contains 2, 4, or 8 mg of the tert-butylamine salt of perindopril. Perindopril is also available under the trade name Coversyl Plus, containing 4 mg of perindopril combined with 1.25 mg indapamide, a thiazide-like diuretic.
In Australia, each tablet contains 2.5, 5, or 10 mg of perindopril arginine. Perindopril is also available under the trade name Coversyl Plus, containing 5 mg of perindopril arginine combined with 1.25 mg indapamide and Coversyl Plus LD, containing 2.5 mg of perindopril arginine combined with 0.625 mg indapamide.
The efficacy and tolerability of a fixed-dose combination of 4 mg perindopril and 5 mg amlodipine, a calcium channel antagonist, has been confirmed in a prospective, observational multicenter trial of 1,250 hypertensive patients. A preparation of the two drugs is available commercially as Coveram.
Society and culture
Brand names
Perindopril is available under the following brand names among others:
Marketing
In July 2014, the European Commission imposed fines on Laboratoires Servier and five companies producing generics, due to Servier's abuse of its dominant market position in breach of European Union competition law. Servier's strategy included acquiring the principal source of generic production of perindopril and entering into several pay-for-delay agreements with potential generic competitors.
References
Further reading
External links
ACE inhibitors
Alpha-Amino acids
Carboxamides
Enantiopure drugs
Ethyl esters
Indoles
Laboratoires Servier
Prodrugs
Propyl compounds
Secondary amino acids | Perindopril | ["Chemistry"] | 1,421 | ["Chemicals in medicine", "Stereochemistry", "Enantiopure drugs", "Prodrugs"] |
5,121,058 | https://en.wikipedia.org/wiki/Grob%20fragmentation | A Grob fragmentation is an elimination reaction that breaks a neutral aliphatic chain into three fragments: a positive ion spanning atoms 1 and 2 (the "electrofuge"), an unsaturated neutral fragment spanning positions 3 and 4, and a negative ion (the "nucleofuge") comprising the rest of the chain.
For example, the positive ion may be a carbenium, carbonium or acylium ion; the neutral fragment could be an alkene, alkyne, or imine; and the negative fragment could be a tosyl or hydroxyl ion:
The reaction is named for the Swiss chemist Cyril A. Grob.
Alternately, atom 1 could begin as an anion, in which case it becomes neutral rather than going from neutral to cationic.
History
An early instance of fragmentation is the dehydration of di(tert-butyl)methanol yielding 2-methyl-2-butene and isobutene, a reaction described in 1933 by Frank C. Whitmore. This reaction proceeds by formation of a secondary carbocation followed by a rearrangement reaction to a more stable tertiary carbocation and elimination of a t-butyl cation:
Albert Eschenmoser in 1952 investigated the base catalysed fragmentation of certain beta hydroxy ketones:
The original work by Grob (1955) concerns the formation of 1,5-hexadiene from cis- or trans-1,4-dibromocyclohexane by sodium metal:
According to reviewers Prantz and Mulzer (2010), the name Grob fragmentation was chosen "in more or less glaring disregard of the earlier contributions".
Reaction mechanism
The reaction mechanism varies with the reactant and the reaction conditions: the fragmentation may take place in a concerted reaction, in two steps with a carbocationic intermediate when the nucleofuge leaves first, or in two steps with an anionic intermediate when the electrofuge leaves first. The carbocationic pathway is more common and is facilitated by the stability of the cation formed and the leaving-group ability of the nucleofuge. With cyclic substrates, the preferred geometry of elimination has the sigma bond that drives out the leaving group anti to it, analogous to the conformational orientation in the E2 mechanism of elimination reactions.
Examples
Thapsigargin from Wieland–Miescher ketone
An example of a Grob-like fragmentation in organic synthesis is the expansion of the Wieland–Miescher ketone to thapsigargin:
In this reaction, diastereoselective reduction of the ketone 1 with sodium borohydride yields alcohol 2, which is functionalized to the mesylate 3 with mesyl chloride in pyridine. The selectivity of the initial reduction of ketone 1 is a result of borohydride approaching from the bottom face to avoid steric clash with the axial methyl group. Then reduction of the enone to allyl alcohol 4 with tri-tert-butoxyaluminium hydride in tetrahydrofuran followed by hydroboration with borane in THF yields the borane 5 (only one substituent displayed for clarity). The diastereoselectivity of the hydroboration is a result of two factors: avoidance of the axial methyl group as well as axial hydride addition to avoid a twist-boat conformation in the transition state. The Grob fragmentation to 6 takes place with sodium methoxide in methanol at reflux. A methoxide group attacks the boron atom giving a borate complex which fragments. As each boron atom can hold three substrate molecules (R), the ultimate boron byproduct is trimethyl borate. As seen in 6, the mesylate being in the equatorial position allows its sigma star orbital to align ideally with the sigma bond drawn, allowing for the correct olefin geometry seen in 7.
Another example is an epoxy alcohol fragmentation reaction as part of the Holton Taxol total synthesis.
aza-Grob fragmentation
3-aza-Grob fragmentation is a variation which takes place when an electrofuge and a nucleofuge are situated at positions 1 and 5 on a secondary or tertiary amine chain, with the nitrogen at position 3. The reaction products are an electrofugal fragment, an imine, and a nucleofugal fragment (such as an alcohol).
3-aza-Grob fragmentation can proceed with several different nucleofuges. The reaction mechanism has been reported to begin with the reduction of an ether protected amide to form a secondary alcohol. Fragmentation then takes place in a concerted step to form the reaction products.
The scope of the reaction has been found to cover THF and tetrahydrothiophene protecting groups using various hydride agents.
See also
Eschenmoser fragmentation
Wharton reaction
References
Elimination reactions
Name reactions | Grob fragmentation | ["Chemistry"] | 1,034 | ["Name reactions"] |
5,121,157 | https://en.wikipedia.org/wiki/Sodium%20methoxide | Sodium methoxide is the simplest sodium alkoxide. With the formula , it is a white solid, which is formed by the deprotonation of methanol. It is a widely used reagent in industry and the laboratory. It is also a dangerously caustic base.
Preparation and structure
Sodium methoxide is prepared by treating methanol with sodium:
The reaction is so exothermic that ignition is possible. The resulting solution, which is colorless, is often used as a source of sodium methoxide, but the pure material can be isolated by evaporation followed by heating to remove residual methanol.
As a solid, sodium methoxide is polymeric, with sheet-like arrays of Na+ centers, each bonded to four oxygen centers.
The structure, and hence the basicity, of sodium methoxide in solution depends on the solvent. It is a significantly stronger base in DMSO where it is more fully ionized and free of hydrogen bonding.
Applications
Organic synthesis
Sodium methoxide is a routinely used base in organic chemistry, applicable to the synthesis of numerous compounds ranging from pharmaceuticals to agrichemicals. As a base, it is employed in dehydrohalogenations and various condensations. It is also a nucleophile for the production of methyl ethers.
Industrial applications
Sodium methoxide is used as an initiator of anionic addition polymerization with ethylene oxide, forming a polyether with high molecular weight. Biodiesel is prepared from vegetable oils and animal fats (fatty acid triglycerides) by transesterification with methanol to give fatty acid methyl esters (FAMEs). Sodium methoxide acts as a catalyst for this reaction, but will combine with any free fatty acids present in the oil/fat feedstock to form soap byproducts.
Stability
The solid hydrolyzes in water to give methanol and sodium hydroxide. Indeed, samples of sodium methoxide are often contaminated with sodium hydroxide, which is difficult to detect. The compound absorbs carbon dioxide from the air to form methanol and sodium carbonate, thus diminishing the alkalinity of the base.
Commercial batches of sodium methoxide show variable levels of degradation, and were a major source of irreproducibility when used in Suzuki reactions.
Safety
Sodium methoxide is highly caustic and reacts with water to give methanol, which is toxic and volatile.
NFPA 704
The ratings for this substance vary widely.
See also
Methoxide
Biodiesel production
Sodium ethoxide
References
Alkoxides
Organic sodium salts
Organic compounds with 1 carbon atom | Sodium methoxide | [
"Chemistry"
] | 555 | [
"Alkoxides",
"Functional groups",
"Salts",
"Organic compounds",
"Organic sodium salts",
"Bases (chemistry)",
"Organic compounds with 1 carbon atom"
] |
5,122,241 | https://en.wikipedia.org/wiki/Displacement%20operator | In the quantum mechanics study of optical phase space, the displacement operator for one mode is the shift operator in quantum optics,
D(α) = exp(α ↠− α* â),
where α is the amount of displacement in optical phase space, α* is the complex conjugate of that displacement, and â and ↠are the lowering and raising operators, respectively.
The name of this operator is derived from its ability to displace a localized state in phase space by a magnitude α. It may also act on the vacuum state by displacing it into a coherent state. Specifically,
D(α)|0⟩ = |α⟩,
where |α⟩ is a coherent state, which is an eigenstate of the annihilation (lowering) operator.
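The displacement of the vacuum into a coherent state can be checked numerically in a truncated Fock space. The sketch below (the truncation dimension N = 40 and α = 0.7 + 0.3i are arbitrary choices) builds D(α) = exp(α ↠− α* â) as a matrix exponential and compares D(α)|0⟩ against the analytic coherent-state amplitudes e^(−|α|²/2) αⁿ/√n!:

```python
import math
import numpy as np
from scipy.linalg import expm

N = 40                       # Fock-space truncation (ample for small |alpha|)
alpha = 0.7 + 0.3j

# Lowering operator a in the truncated number basis: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Displacement operator D(alpha) = exp(alpha a† - alpha* a).
D = expm(alpha * adag - np.conj(alpha) * a)

vac = np.zeros(N)
vac[0] = 1.0
displaced = D @ vac

# Analytic coherent-state amplitudes <n|alpha> = e^{-|a|^2/2} a^n / sqrt(n!).
coherent = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** n
                     / math.sqrt(math.factorial(n)) for n in range(N)])

print(np.max(np.abs(displaced - coherent)))  # tiny: truncation/roundoff only
```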
Properties
The displacement operator is a unitary operator, and therefore obeys
D(α)D†(α) = D†(α)D(α) = 1̂,
where 1̂ is the identity operator. Since D†(α) = D(−α), the hermitian conjugate of the displacement operator can also be interpreted as a displacement of opposite magnitude (−α). The effect of applying this operator in a similarity transformation of the ladder operators results in their displacement:
D†(α) â D(α) = â + α
D†(α) ↠D(α) = ↠+ α*
The product of two displacement operators is another displacement operator whose total displacement, up to a phase factor, is the sum of the two individual displacements. This can be seen by utilizing the Baker–Campbell–Hausdorff formula,
e^(α↠− α*â) e^(β↠− β*â) = e^((α+β)↠− (α*+β*)â) e^((αβ* − α*β)/2),
which shows us that:
D(α)D(β) = e^((αβ* − α*β)/2) D(α + β).
When acting on an eigenket, the phase factor e^((αβ* − α*β)/2) appears in each term of the resulting state, which makes it physically irrelevant.
It further leads to the braiding relation
D(α)D(β) = e^(αβ* − α*β) D(β)D(α).
Alternative expressions
The Kermack–McCrae identity gives two alternative ways to express the displacement operator:
D(α) = e^(−|α|²/2) e^(+α â†) e^(−α* â)
D(α) = e^(+|α|²/2) e^(−α* â) e^(+α â†)
Multimode displacement
The displacement operator can also be generalized to multimode displacement. A multimode creation operator can be defined as
A_ψ† = ∫ dk ψ(k) â†(k),
where k is the wave vector and its magnitude is related to the frequency ω_k according to |k| = ω_k/c. Using this definition, we can write the multimode displacement operator as
D_ψ(α) = exp(α A_ψ† − α* A_ψ),
and define the multimode coherent state as
|α_ψ⟩ = D_ψ(α)|0⟩.
See also
Optical phase space
References
Quantum optics | Displacement operator | [
"Physics"
] | 374 | [
"Quantum optics",
"Quantum mechanics"
] |
5,122,435 | https://en.wikipedia.org/wiki/Flexibility%20%28engineering%29 | Flexibility is used as an attribute of various types of systems. In the field of engineering systems design, it refers to designs that can adapt when external changes occur. Flexibility has been defined differently in many fields of engineering, architecture, biology, economics, etc. In the context of engineering design one can define flexibility as the ability of a system to respond to potential internal or external changes affecting its value delivery, in a timely and cost-effective manner. Thus, flexibility for an engineering system is the ease with which the system can respond to uncertainty in a manner to sustain or increase its value delivery. Uncertainty is a key element in the definition of flexibility. Uncertainty can create both risks and opportunities in a system, and it is with the existence of uncertainty that flexibility becomes valuable.
Flexible Manufacturing System
Flexibility has been especially thoroughly studied for manufacturing systems. For manufacturing science eleven different classes of flexibility have been identified [Browne, 1984], [Sethi and Sethi, 1990]:
Machine flexibility - The different operation types that a machine can perform.
Material handling flexibility - The ability to move the products within a manufacturing facility.
Operation flexibility - The ability to produce a product in different ways.
Process flexibility - The set of products that the system can produce.
Product flexibility - The ability to add new products in the system.
Routing flexibility - The different routes (through machines and workshops) that can be used to produce a product in the system.
Volume flexibility - The ease to profitably increase or decrease the output of an existing system. At firm level, it is the ability of a firm to operate profitably at different output levels. Firms often use volume flexibility as a benchmark to assess their performance vis-à-vis their competitors.
Expansion flexibility - The ability to build out the capacity of a system.
Program flexibility - The ability to run a system automatically.
Production flexibility - The number of products a system currently can produce.
Market flexibility - The ability of the system to adapt to market demands.
These definitions hold under the current conditions of the system, assuming that no major setups are conducted or investments are made (except for expansion flexibility). Many of the flexibility types are linked to each other; increasing one flexibility type can also increase another, but in some cases trade-offs between two flexibility types are necessary.
Bibliography
Browne, J. et al. "Classification of flexible manufacturing systems", The FMS Magazine 1984 April, 114–117.
Sethi, A.K. and Sethi, S.P. "Flexibility in Manufacturing: A survey", The International Journal of Flexible Manufacturing Systems 1990 2, 289–328.
References
Structural engineering
Software quality | Flexibility (engineering) | [
"Engineering"
] | 526 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
28,856,940 | https://en.wikipedia.org/wiki/Instituto%20de%20Plasmas%20e%20Fus%C3%A3o%20Nuclear | Instituto de Plasmas e Fusão Nuclear (IPFN) (Institute for Plasmas and Nuclear Fusion) is a research unit of Instituto Superior Técnico (IST), Lisbon, and a leading Portuguese institution in physics research. IPFN has the status of Associate Laboratory in the thematic areas of controlled nuclear fusion, plasma technologies and intense lasers, granted by the Portuguese Foundation for Science and Technology.
IPFN was formally created in January 2008, as a result of the merger of the former research units Center for Nuclear Fusion and Center for Plasma Physics. As of 2015, almost 190 people work at IPFN, of whom more than 100 hold PhDs.
Organization
IPFN is organized in seven research groups: Engineering and Systems Integration, Experimental Physics, Materials Processing and Characterisation, Theory and Modelling, Lasers and Plasmas, Gas Discharges and Gaseous Electronics, and High Pressure Plasmas. The activities in the frame of the Associate Laboratory are evaluated by an External Evaluation Committee. IPFN is also the research unit of the Contract of Association between EURATOM and IST, in force since 1990. These activities are coordinated by the Head of Research Unit and monitored by a steering committee.
Main research fields
IPFN activities are centered on the following competences:
Magnetic Confinement Fusion Devices
Fusion Engineering Systems
Fusion Theory and Modelling
Inertial fusion
Laser-Plasma Accelerators
High-Performance Computing
Relativistic Astrophysics
Novel Radiation Sources
Ultra Intense Photonics
Space Physics
Environmental Plasma Engineering Laboratory
Kinetics in Plasmas and Afterglows
Modelling of Plasma Sources
Quantum Plasmas
See also
Instituto Superior Técnico
References
External links
Instituto de Plasmas e Fusão Nuclear (official page)
Research institutes in Portugal
Fusion power
Plasma physics facilities
Physics research institutes
Tokamaks
University of Lisbon | Instituto de Plasmas e Fusão Nuclear | [
"Physics",
"Chemistry"
] | 361 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics facilities",
"Plasma physics"
] |
30,313,921 | https://en.wikipedia.org/wiki/Polygon%20experiment | The POLYGON experiment was a pioneering experiment in oceanography conducted in the middle of the Atlantic Ocean during the 1970s. The experiment, led by Leonid Brekhovskikh, was the first to establish the existence of so-called mesoscale eddies, eddies at the 100-kilometre and 100-day scale, which triggered the "mesoscale revolution". The existence of mesoscale eddies was predicted by Henry Stommel in the 1960s, but there was no way to observe them with traditional sampling methods.
Setup and results
POLYGON was led by Leonid Brekhovskikh, from the Andreev Acoustics Institute, and involved six research vessels and an extensive network of current meters. The current meters were arranged in a cross spanning a region of 113 by 113 nautical miles dubbed the "polygon". The experiment recorded temperature and flow, replacing the meters every 25 days, while taking care that the replacements would not create gaps in the data. The research vessels involved were the Akademik Kurchatov, the Dmitri Mendeleev, the Andrei Vil'kitskii, the Akademik Vernadskii, the Sergei Vavilov and the Pyotr Lebedev.
Of the results, Brekhovskikh wrote in the original breakthrough article: "Even with somewhat less sophisticated gear than was desirable, the results... exceeded all expectations in terms of ... the significance of the scientific results obtained. Undoubtedly the experience... will be very useful in the preparation for the forthcoming international campaign MODE... It looks as though some largescale eddy or wave disturbances were travelling across the POLYGON site from east to west. Their scales were close to those of the planetary baroclinic Rossby waves..."
Follow up
POLYGON was followed by the MODE experiment (Mid Ocean Dynamics Experiment) led by Henry Stommel, and the POLYMODE experiment by Andrei Monin. Walter Munk commented that the POLYGON experiment "ignited the mesoscale revolution [and that] MODE defined the new order" and that "oceanography has never been the same" since.
Notes
References
Further reading
Oceanography
Physics experiments
Science and technology in the Soviet Union
1970s in the Soviet Union | Polygon experiment | [
"Physics",
"Environmental_science"
] | 463 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Physics experiments",
"Oceanography",
"Experimental physics"
] |
30,319,597 | https://en.wikipedia.org/wiki/CANDLE%20Synchrotron%20Research%20Institute | The Center for the Advancement of Natural Discoveries using Light Emission, more commonly called the CANDLE Synchrotron Research Institute, is a research institute in Yerevan, Armenia. CANDLE is a project for a 3-gigaelectronvolt, third-generation synchrotron light source for fundamental, industrial and applied research in biology, physics, chemistry, medicine, materials and environmental sciences.
The government of Armenia allocated an area of 20 hectares near the town of Abovyan for the center's projects.
References
External links
Official website
CANDLE Project Overview, V.Tsakanov, Proc. PAC'2005, Knoxville, Tennessee.
Educational institutions established in 2010
Education in Yerevan
Synchrotron radiation facilities
2010 establishments in Armenia | CANDLE Synchrotron Research Institute | [
"Materials_science"
] | 148 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
27,579,471 | https://en.wikipedia.org/wiki/Electrical%20network%20frequency%20analysis | Electrical network frequency (ENF) analysis is an audio forensics technique for validating audio recordings by comparing frequency changes in background mains hum in the recording with long-term high-precision historical records of mains frequency changes from a database. In effect the mains hum signal is treated as if it were a time-dependent digital watermark that can help identify when the recording was created, detect edits in the recording, or disprove tampering of a recording. Historical records of mains frequency changes are kept on record, e.g., by police in the German federal state of Bavaria since 2010 and the United Kingdom Metropolitan Police since 2005.
The technology has been hailed as "the most significant development in audio forensics since Watergate." However, according to a paper by Huijbregtse and Geradts, the ENF technique, although powerful, has significant limitations caused by ambiguity based on fixed frequency offsets during recording, and self-similarity within the mains frequency database, particularly for recordings shorter than 10 minutes.
More recently, researchers demonstrated that indoor lights such as fluorescent lights and incandescent bulbs vary their light intensity in accordance with the voltage supplied, which in turn depends on the voltage supply frequency. As a result, the light intensity can carry the frequency fluctuation information to visual sensor recordings in a similar way as the electromagnetic waves from the power transmission lines carry the ENF information to audio sensing mechanisms. Based on this result, researchers demonstrated that the visual track of video taken in indoor lighting environments also contains ENF traces that can be extracted by estimating the frequency at which the ENF will appear in the video, since the low sampling frequency of video (25–30 Hz) causes significant aliasing. It was also demonstrated in the same research that the ENF signatures from the visual stream and the ENF signature from the audio stream in a given video should match. As a result, the matching between the two signals can be used to determine whether the audio and visual tracks were recorded together or superimposed later.
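The database-matching step can be illustrated with a toy sketch. A real pipeline first extracts the hum frequency from the recording (e.g., via a short-time Fourier transform around 50/60 Hz); here a synthetic frequency history stands in for the grid log, and a noisy segment of it plays the role of the recording's ENF trace. All signals and parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ENF database: one frequency estimate per second over 2 hours,
# drifting randomly around the 50 Hz nominal mains frequency.
database = 50.0 + np.cumsum(rng.normal(0.0, 0.002, size=7200))

# A "recording" whose hum tracks the grid for 10 minutes, starting at an
# unknown offset, plus a little measurement noise.
true_offset = 3210
segment = database[true_offset:true_offset + 600] + rng.normal(0.0, 0.0005, 600)

def locate(segment, database):
    """Return the offset minimizing the mean-squared frequency mismatch."""
    n = len(segment)
    errs = [np.mean((database[i:i + n] - segment) ** 2)
            for i in range(len(database) - n + 1)]
    return int(np.argmin(errs))

print(locate(segment, database))  # recovers 3210
```

Because the mains-frequency drift is effectively unique over long stretches, the mismatch has a sharp minimum at the true offset; the self-similarity caveat from Huijbregtse and Geradts corresponds to this minimum becoming ambiguous for short segments.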
Use by law enforcement
The distinctive electrical hums have been used to provide forensic verification of audio recordings, a process fully automated in the United Kingdom.
References
External links
The hidden background noise that can catch criminals (Tom Scott/YouTube)
Electric power
Sound recording
Forensic techniques | Electrical network frequency analysis | [
"Physics",
"Engineering"
] | 469 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
27,582,895 | https://en.wikipedia.org/wiki/Magnetic%20catalysis | Magnetic catalysis is a physics phenomenon, which is defined as an enhancement of dynamical symmetry breaking by an external magnetic field in quantum field theory, used for the description of quantum (quasi-)particles in particle physics, nuclear physics and condensed matter physics. The underlying phenomenon is a consequence of the strong tendency of a magnetic field to enhance binding of oppositely charged particles into bound states. The catalyzing effect comes from a partial restriction (dimensional reduction) of the motion of charged particles in the directions perpendicular to the direction of the magnetic field.
Commonly, the magnetic catalysis is specifically associated with spontaneous breaking of flavor or chiral symmetry in quantum field theory, which is enhanced or triggered by the presence of an external magnetic field.
General description
The underlying mechanism behind magnetic catalysis is the dimensional reduction of low-energy charged spin-1/2 particles. As a result of such a reduction, there exists a strong enhancement of the particle-antiparticle pairing responsible for symmetry breaking. For gauge theories in 3+1 space-time dimensions, such as quantum electrodynamics and quantum chromodynamics, the dimensional reduction leads to an effective (1+1)-dimensional low-energy dynamics. (Here the dimensionality of space-time is written as D+1 for D spatial directions.) In simple terms, the dimensional reduction reflects the fact that the motion of charged particles is (partially) restricted in the two space-like directions perpendicular to the magnetic field. However, this orbital motion constraint alone is not sufficient (for example, there is no dimensional reduction for charged scalar particles, carrying spin 0, although their orbital motion is constrained in the same way.) It is also important that the fermions have spin 1/2 and, as follows from the Atiyah–Singer index theorem, their lowest Landau level states have an energy independent of the magnetic field. (The corresponding energy vanishes in the case of massless particles.) This is in contrast to the energies in the higher Landau levels, which are proportional to the square root of the magnetic field. Therefore, if the field is sufficiently strong, only the lowest Landau level states are dynamically accessible at low energies. The states in the higher Landau levels decouple and become almost irrelevant. The phenomenon of magnetic catalysis has applications in particle physics, nuclear physics and condensed matter physics.
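The Landau-level argument can be made concrete with a few lines of arithmetic. In natural units the relativistic Landau levels are E_n = sqrt(m² + 2n|eB|), so the n = 0 level is independent of the field (and gapless for massless fermions) while the higher levels grow as the square root of the magnetic field. A minimal numeric sketch (the unit choices are arbitrary):

```python
import numpy as np

def landau_levels(m, eB, n_max=4):
    """Relativistic Landau levels E_n = sqrt(m^2 + 2 n |eB|), natural units."""
    n = np.arange(n_max + 1)
    return np.sqrt(m ** 2 + 2 * n * np.abs(eB))

# Massless fermions at two field strengths.
weak, strong = landau_levels(0.0, 1.0), landau_levels(0.0, 4.0)
print(weak[0], strong[0])    # n = 0: both 0.0, independent of B
print(strong[1] / weak[1])   # n >= 1: scales as sqrt(4/1) = 2.0
```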
Applications
Chiral symmetry breaking in quantum chromodynamics
In the theory of quantum chromodynamics, magnetic catalysis can be applied when quark matter is subject to extremely strong magnetic fields. Such strong magnetic fields can lead to more pronounced effects of chiral symmetry breaking, e.g., lead to (i) a larger value of the chiral condensate, (ii) a larger dynamical (constituent) mass of quarks, (iii) larger baryon masses, (iv) modified pion decay constant, etc. Recently, there was an increased activity to cross-check the effects of magnetic catalysis in the limit of a large number of colors, using the technique of AdS/CFT correspondence.
Quantum Hall effect in graphene
The idea of magnetic catalysis can be used to explain the observation of new quantum Hall plateaus in graphene in strong magnetic fields beyond the standard anomalous sequence at filling factors ν=4(n+½) where n is an integer. The additional quantum Hall plateaus develop at ν=0, ν=±1, ν=±3 and ν=±4.
The mechanism of magnetic catalysis in a relativistic-like planar systems such as graphene is very natural. In fact, it was originally proposed for a 2+1 dimensional model, which is almost the same as the low-energy effective theory of graphene written in terms of massless Dirac fermions. In application to a single layer of graphite (i.e., graphene), magnetic catalysis triggers the breakdown of an approximate internal symmetry and, thus, lifts the 4-fold degeneracy of Landau levels. It can be shown to occur for relativistic massless fermions with weak repulsive interactions.
References
Quantum field theory | Magnetic catalysis | [
"Physics"
] | 863 | [
"Quantum field theory",
"Quantum mechanics"
] |
24,303,557 | https://en.wikipedia.org/wiki/C40H56O2 | {{DISPLAYTITLE:C40H56O2}}
The molecular formula C40H56O2 (molar mass: 568.87 g/mol) may refer to:
Lutein
Zeaxanthin
Meso-zeaxanthin (3R,3´S-zeaxanthin)
Molecular formulas | C40H56O2 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,303,708 | https://en.wikipedia.org/wiki/C24H28O2 | {{DISPLAYTITLE:C24H28O2}}
The molecular formula C24H28O2 (molar mass: 348.48 g/mol, exact mass: 348.2089 u) may refer to:
Bexarotene (Targretin)
Machaeriol A
Perrottetinene
SC-4289
Molecular formulas | C24H28O2 | [
"Physics",
"Chemistry"
] | 78 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,304,016 | https://en.wikipedia.org/wiki/G.James%20Glass%20%26%20Aluminium | G.James Glass & Aluminium is an Australian industrial company, producing glass and aluminium products. G.James is a major Australian glass processor, aluminium window and door fabricator and contractor, and producer of extruded aluminium profiles.
Company history
The origin of the G.James Group of Companies began in 1913 when an enterprising George James arrived in Australia after migrating from England. After working for various building related companies in Brisbane, George decided to use his skills as a glazier, along with his knowledge of sales and purchasing, to establish G.James Glass Merchants at West End (Brisbane) in 1917.
Initially the business was based on buying cases of glass and selling it cut-to-size to timber joiners in Queensland and New South Wales. Upon George's death in 1958 his son-in-law Joseph (Joe) Saragossi, together with his wife Pearle and sister-in-law Gertie Baratin, founded a private company in 1959.
In 2006, the company won an Australian Window Association (AWA) Design Award for the Best Use of Windows and Doors.
References
External links
G.James Glass & Aluminium website
Aluminium companies of Australia
Glassmaking companies
Manufacturing companies based in Brisbane
Australian brands | G.James Glass & Aluminium | [
"Materials_science",
"Engineering"
] | 248 | [
"Glass engineering and science",
"Glassmaking companies",
"Engineering companies"
] |
31,437,791 | https://en.wikipedia.org/wiki/Plasma%20deep%20drilling%20technology | Plasma deep drilling technology is one of several drilling technologies that may be able to replace conventional, contact-based rotary systems. These new technologies include plasma deep drilling, water jet, hydrothermal spallation and laser. Companies that embrace the plasma-drilling method include GA Drilling, headquartered in Bratislava, Slovakia.
High-energy plasma
High-energy plasma is a technology that targets deep drilling applications. It addresses issues related to drilling in water environments or boreholes with varying diameters.
Physical principle of electrical plasma
An electric arc is a breakdown of a gas that produces a plasma discharge, resulting from a current flowing through normally nonconductive media such as air or another gas. An arc discharge is characterized by a lower voltage than a glow discharge, and relies on thermionic emission of electrons from the electrodes supporting the arc. The electric arc is influenced by factors such as: the gas flow, inner and outer magnetic fields, and construction elements of the chamber that confines the arc. The development of plasma torches to be used as a source of the thermal plasma demands a deep understanding of the discharge chamber processes.
Advantages
Higher drilling energy efficiency
Continuous drilling without replacement of mechanical parts
Constant casing diameter
Effective transport of disintegrated rock
See also
GA Drilling
New drilling technologies
Drilling rig
Quaise
Oil well
Research Centre for Deep Drilling
References
Massachusetts Institute of Technology (2006) "The Future of Geothermal Energy"
Celim Slovakia (2011) "Arc Discharge, Plasma Torch (different approaches)"
Pierce, K.G., Livesay, B.J., Finger J.T. (1996) "Advanced Drilling System Study"
Drilling technology
Geothermal drilling
Geothermal energy
Mining engineering
Plasma technology and applications | Plasma deep drilling technology | [
"Physics",
"Engineering"
] | 340 | [
"Plasma technology and applications",
"Mining engineering",
"Plasma physics"
] |
31,440,639 | https://en.wikipedia.org/wiki/Braunstein%E2%80%93Ghosh%E2%80%93Severini%20entropy | In network theory, the Braunstein–Ghosh–Severini entropy (BGS entropy) of a network is the von Neumann entropy of a density matrix given by a normalized Laplacian matrix of the network. This definition of entropy does not have a clear thermodynamical interpretation. The BGS entropy has been used in the context of quantum gravity.
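For a concrete instance, the sketch below computes the BGS entropy of a small graph: the combinatorial Laplacian L = D − A is scaled by its trace to give a unit-trace density matrix ρ, and the von Neumann entropy S = −Tr ρ ln ρ follows from its eigenvalues. The example graph is chosen arbitrarily:

```python
import numpy as np

def bgs_entropy(adjacency):
    """Von Neumann entropy of rho = L / tr(L), with L the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A
    rho = L / np.trace(L)               # unit-trace "density matrix"
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]              # convention: 0 log 0 = 0
    return float(-np.sum(lam * np.log(lam)))

# Path graph on three nodes: 0 - 1 - 2.
path3 = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
print(bgs_entropy(path3))  # ≈ 0.5623 nats (rho eigenvalues 0, 1/4, 3/4)
```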
Notes and references
Quantum mechanical entropy | Braunstein–Ghosh–Severini entropy | [
"Physics"
] | 84 | [
"Quantum mechanical entropy",
"Entropy",
"Physical quantities"
] |
545,288 | https://en.wikipedia.org/wiki/Spectral%20leakage | The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as a frequency spectrum. Any linear time-invariant operation on s(t) produces a new spectrum of the form H(f)•S(f), which changes the relative magnitudes and/or angles (phase) of the non-zero values of S(f). Any other type of operation creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling, for instance, produces leakage, which we call aliases of the original spectral component. For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and another function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of windowing, which is the product of s(t) with a different kind of function, the window function. Window functions happen to have finite duration, but that is not necessary to create leakage. Multiplication by a time-variant function is sufficient.
Spectral analysis
The Fourier transform of the function cos(ωt) is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform (or a similar transform) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method.
The effects are most easily characterized by their effect on a sinusoidal s(t) function, whose unwindowed Fourier transform is zero for all but one frequency. The customary frequency of choice is 0 Hz, because the windowed Fourier transform is simply the Fourier transform of the window function itself.
When both sampling and windowing are applied to s(t), in either order, the leakage caused by windowing is a relatively localized spreading of frequency components, with often a blurring effect, whereas the aliasing caused by sampling is a periodic repetition of the entire blurred spectrum.
Choice of window function
Windowing of a simple waveform like cos(ωt) causes its Fourier transform to develop non-zero values (commonly called spectral leakage) at frequencies other than ω. The leakage tends to be worst (highest) near ω and least at frequencies farthest from ω.
If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with our ability to distinguish them spectrally. Possible types of interference are often broken down into two opposing classes as follows: If the component frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are too similar, leakage can render them unresolvable even when the sinusoids are of equal strength. Windows that are effective against the first type of interference, namely where components have dissimilar frequencies and amplitudes, are called high dynamic range. Conversely, windows that can distinguish components with similar frequencies and amplitudes are called high resolution.
The rectangular window is an example of a window that is high resolution but low dynamic range, meaning it is good for distinguishing components of similar amplitude even when the frequencies are also close, but poor at distinguishing components of different amplitude even when the frequencies are far away. High-resolution, low-dynamic-range windows such as the rectangular window also have the property of high sensitivity, which is the ability to reveal relatively weak sinusoids in the presence of additive random noise. That is because the noise produces a stronger response with high-dynamic-range windows than with high-resolution windows.
At the other extreme of the range of window types are windows with high dynamic range but low resolution and sensitivity. High-dynamic-range windows are most often justified in wideband applications, where the spectrum being analyzed is expected to contain many different components of various amplitudes.
In between the extremes are moderate windows, such as Hann and Hamming. They are commonly used in narrowband applications, such as the spectrum of a telephone channel.
In summary, spectral analysis involves a trade-off between resolving comparable strength components with similar frequencies (high resolution / sensitivity) and resolving disparate strength components with dissimilar frequencies (high dynamic range). That trade-off occurs when the window function is chosen.
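The trade-off can be quantified by the highest sidelobe of each window's transform: roughly −13 dB below the main lobe for the rectangular window versus roughly −31.5 dB for Hann. A sketch of the measurement (window length and FFT zero-padding are arbitrary choices):

```python
import numpy as np

def peak_sidelobe_db(w, nfft=8192):
    """Highest sidelobe of a window's DTFT magnitude, in dB below the peak."""
    W = np.abs(np.fft.fft(w, nfft))
    W /= W[0]                      # normalize the main-lobe peak to 0 dB
    i = 1
    while W[i + 1] < W[i]:         # walk down the main lobe to the first null
        i += 1
    return 20 * np.log10(W[i + 1: nfft // 2].max())

N = 64
print(peak_sidelobe_db(np.ones(N)))     # rectangular: about -13 dB
print(peak_sidelobe_db(np.hanning(N)))  # Hann: about -31.5 dB
```

The low sidelobes of the Hann window come at the cost of a main lobe twice as wide, which is exactly the resolution-versus-dynamic-range trade-off described above.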
Discrete-time signals
When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 2, row 3 shows a DTFT for a rectangularly-windowed sinusoid. The actual frequency of the sinusoid is indicated as "13" on the horizontal axis. Everything else is leakage, exaggerated by the use of a logarithmic presentation. The unit of frequency is "DFT bins"; that is, the integer values on the frequency axis correspond to the frequencies sampled by the DFT. So the figure depicts a case where the actual frequency of the sinusoid coincides with a DFT sample, and the maximum value of the spectrum is accurately measured by that sample. In row 4, it misses the maximum value by half a bin, and the resultant measurement error is referred to as scalloping loss (inspired by the shape of the peak). For a known frequency, such as a musical note or a sinusoidal test signal, matching the frequency to a DFT bin can be prearranged by choices of a sampling rate and a window length that results in an integer number of cycles within the window.
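The worst-case scalloping loss of the rectangular window is easy to reproduce: a complex exponential centered exactly on a bin yields a peak DFT magnitude of N, while one halfway between bins peaks at about (2/π)N, a loss of roughly 3.92 dB. A sketch (the length N and bin numbers are arbitrary choices):

```python
import numpy as np

N = 64
n = np.arange(N)

on_bin = np.exp(2j * np.pi * 8.0 * n / N)    # frequency exactly on bin 8
half_bin = np.exp(2j * np.pi * 8.5 * n / N)  # worst case: halfway between bins

peak_on = np.abs(np.fft.fft(on_bin)).max()   # = N
peak_half = np.abs(np.fft.fft(half_bin)).max()

ratio = peak_half / peak_on
print(ratio, 20 * np.log10(ratio))  # ≈ 0.6366 (i.e. 2/pi), about -3.92 dB
```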
Noise bandwidth
The concepts of resolution and dynamic range tend to be somewhat subjective, depending on what the user is actually trying to do. But they also tend to be highly correlated with the total leakage, which is quantifiable. It is usually expressed as an equivalent bandwidth, B. It can be thought of as redistributing the DTFT into a rectangular shape with height equal to the spectral maximum and width B. The more the leakage, the greater the bandwidth. It is sometimes called noise equivalent bandwidth or equivalent noise bandwidth, because it is proportional to the average power that will be registered by each DFT bin when the input signal contains a random noise component (or is just random noise). A graph of the power spectrum, averaged over time, typically reveals a flat noise floor, caused by this effect. The height of the noise floor is proportional to B. So two different window functions can produce different noise floors, as seen in figures 1 and 3.
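The noise equivalent bandwidth has a simple closed form in terms of the window samples: B = N·Σw²/(Σw)², expressed in DFT bins. A sketch using a "periodic" Hann window, for which the result is exactly 1.5 bins (the length N is an arbitrary choice):

```python
import numpy as np

def nenbw(w):
    """Noise equivalent bandwidth in DFT bins: N * sum(w^2) / sum(w)^2."""
    w = np.asarray(w, dtype=float)
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

N = 1024
n = np.arange(N)
hann_periodic = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

print(nenbw(np.ones(N)))      # rectangular: 1.0 bin (the minimum possible)
print(nenbw(hann_periodic))   # Hann: 1.5 bins
```

The rectangular window attains the minimum value of 1.0 bin, consistent with it having the least total leakage; every tapered window trades a larger noise bandwidth for lower sidelobes.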
Processing gain and losses
In signal processing, operations are chosen to improve some aspect of quality of a signal by exploiting the differences between the signal and the corrupting influences. When the signal is a sinusoid corrupted by additive random noise, spectral analysis distributes the signal and noise components differently, often making it easier to detect the signal's presence or measure certain characteristics, such as amplitude and frequency. Effectively, the signal-to-noise ratio (SNR) is improved by distributing the noise uniformly, while concentrating most of the sinusoid's energy around one frequency. Processing gain is a term often used to describe an SNR improvement. The processing gain of spectral analysis depends on the window function, both its noise bandwidth (B) and its potential scalloping loss. These effects partially offset, because windows with the least scalloping naturally have the most leakage.
Figure 3 depicts the effects of three different window functions on the same data set, comprising two equal strength sinusoids in additive noise. The frequencies of the sinusoids are chosen such that one encounters no scalloping and the other encounters maximum scalloping. Both sinusoids suffer less SNR loss under the Hann window than under the Blackman-Harris window. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications.
Symmetry
The window-function formulas produce discrete sequences, as if a continuous window function has been "sampled". (See an example at Kaiser window.) Window sequences for spectral analysis are either symmetric or 1-sample short of symmetric (called periodic, DFT-even, or DFT-symmetric). For instance, a true symmetric sequence, with its maximum at a single center-point, is generated by the MATLAB function hann(9,'symmetric'). Deleting the last sample produces a sequence identical to hann(8,'periodic'). Similarly, the sequence hann(8,'symmetric') has two equal center-points.
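This relationship is easy to check numerically; a sketch using NumPy, whose np.hanning returns the true symmetric window (like MATLAB's hann(M,'symmetric')):

```python
import numpy as np

# True symmetric 9-point Hann window (single maximum at the center),
# analogous to hann(9,'symmetric').
sym9 = np.hanning(9)

# 8-point "periodic" (DFT-even) Hann window, analogous to hann(8,'periodic'):
# one full cosine period spread over N samples.
per8 = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(8) / 8)

# Deleting the last sample of the symmetric window yields the periodic one.
assert np.allclose(sym9[:-1], per8)

# hann(8,'symmetric') instead has two equal center-points.
sym8 = np.hanning(8)
assert np.isclose(sym8[3], sym8[4])
```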
Some functions have one or two zero-valued end-points, which are unnecessary in most applications. Deleting a zero-valued end-point has no effect on its DTFT (spectral leakage). But the function designed for N + 1 or N + 2 samples, in anticipation of deleting one or both end points, typically has a slightly narrower main lobe, slightly higher sidelobes, and a slightly smaller noise-bandwidth.
DFT-symmetry
The predecessor of the DFT is the finite Fourier transform, and window functions were "always an odd number of points and exhibit even symmetry about the origin". In that case, the DTFT is entirely real-valued. When the same sequence is shifted into a DFT data window (0 ≤ n ≤ N), the DTFT becomes complex-valued except at frequencies spaced at regular intervals of 1/N. Thus, when sampled by an N-length DFT, the samples (called DFT coefficients) are still real-valued. An approximation is to truncate the N + 1-length sequence (effectively deleting the sample w[N]), and compute an N-length DFT. The DTFT (spectral leakage) is slightly affected, but the samples remain real-valued.
The terms DFT-even and periodic refer to the idea that if the truncated sequence were repeated periodically, it would be even-symmetric about n = 0, and its DTFT would be entirely real-valued. But the actual DTFT is generally complex-valued, except for the DFT coefficients. Spectral plots like those in the figures are produced by sampling the DTFT at much smaller intervals than 1/N and displaying only the magnitude component of the complex numbers.
Periodic summation
An exact method to sample the DTFT of an N + 1-length sequence at intervals of 1/N is described in the discrete-time Fourier transform article. Essentially, the last sample, w[N], is combined with the first, w[0] (by addition), and an N-point DFT is done on the truncated sequence. Similarly, spectral analysis would be done by combining the x[0] and x[N] data samples before applying the truncated symmetric window. That is not a common practice, even though truncated windows are very popular.
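A sketch of that folding procedure for a window with non-zero end points (the 9-point Hamming window is an illustrative choice); the resulting N-point DFT coefficients are exact, real-valued samples of the symmetric window's DTFT:

```python
import numpy as np

N = 8
w = np.hamming(N + 1)      # symmetric (N+1)-point window with non-zero end points

# Periodic summation: add the last sample onto the first, keep N samples.
v = w[:N].copy()
v[0] += w[N]
V = np.fft.fft(v)          # N-point DFT of the folded sequence

# Direct DTFT of the full symmetric sequence, sampled at intervals of 1/N.
n = np.arange(N + 1)
k = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, n) / N) @ w

assert np.allclose(V, W)               # exact samples of the DTFT
assert np.max(np.abs(V.imag)) < 1e-12  # and they are real-valued
```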
Convolution
The appeal of DFT-symmetric windows is explained by the popularity of the fast Fourier transform (FFT) algorithm for implementation of the DFT, because truncation of an odd-length sequence results in an even-length sequence. Their real-valued DFT coefficients are also an advantage in certain esoteric applications where windowing is achieved by means of convolution between the DFT coefficients and an unwindowed DFT of the data. In those applications, DFT-symmetric windows (even or odd length) from the Cosine-sum family are preferred, because most of their DFT coefficients are zero-valued, making the convolution very efficient.
Some window metrics
When selecting an appropriate window function for an application, this comparison graph may be useful. The frequency axis has units of FFT "bins" when the window of length N is applied to data and a transform of length N is computed. For instance, the value at frequency ½ "bin" is the response that would be measured in bins k and k + 1 to a sinusoidal signal at frequency k + ½. It is relative to the maximum possible response, which occurs when the signal frequency is an integer number of bins. The value at frequency ½ is referred to as the maximum scalloping loss of the window, which is one metric used to compare windows. The rectangular window is noticeably worse than the others in terms of that metric.
Other metrics that can be seen are the width of the main lobe and the peak level of the sidelobes, which respectively determine the ability to resolve comparable strength signals and disparate strength signals. The rectangular window (for instance) is the best choice for the former and the worst choice for the latter. What cannot be seen from the graphs is that the rectangular window has the best noise bandwidth, which makes it a good candidate for detecting low-level sinusoids in an otherwise white noise environment. Interpolation techniques, such as zero-padding and frequency-shifting, are available to mitigate its potential scalloping loss.
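The maximum scalloping loss can be evaluated by probing the window's DTFT half a bin off center; a minimal sketch (for the rectangular window the result is the familiar 2/π, about −3.92 dB):

```python
import numpy as np

def scalloping_loss_db(w):
    """Response at a half-bin frequency offset, relative to an on-bin signal (dB)."""
    w = np.asarray(w, dtype=float)
    n = np.arange(len(w))
    # DTFT of the window evaluated half a bin (1/(2N) cycles/sample) off center.
    half_bin = np.abs(np.sum(w * np.exp(-1j * np.pi * n / len(w))))
    return 20 * np.log10(half_bin / np.sum(w))

N = 1024
rect = np.ones(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

print(round(scalloping_loss_db(rect), 2))  # about -3.92 dB (worst of the common windows)
print(round(scalloping_loss_db(hann), 2))  # about -1.42 dB
```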
See also
Knife-edge effect, spatial analog of truncation
Gibbs phenomenon
Notes
Page citations
References
Fourier analysis
Digital signal processing
Spectrum (physical sciences) | Spectral leakage | [
"Physics"
] | 2,703 | [
"Waves",
"Physical phenomena",
"Spectrum (physical sciences)"
] |
545,525 | https://en.wikipedia.org/wiki/Health%20physics | Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged.
Sub-specialties
There are many sub-specialties in the field of health physics, including
Ionising radiation instrumentation and measurement
Internal dosimetry and external dosimetry
Radioactive waste management
Radioactive contamination, decontamination and decommissioning
Radiological engineering (shielding, holdup, etc.)
Environmental assessment, radiation monitoring and radon evaluation
Operational radiation protection/health physics
Particle accelerator physics
Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team)
Industrial uses of radioactive material
Medical health physics
Public information and communication involving radioactive materials
Biological effects/radiation biology
Radiation standards
Radiation risk analysis
Nuclear power
Radioactive materials and homeland security
Radiation protection
Nanotechnology
Operational health physics
The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the practical application of health physics knowledge to real-world situations, rather than basic research.
Medical physics
The field of Health Physics is related to the field of medical physics and they are similar to each other in that practitioners rely on much of the same fundamental science (i.e., radiation physics, biology, etc.) in both fields. Health physicists, however, focus on the evaluation and protection of human health from radiation, whereas medical health physicists and medical physicists support the use of radiation and other physics-based technologies by medical practitioners for the diagnosis and treatment of disease.
Radiation protection instruments
Practical ionising radiation measurement is essential for health physics. It enables the evaluation of protection measures, and the assessment of the radiation dose likely, or actually, received by individuals. The provision of such instruments is normally controlled by law; in the UK, the relevant legislation is the Ionising Radiation Regulations 1999.
The measuring instruments for radiation protection are both "installed" (in a fixed position) and portable (hand-held or transportable).
Installed instruments
Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors, Gamma interlock monitors, personnel exit monitors, and airborne contamination monitors.
The area monitor will measure the ambient radiation, usually X-Ray, Gamma or neutrons; these are radiations which can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area.
Interlock monitors are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present.
Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel.
Personnel exit monitors are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha, beta, or gamma radiation, or combinations of these.
The UK National Physical Laboratory has published a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.
Portable instruments
Portable instruments are hand-held or transportable.
The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these.
Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations.
Instrument types
A number of commonly used detection instruments are listed below.
ionization chambers
proportional counters
Geiger counters
Semiconductor detectors
Scintillation detectors
The links should be followed for a fuller description of each.
Guidance on use
In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all ionising radiation instrument technologies, and is a useful comparative guide.
Radiation dosimeters
Dosimeters are devices worn by the user which measure the radiation dose that the user is receiving.
Common types of wearable dosimeters for ionizing radiation include:
Quartz fiber dosimeter
Film badge dosimeter
Thermoluminescent dosimeter
Solid state (MOSFET or silicon diode) dosimeter
Units of measure
Absorbed dose
The fundamental units do not take into account the amount of damage done to matter (especially living tissue) by ionizing radiation. This is more closely related to the amount of energy deposited rather than the charge. This is called the absorbed dose.
The gray (Gy), with units J/kg, is the SI unit of absorbed dose, which represents the amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter.
The rad (radiation absorbed dose), is the corresponding traditional unit, which is 0.01 J deposited per kg. 100 rad = 1 Gy.
Equivalent dose
Equal doses of different types or energies of radiation cause different amounts of damage to living tissue. For example, 1 Gy of alpha radiation causes about 20 times as much damage as 1 Gy of X-rays. Therefore, the equivalent dose was defined to give an approximate measure of the biological effect of radiation. It is calculated by multiplying the absorbed dose by a weighting factor WR, which is different for each type of radiation (see table at Relative biological effectiveness#Standardization). This weighting factor is also called the Q (quality factor), or RBE (relative biological effectiveness of the radiation).
The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the gray, J/kg, it measures something different. For a given type and dose of radiation applied to a certain body part of an organism, the equivalent dose is the magnitude of an X-ray or gamma-ray dose which, applied to the whole body of the organism, would have the same probability of inducing cancer, according to current statistics.
The rem (Roentgen equivalent man) is the traditional unit of equivalent dose. 1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent dose is measured in millirem (mrem), 10−3 rem, or in microsievert (μSv), 10−6 Sv. 1 mrem = 10 μSv.
A unit sometimes used for low-level doses of radiation is the BRET (Background Radiation Equivalent Time). This is the number of days of an average person's background radiation exposure the dose is equivalent to. This unit is not standardized, and depends on the value used for the average background radiation dose. Using the 2000 UNSCEAR value (below), one BRET unit is equal to about 6.6 μSv.
For comparison, the average 'background' dose of natural radiation received by a person per day, based on the 2000 UNSCEAR estimate, is about 6.6 μSv (660 μrem). However local exposures vary, with the yearly average in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3 rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem).
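These conversions are simple enough to express in code; a sketch (the weighting factors shown are illustrative ICRP-style values, and the 6.6 μSv/day figure is the 2000 UNSCEAR estimate quoted above):

```python
# Illustrative radiation weighting factors W_R (ICRP-style values).
W_R = {"xray": 1, "gamma": 1, "beta": 1, "alpha": 20}

def equivalent_dose_sv(absorbed_gy, radiation):
    """Equivalent dose (Sv) = absorbed dose (Gy) x weighting factor W_R."""
    return absorbed_gy * W_R[radiation]

def gy_to_rad(gy):      return gy * 100.0        # 1 Gy = 100 rad
def sv_to_rem(sv):      return sv * 100.0        # 1 Sv = 100 rem
def bret_days(dose_sv): return dose_sv / 6.6e-6  # days of average background dose

print(equivalent_dose_sv(1.0, "alpha"))  # 20.0 Sv: ~20x the damage of 1 Gy of X-rays
print(sv_to_rem(0.001))                  # 0.1 rem = 100 mrem
print(round(bret_days(3.6e-3)))          # US yearly average of ~3.6 mSv ~ 545 BRET days
```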
History
In 1898, the Röntgen Society (now the British Institute of Radiology) established a committee on X-ray injuries, thus initiating the discipline of radiation protection.
The term "health physics"
According to Paul Frame:
"The term Health Physics is believed to have originated in the Metallurgical Laboratory at the University of Chicago in 1942, but the exact origin is unknown. The term was possibly coined by Robert Stone or Arthur Compton, since Stone was the head of the Health Division and Arthur Compton was the head of the Metallurgical Laboratory. The first task of the Health Physics Section was to design shielding for reactor CP-1 that Enrico Fermi was constructing, so the original HPs were mostly physicists trying to solve health-related problems. The explanation given by Robert Stone was that '...the term Health Physics has been used on the Plutonium Project to define that field in which physical methods are used to determine the existence of hazards to the health of personnel.'
A variation was given by Raymond Finkle, a Health Division employee during this time frame. 'The coinage at first merely denoted the physics section of the Health Division... the name also served security: 'radiation protection' might arouse unwelcome interest; 'health physics' conveyed nothing.'"
Radiation-related quantities
The following table shows radiation quantities in SI and non-SI units.
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units-of-measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
See also
Health Physics Society
Certified Health Physicist
Radiological Protection of Patients
Radiation protection
Society for Radiological Protection – the principal UK body concerned with promoting the science and practice of radiation protection. It is the UK national affiliated body to IRPA.
IRPA – the International Radiation Protection Association, the international body concerned with promoting the science and practice of radiation protection.
References
External links
The Health Physics Society, a scientific and professional organization whose members specialize in occupational and environmental radiation safety.
"The confusing world of radiation dosimetry" – M.A. Boyd, 2009, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems.
Q&A: Health effects of radiation exposure, BBC News, 21 July 2011.
Nuclear safety and security
Medical physics
Radiation health effects
Health physicists | Health physics | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,218 | [
"Radiation health effects",
"Applied and interdisciplinary physics",
"Medical physics",
"Radiation effects",
"Radioactivity"
] |
545,825 | https://en.wikipedia.org/wiki/Impulse%20response | In signal processing and control theory, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse (). More generally, an impulse response is the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system).
In all these cases, the dynamic system and its impulse response may be actual physical objects, or may be mathematical systems of equations describing such objects.
Since the impulse function contains all frequencies (see the Fourier transform of the Dirac delta function, which shows the infinite frequency bandwidth of the Dirac delta function), the impulse response defines the response of a linear time-invariant system for all frequencies.
Mathematical considerations
Mathematically, how the impulse is described depends on whether the system is modeled in discrete or continuous time. The impulse can be modeled as a Dirac delta function for continuous-time systems, or as the discrete unit sample function for discrete-time systems. The Dirac delta represents the limiting case of a pulse made very short in time while maintaining its area or integral (thus giving an infinitely high peak). While this is impossible in any real system, it is a useful idealization. In Fourier analysis theory, such an impulse comprises equal portions of all possible excitation frequencies, which makes it a convenient test probe.
Any system in a large class known as linear, time-invariant (LTI) is completely characterized by its impulse response. That is, for any input, the output can be calculated in terms of the input and the impulse response. (See LTI system theory.) The impulse response of a linear transformation is the image of Dirac's delta function under the transformation, analogous to the fundamental solution of a partial differential operator.
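As a discrete-time sketch of that idea (the first-order recursion is an arbitrary illustrative system): once the impulse response of y[n] = 0.5·y[n−1] + x[n] has been measured, the response to any input follows by convolution alone.

```python
import numpy as np

def run_system(x, a=0.5):
    """Direct simulation of the LTI recursion y[n] = a*y[n-1] + x[n]."""
    y = np.zeros(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        prev = a * prev + xn
        y[n] = prev
    return y

N = 64
impulse = np.zeros(N)
impulse[0] = 1.0
h = run_system(impulse)                # measured impulse response: h[n] = 0.5**n

rng = np.random.default_rng(0)
x = rng.standard_normal(N)             # an arbitrary input

y_direct = run_system(x)
y_conv = np.convolve(x, h)[:N]         # output predicted from h alone

assert np.allclose(y_direct, y_conv)   # h completely characterizes the system
```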
It is usually easier to analyze systems using transfer functions as opposed to impulse responses. The transfer function is the Laplace transform of the impulse response. The Laplace transform of a system's output may be determined by the multiplication of the transfer function with the input's Laplace transform in the complex plane, also known as the frequency domain. An inverse Laplace transform of this result will yield the output in the time domain.
To determine an output directly in the time domain requires the convolution of the input with the impulse response. When the transfer function and the Laplace transform of the input are known, this convolution may be more complicated than the alternative of multiplying two functions in the frequency domain.
The impulse response, considered as a Green's function, can be thought of as an "influence function": how a point of input influences output.
Practical applications
In practical systems, it is not possible to produce a perfect impulse to serve as input for testing; therefore, a brief pulse is sometimes used as an approximation of an impulse. Provided that the pulse is short enough compared to the impulse response, the result will be close to the true, theoretical, impulse response. In many systems, however, driving with a very short strong pulse may drive the system into a nonlinear regime, so instead the system is driven with a pseudo-random sequence, and the impulse response is computed from the input and output signals.
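A sketch of that idea in discrete time (illustrative, using exact FFT deconvolution with a circular-convolution model rather than a true maximum-length-sequence measurement): drive an "unknown" system with a pseudo-random sequence, then recover its impulse response from the input and output signals alone.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
h_true = 0.8 ** np.arange(N)           # the "unknown" impulse response
x = rng.standard_normal(N)             # pseudo-random excitation

# Output by circular convolution (as if x were played back periodically).
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true)).real

# Deconvolution: H(f) = Y(f) / X(f), then back to the time domain.
h_est = np.fft.ifft(np.fft.fft(y) / np.fft.fft(x)).real

assert np.allclose(h_est, h_true, atol=1e-8)
```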
Loudspeakers
An application that demonstrates this idea was the development of impulse response loudspeaker testing in the 1970s. Loudspeakers suffer from phase inaccuracy, a defect unlike other measured properties such as frequency response. Phase inaccuracy is caused by (slightly) delayed frequencies/octaves that are mainly the result of passive crossovers (especially higher-order filters) but are also caused by resonance, energy storage in the cone, the internal volume, or the enclosure panels vibrating. Measuring the impulse response, which is a direct plot of this "time-smearing," provided a tool for use in reducing resonances by the use of improved materials for cones and enclosures, as well as changes to the speaker crossover. The need to limit input amplitude to maintain the linearity of the system led to the use of inputs such as pseudo-random maximum length sequences, and to the use of computer processing to derive the impulse response.
Electronic processing
Impulse response analysis is a major facet of radar, ultrasound imaging, and many areas of digital signal processing. An interesting example would be broadband internet connections. DSL/Broadband services use adaptive equalisation techniques to help compensate for signal distortion and interference introduced by the copper phone lines used to deliver the service.
Control systems
In control theory the impulse response is the response of a system to a Dirac delta input. This proves useful in the analysis of dynamic systems; the Laplace transform of the delta function is 1, so the impulse response is equivalent to the inverse Laplace transform of the system's transfer function.
Acoustic and audio applications
In acoustic and audio applications, impulse responses enable the acoustic characteristics of a location, such as a concert hall, to be captured. Various packages are available containing impulse responses from specific locations, ranging from small rooms to large concert halls. These impulse responses can then be utilized in convolution reverb applications to enable the acoustic characteristics of a particular location to be applied to target audio.
In electric guitar signal processing and amplifier modeling, impulse response recordings are often used by modeling software to recreate the recorded tone of guitar speakers.
Economics
In economics, and especially in contemporary macroeconomic modeling, impulse response functions are used to describe how the economy reacts over time to exogenous impulses, which economists usually call shocks, and are often modeled in the context of a vector autoregression. Impulses that are often treated as exogenous from a macroeconomic point of view include changes in government spending, tax rates, and other fiscal policy parameters; changes in the monetary base or other monetary policy parameters; changes in productivity or other technological parameters; and changes in preferences, such as the degree of impatience. Impulse response functions describe the reaction of endogenous macroeconomic variables such as output, consumption, investment, and employment at the time of the shock and over subsequent points in time. Recently, asymmetric impulse response functions have been suggested in the literature that separate the impact of a positive shock from a negative one.
See also
Convolution reverb
Dirac delta function, also called the unit impulse function
Duhamel's principle
Dynamic stochastic general equilibrium
Frequency response
Gibbs phenomenon
Küssner effect
Linear response function
LTI system theory
Point spread function
Pre-echo
Step response
System analysis
Time constant
Transient (oscillation)
Transient response
Variation of parameters
References
External links
Control theory
Time domain analysis
Analog circuits | Impulse response | [
"Mathematics",
"Engineering"
] | 1,372 | [
"Applied mathematics",
"Control theory",
"Analog circuits",
"Electronic engineering",
"Dynamical systems"
] |
545,863 | https://en.wikipedia.org/wiki/Step%20response | The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions. In electronic engineering and control theory, step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter.
From a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another.
Formal mathematical description
This section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system: all notations and assumptions required for the following description are listed here.
t is the evolution parameter of the system, called "time" for the sake of simplicity,
y(t) is the state of the system at time t, called "output" for the sake of simplicity,
Φ is the dynamical system evolution function,
x0 is the dynamical system initial state,
H(t) is the Heaviside step function
Nonlinear dynamical system
For a general dynamical system, the step response is defined as the evolution function when the control inputs (or source term, or forcing inputs) are Heaviside step functions; the notation for such a step response often emphasizes this concept by showing H(t) as a subscript.
Linear dynamical system
For a linear time-invariant (LTI) black box, let x0 = 0 for notational convenience: the step response can be obtained by convolution of the Heaviside step function control and the impulse response h(t) of the system itself:

a(t) = (h ∗ H)(t) = ∫ h(τ) H(t − τ) dτ = ∫ h(τ) dτ (integrated from −∞ to t),

which for an LTI system is equivalent to just integrating the latter. Conversely, for an LTI system, the derivative of the step response yields the impulse response:

h(t) = d a(t) / dt.

However, these simple relations are not true for a non-linear or time-variant system.
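In discrete time the analogous relations are exact: the step response is the running sum of the impulse response, and first-differencing recovers it. A quick numerical sketch (the geometric impulse response is an arbitrary illustrative choice):

```python
import numpy as np

h = 0.9 ** np.arange(200)        # impulse response of a discrete LTI system

# Step response: convolve h with a unit step == running sum of h.
step = np.convolve(h, np.ones(200))[:200]
assert np.allclose(step, np.cumsum(h))

# Conversely, first-differencing the step response recovers h.
h_back = np.diff(step, prepend=0.0)
assert np.allclose(h_back, h)
```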
Time domain versus frequency domain
Instead of frequency response, system performance may be specified in terms of parameters describing time-dependence of response. The step response can be described by the following quantities related to its time behavior,
overshoot
rise time
settling time
ringing
In the case of linear dynamic systems, much can be inferred about the system from these characteristics. Below the step response of a simple two-pole amplifier is presented, and some of these terms are illustrated.
In LTI systems, the function that has the steepest slew rate that doesn't create overshoot or ringing is the Gaussian function. This is because it is the only function whose Fourier transform has the same shape.
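The step-response quantities listed above are easy to extract numerically from a sampled response. A sketch for the standard second-order underdamped system with damping ratio ζ = 0.5 (an illustrative choice), whose theoretical overshoot is exp(−πζ/√(1 − ζ²)) ≈ 16.3%:

```python
import numpy as np

zeta, wn = 0.5, 1.0                     # damping ratio and natural frequency
t = np.linspace(0.0, 20.0, 20001)
wd = wn * np.sqrt(1 - zeta**2)          # damped natural frequency
phi = np.arccos(zeta)

# Unit step response of the standard second-order system.
s = 1 - np.exp(-zeta * wn * t) * np.sin(wd * t + phi) / np.sin(phi)

overshoot = s.max() - 1.0                                # peak above final value
rise = t[np.argmax(s >= 0.9)] - t[np.argmax(s >= 0.1)]   # 10%-90% rise time
settling = t[np.where(np.abs(s - 1.0) > 0.02)[0][-1]]    # last exit of a 2% band

print(round(overshoot, 3))  # 0.163, matching exp(-pi*zeta/sqrt(1 - zeta**2))
```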
Feedback amplifiers
This section describes the step response of a simple negative feedback amplifier shown in Figure 1. The feedback amplifier consists of a main open-loop amplifier of gain AOL and a feedback loop governed by a feedback factor β. This feedback amplifier is analyzed to determine how its step response depends upon the time constants governing the response of the main amplifier, and upon the amount of feedback used.
A negative-feedback amplifier has gain given by (see negative feedback amplifier):

AFB = AOL / (1 + β AOL),

where AOL = open-loop gain, AFB = closed-loop gain (the gain with negative feedback present) and β = feedback factor.
With one dominant pole
In many cases, the forward amplifier can be sufficiently well modeled in terms of a single dominant pole of time constant τ; that is, an open-loop gain given by:

AOL(ω) = A0 / (1 + jωτ),

with zero-frequency gain A0 and angular frequency ω = 2πf. This forward amplifier has unit step response

SOL(t) = A0 [ 1 − exp(−t/τ) ],

an exponential approach from 0 toward the new equilibrium value of A0.
The one-pole amplifier's transfer function leads to the closed-loop gain:

AFB(ω) = [ A0 / (1 + βA0) ] · 1 / [ 1 + jωτ / (1 + βA0) ].

This closed-loop gain is of the same form as the open-loop gain: a one-pole filter. Its step response is of the same form: an exponential decay toward the new equilibrium value. But the time constant of the closed-loop step function is τ / (1 + β A0), so it is faster than the forward amplifier's response by a factor of 1 + β A0:

SFB(t) = [ A0 / (1 + βA0) ] · [ 1 − exp( −t (1 + βA0) / τ ) ].

As the feedback factor β is increased, the step response will get faster, until the original assumption of one dominant pole is no longer accurate. If there is a second pole, then as the closed-loop time constant approaches the time constant of the second pole, a two-pole analysis is needed.
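The predicted speed-up is easy to check by simulating the loop directly. A sketch (illustrative parameter values), integrating the one-pole dynamics τ dv/dt = −v + A0(u − βv) for a unit step input u:

```python
import numpy as np

A0, beta, tau = 1000.0, 0.1, 1.0   # illustrative gain, feedback factor, time constant
dt = 1e-5
t = np.arange(0.0, 0.1, dt)

# Euler integration of tau*dv/dt = -v + A0*(u - beta*v), with u = 1 for t > 0.
v = np.zeros(len(t))
for n in range(1, len(t)):
    v[n] = v[n-1] + dt * (A0 * (1.0 - beta * v[n-1]) - v[n-1]) / tau

v_final = A0 / (1 + beta * A0)                       # closed-loop DC gain, ~9.90
tc = t[np.argmax(v >= (1 - np.exp(-1)) * v_final)]   # measured 63.2% time

# The measured time constant matches tau/(1 + beta*A0), i.e. 101x faster
# than the open-loop response (whose time constant is tau = 1 s).
print(round(tc / (tau / (1 + beta * A0)), 2))  # ~1.0
```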
Two-pole amplifiers
In the case that the open-loop gain has two poles (two time constants, τ1, τ2), the step response is a bit more complicated. The open-loop gain is given by:

AOL(ω) = A0 / [ (1 + jωτ1)(1 + jωτ2) ],

with zero-frequency gain A0 and angular frequency ω = 2πf.
Analysis
The two-pole amplifier's transfer function leads to the closed-loop gain:

AFB(ω) = A0 / [ (1 + jωτ1)(1 + jωτ2) + βA0 ].

The time dependence of the amplifier is easy to discover by switching variables to s = jω, whereupon the gain becomes:

AFB(s) = A0 / [ τ1τ2 s² + (τ1 + τ2) s + 1 + βA0 ].

The poles of this expression (that is, the zeros of the denominator) occur at:

2s = −(1/τ1 + 1/τ2) ± √[ (1/τ1 − 1/τ2)² − 4βA0/(τ1τ2) ],

which shows for large enough values of βA0 the square root becomes the square root of a negative number, that is the square root becomes imaginary, and the pole positions are complex conjugate numbers, either s+ or s−; see Figure 2:

s± = −ρ ± jμ,

with

ρ = (1/τ1 + 1/τ2) / 2

and

μ = (1/2) √[ 4βA0/(τ1τ2) − (1/τ1 − 1/τ2)² ].

Using polar coordinates with the magnitude of the radius to the roots given by |s| (Figure 2):

|s| = |s±| = √(ρ² + μ²),

and the angular coordinate φ is given by:

cos φ = ρ / |s|,  sin φ = μ / |s|.

Tables of Laplace transforms show that the time response of such a system is composed of combinations of the two functions:

exp(−ρt) sin(μt) and exp(−ρt) cos(μt),

which is to say, the solutions are damped oscillations in time. In particular, the unit step response of the system is:

S(t) = [ A0 / (1 + βA0) ] · [ 1 − exp(−ρt) · sin(μt + φ) / sin(φ) ],

which simplifies to

S(t) = 1 − exp(−ρt) · sin(μt + φ) / sin(φ)

when A0 tends to infinity and the feedback factor β is one.
Notice that the damping of the response is set by ρ, that is, by the time constants of the open-loop amplifier. In contrast, the frequency of oscillation is set by μ, that is, by the feedback parameter through βA0. Because ρ is a sum of reciprocals of time constants, it is interesting to notice that ρ is dominated by the shorter of the two.
Results
Figure 3 shows the time response to a unit step input for three values of the parameter μ. It can be seen that the frequency of oscillation increases with μ, but the oscillations are contained between the two asymptotes set by the exponentials [ 1 − exp(−ρt) ] and [ 1 + exp(−ρt) ]. These asymptotes are determined by ρ and therefore by the time constants of the open-loop amplifier, independent of feedback.
The phenomenon of oscillation about the final value is called ringing. The overshoot is the maximum swing above final value, and clearly increases with μ. Likewise, the undershoot is the minimum swing below final value, again increasing with μ. The settling time is the time for departures from final value to sink below some specified level, say 10% of final value.
The dependence of settling time upon μ is not obvious, and the approximation of a two-pole system probably is not accurate enough to make any real-world conclusions about feedback dependence of settling time. However, the asymptotes [ 1 − exp(−ρt) ] and [ 1 + exp (−ρt) ] clearly impact settling time, and they are controlled by the time constants of the open-loop amplifier, particularly the shorter of the two time constants. That suggests that a specification on settling time must be met by appropriate design of the open-loop amplifier.
The two major conclusions from this analysis are:
Feedback controls the amplitude of oscillation about final value for a given open-loop amplifier and given values of open-loop time constants, τ1 and τ2.
The open-loop amplifier decides settling time. It sets the time scale of Figure 3, and the faster the open-loop amplifier, the faster this time scale.
As an aside, it may be noted that real-world departures from this linear two-pole model occur due to two major complications: first, real amplifiers have more than two poles, as well as zeros; and second, real amplifiers are nonlinear, so their step response changes with signal amplitude.
Control of overshoot
How overshoot may be controlled by appropriate parameter choices is discussed next.
Using the equations above, the amount of overshoot can be found by differentiating the step response and finding its maximum value. The result for maximum step response Smax is:
The final value of the step response is 1, so the exponential is the actual overshoot itself. It is clear the overshoot is zero if μ = 0, which is the condition:
This quadratic is solved for the ratio of time constants by setting x = (τ1 / τ2)1/2 with the result
Because β A0 ≫ 1, the 1 in the square root can be dropped, and the result is
In words, the first time constant must be much larger than the second. To be more adventurous than a design allowing for no overshoot, we can introduce a factor α in the above relation:
and let α be set by the amount of overshoot that is acceptable.
Figure 4 illustrates the procedure. Comparing the top panel (α = 4) with the lower panel (α = 0.5) shows lower values for α increase the rate of response, but increase overshoot. The case α = 2 (center panel) is the maximally flat design that shows no peaking in the Bode gain vs. frequency plot. That design has the rule of thumb built-in safety margin to deal with non-ideal realities like multiple poles (or zeros), nonlinearity (signal amplitude dependence) and manufacturing variations, any of which can lead to too much overshoot. The adjustment of the pole separation (that is, setting α) is the subject of frequency compensation, and one such method is pole splitting.
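This trade-off can be checked numerically. The sketch below assumes the standard two-pole closed-loop pole placement, s² + s(1/τ1 + 1/τ2) + (1 + βA0)/(τ1τ2) = 0, i.e. s = −ρ ± jμ, and the peak overshoot exp(−πρ/μ); neither formula is spelled out in the text above, so treat them as assumptions consistent with it.

```python
import math

def overshoot(alpha, beta_A0=1000.0, tau2=1.0):
    # tau1 = alpha * beta_A0 * tau2, as in the text. The closed-loop poles
    # solve s^2 + s*(1/tau1 + 1/tau2) + (1 + beta_A0)/(tau1*tau2) = 0,
    # i.e. s = -rho +/- j*mu; the step-response peak is 1 + exp(-pi*rho/mu).
    tau1 = alpha * beta_A0 * tau2
    rho = 0.5 * (1.0 / tau1 + 1.0 / tau2)
    mu_sq = (1.0 + beta_A0) / (tau1 * tau2) - rho * rho
    if mu_sq <= 0.0:            # real poles: no ringing, no overshoot
        return 0.0
    return math.exp(-math.pi * rho / math.sqrt(mu_sq))

for a in (0.5, 2.0, 4.0):
    print(f"alpha = {a}: overshoot = {100.0 * overshoot(a):.1f}%")
```

With these assumptions, α = 0.5 rings at roughly 30% overshoot, the maximally flat case α = 2 gives the familiar ≈ 4.3% (e^−π), and α = 4 sits on the edge of ringing with essentially zero overshoot, matching the trend of Figure 4.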
Control of settling time
The amplitude of ringing in the step response in Figure 3 is governed by the damping factor exp(−ρt). That is, if we specify some acceptable step response deviation from final value, say Δ, that is:
this condition is satisfied regardless of the value of β AOL provided the time is longer than the settling time, say tS, given by:
where τ1 ≫ τ2 is applicable because of the overshoot control condition, which makes τ1 = αβAOL τ2. Often the settling time condition is referred to by saying the settling period is inversely proportional to the unity gain bandwidth, because 1/(2π τ2) is close to this bandwidth for an amplifier with typical dominant pole compensation. However, this result is more precise than the rule of thumb. As an example of this formula, if Δ = 1/e4 ≈ 1.8%, the settling time condition is tS = 8 τ2.
In general, control of overshoot sets the time constant ratio, and settling time tS sets τ2.
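The settling-time condition can be sketched in a few lines, assuming ρ = (1/τ1 + 1/τ2)/2 ≈ 1/(2τ2) for τ1 ≫ τ2 (this half-sum form is an assumption consistent with the tS = 8 τ2 example in the text):

```python
import math

def settling_time(delta, tau2):
    # Solve exp(-rho * tS) = delta for tS, with rho ≈ 1/(2*tau2) (tau1 >> tau2):
    #   tS = 2 * tau2 * ln(1/delta)
    return 2.0 * tau2 * math.log(1.0 / delta)

print(settling_time(math.exp(-4), 1.0))  # the text's example: tS = 8*tau2
print(settling_time(0.02, 1.0))          # a 2% criterion: tS ≈ 7.8*tau2
```

Because tS scales with τ2 alone, the settling time is set by the open-loop amplifier, while Δ enters only logarithmically.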
System identification using the step response: system with two real poles
This method uses significant points of the step response; there is no need to guess tangents to the measured signal. The equations were derived using numerical simulations, determining some significant ratios and fitting parameters of nonlinear equations.
The steps are:
Measure the step response of the system for an input step signal.
Determine the times t25 and t75 at which the step response reaches 25% and 75% of the steady-state output value.
Determine the system steady-state gain with
Calculate
Determine the two time constants
Calculate the transfer function of the identified system in the Laplace domain
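The fitted nonlinear equations from the cited source are not reproduced above. As an illustrative alternative, the same idea can be carried out by inverting the 25%/75% crossing times numerically: their ratio t25/t75 depends only on the ratio of the two time constants, so a one-dimensional search recovers that ratio and the absolute times then fix the scale. A sketch:

```python
import math

def step_2pole(t, ta, tb):
    # Unit step response of G(s) = 1 / ((1 + s*ta)(1 + s*tb)), ta != tb
    return 1.0 - (ta * math.exp(-t / ta) - tb * math.exp(-t / tb)) / (ta - tb)

def crossing(level, ta, tb, hi=1e3):
    # Time at which the (monotonic) two-real-pole step response reaches `level`
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if step_2pole(mid, ta, tb) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def identify(t25, t75):
    # Recover (ta, tb), ta >= tb, from the 25% / 75% crossing times.
    # The ratio t25/t75 depends only on k = tb/ta, so bisect on k, then rescale.
    r_target = t25 / t75
    lo, hi = 1e-6, 0.999999
    for _ in range(200):
        k = 0.5 * (lo + hi)
        r = crossing(0.25, 1.0, k) / crossing(0.75, 1.0, k)
        if r < r_target:
            lo = k
        else:
            hi = k
    k = 0.5 * (lo + hi)
    ta = t75 / crossing(0.75, 1.0, k)
    return ta, k * ta

# Self-test on a known system with ta = 5, tb = 1:
t25, t75 = crossing(0.25, 5.0, 1.0), crossing(0.75, 5.0, 1.0)
print(identify(t25, t75))   # recovers approximately (5.0, 1.0)
```

This numerical inversion stands in for the source's fitted formulas; both exploit the fact that t25/t75 is a monotonic function of the time-constant ratio.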
Phase margin
Next, the choice of pole ratio τ1/τ2 is related to the phase margin of the feedback amplifier. The procedure outlined in the Bode plot article is followed. Figure 5 is the Bode gain plot for the two-pole amplifier in the range of frequencies up to the second pole position. The assumption behind Figure 5 is that the frequency f0 dB lies between the lowest pole at f1 = 1/(2πτ1) and the second pole at f2 = 1/(2πτ2). As indicated in Figure 5, this condition is satisfied for values of α ≥ 1.
Using Figure 5, the frequency (denoted by f0 dB) is found where the loop gain βA0 satisfies the unity gain or 0 dB condition, as defined by:
The slope of the downward leg of the gain plot is −20 dB/decade; for every factor of ten increase in frequency, the gain drops by the same factor:
The phase margin is the departure of the phase at f0 dB from −180°. Thus, the margin is:
Because f0 dB / f1 = βA0 ≫ 1, the term in f1 is 90°. That makes the phase margin:
In particular, for case α = 1, φm = 45°, and for α = 2, φm = 63.4°. Sansen recommends α = 3, φm = 71.6° as a "good safety position to start with".
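The quoted values can be checked directly. From f0 dB / f1 = βA0 and τ1 = αβA0 τ2 it follows that f2 = α f0 dB, so the second-pole phase contribution at crossover is arctan(1/α):

```python
import math

def phase_margin_deg(alpha):
    # phi_m = 90 deg - arctan(f_0dB / f2); with tau1 = alpha*beta*A0*tau2 the
    # crossover sits a factor alpha below f2, so f_0dB / f2 = 1/alpha.
    return 90.0 - math.degrees(math.atan(1.0 / alpha))

for a in (1, 2, 3):
    print(f"alpha = {a}: phase margin = {phase_margin_deg(a):.1f} deg")
```

The printed values reproduce the article's numbers: 45° for α = 1, 63.4° for α = 2, and 71.6° for α = 3 (Sansen's recommendation).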
If α is increased by shortening τ2, the settling time tS also is shortened. If α is increased by lengthening τ1, the settling time tS is little altered. More commonly, both τ1 and τ2 change, for example if the technique of pole splitting is used.
As an aside, for an amplifier with more than two poles, the diagram of Figure 5 still may be made to fit the Bode plots by making f2 a fitting parameter, referred to as an "equivalent second pole" position.
See also
Impulse response
Overshoot (signal)
Pole splitting
Rise time
Settling time
Time constant
References and notes
Further reading
Robert I. Demrow Settling time of operational amplifiers
Cezmi Kayabasi Settling time measurement techniques achieving high precision at high speeds
Vladimir Igorevic Arnol'd "Ordinary differential equations", various editions from MIT Press and from Springer Verlag, chapter 1 "Fundamental concepts"
External links
Kuo power point slides; Chapter 7 especially
Analog circuits
Electronic design
Dynamical systems
Classical control theory
Signal processing
Electronic amplifiers
Transient response characteristics
| Step response | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 3,004 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Electronic design",
"Analog circuits",
"Electronic engineering",
"Mechanics",
"Electronic amplifiers",
"Amplifiers",
"Design",
"Dynamical systems"
] |
546,004 | https://en.wikipedia.org/wiki/Norcarane | Norcarane, or bicyclo[4.1.0]heptane, is a colorless liquid. It is an organic compound prepared using the Simmons–Smith reaction, by the action of diiodomethane and a zinc-copper couple on cyclohexene in diethyl ether.
References
Hydrocarbons
Cyclopropanes
Cyclohexanes
Bicycloalkanes | Norcarane | [
"Chemistry"
] | 89 | [
"Organic compounds",
"Hydrocarbons"
] |
546,101 | https://en.wikipedia.org/wiki/State%20space%20%28computer%20science%29 | In computer science, a state space is a discrete space representing the set of all possible configurations of a "system". It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.
For instance, the toy problem Vacuum World has a discrete finite state space in which there is a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum forms a continuous (and therefore infinite) state space.
Definition
State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G] where:
N is a set of states
A is a set of arcs connecting the states
S is a nonempty subset of N that contains start states
G is a nonempty subset of N that contains the goal states.
Properties
A state space has some common properties:
complexity, where branching factor is important
structure of the space, see also graph theory:
directionality of arcs
tree
rooted graph
For example, the Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in 1 of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree since it is possible to enter a loop by moving between any 4 adjacent squares.
State spaces can be either infinite or finite, and discrete or continuous.
Size
The size of the state space for a given system is the number of possible configurations of the space.
Finite
If the size of the state space is finite, calculating the size of the state space is a combinatorial problem. For example, in the Eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or
This is significantly greater than the number of legal configurations of the queens, 92. In many games the effective state space is small compared to all reachable/legal states. This property is also observed in Chess, where the effective state space is the set of positions that can be reached by game-legal moves. This is far smaller than the set of positions that can be achieved by placing combinations of the available chess pieces directly on the board.
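The count above can be verified with Python's built-in binomial coefficient:

```python
import math

# Choosing 8 positions without replacement from 64 squares:
size = math.comb(64, 8)
print(size)          # 4426165368
print(size // 92)    # raw placements per legal eight-queens solution (~48 million)
```

The ratio in the second line makes the point in the text concrete: only a vanishing fraction of the raw state space consists of legal configurations.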
Infinite
All continuous state spaces can be described by a corresponding continuous function and are therefore infinite. Discrete state spaces can also have (countably) infinite size, such as the state space of the time-dependent "counter" system, similar to the system in queueing theory defining the number of customers in a line, which would have state space {0, 1, 2, 3, ...}.
Exploration
Exploring a state space is the process of enumerating possible states in search of a goal state. The state space of Pacman, for example, contains a goal state whenever all food pellets have been eaten, and is explored by moving Pacman around the board.
Search states
A search state is a compressed representation of a world state in a state space, and is used for exploration. Search states are used because a state space often encodes more information than is necessary to explore the space. Compressing each world state to only information needed for exploration improves efficiency by reducing the number of states in the search. For example, a state in the Pacman space includes information about the direction Pacman is facing (up, down, left, or right). Since it does not cost anything to change directions in Pacman, search states for Pacman would not include this information and reduce the size of the search space by a factor of 4, one for each direction Pacman could be facing.
Methods
Standard search algorithms are effective in exploring discrete state spaces. The following algorithms exhibit both completeness and optimality in searching a state space.
Breadth-First Search
A* Search
Uniform Cost Search
These methods do not extend naturally to exploring continuous state spaces. Exploring a continuous state space in search of a given goal state is equivalent to optimizing an arbitrary continuous function, which is not always possible; see mathematical optimization.
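A minimal breadth-first search over a discrete state space can be sketched as follows; the 4x4 grid and its moves are illustrative stand-ins, loosely modeled on the Vacuum World example above, not part of the original text:

```python
from collections import deque

def bfs(start, goal, neighbors):
    # Breadth-first search: complete on a finite space, and returns a
    # shortest path when every arc has equal cost.
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:       # first visit = shortest route here
                parent[nxt] = state
                frontier.append(nxt)
    return None

def grid_moves(state, size=4):
    # 4-connected moves, like the agent moving between adjacent squares
    x, y = state
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < size and 0 <= b < size]

print(bfs((0, 0), (3, 3), grid_moves))
```

The branching factor here is at most 4, matching the Vacuum World discussion, and the `parent` map doubles as the visited set that keeps the search from looping.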
See also
Phase space for information about phase state (like continuous state space) in physics and mathematics.
Probability space for information about state space in probability.
Game complexity theory, which relies on the state space of game outcomes
Cognitive Model#Dynamical systems for information about state space with a dynamical systems model of cognition.
State space planning
State (computer science)
Artificial intelligence
Dynamical systems
Glossary of artificial intelligence
Machine learning
Mathematical optimization
Multi-agent system
Game theory
Combinatorics
References
Models of computation
Dynamical systems
Reconfiguration | State space (computer science) | [
"Physics",
"Mathematics"
] | 1,006 | [
"Reconfiguration",
"Computational problems",
"Mechanics",
"Mathematical problems",
"Dynamical systems"
] |
546,120 | https://en.wikipedia.org/wiki/Recurrence%20plot | In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment in time, the times at which the state of a dynamical system returns to the previous state at ,
i.e., when the phase space trajectory visits roughly the same area in the phase space as at time . In other words, it is a plot of
showing on a horizontal axis and on a vertical axis, where is the state of the system (or its phase space trajectory).
Background
Natural processes can have a distinct recurrent behaviour, e.g. periodicities (such as seasonal or Milankovitch cycles), but also irregular cyclicities (such as the El Niño Southern Oscillation or heart beat intervals). Moreover, the recurrence of states, in the meaning that states become arbitrarily close again after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems (cf. Poincaré recurrence theorem). The recurrence of states in nature has been known for a long time and has also been discussed in early work (e.g. Henri Poincaré 1890).
Detailed description
One way to visualize the recurring nature of states by their trajectory through a phase space is the recurrence plot, introduced by Eckmann et al. (1987). Often, the phase space does not have a low enough dimension (two or three) to be pictured, since higher-dimensional phase spaces can only be visualized by projection into two- or three-dimensional sub-spaces. One frequently used tool to study the behaviour of such phase space trajectories is the Poincaré map. Another tool is the recurrence plot, which enables us to investigate many aspects of the m-dimensional phase space trajectory through a two-dimensional representation.
At a recurrence the trajectory returns to a location (state) in phase space it has visited before, up to a small error ε. The recurrence plot represents the collection of pairs of times of such recurrences, i.e., the set of pairs (i, j) with x(i) ≈ x(j), where i and j are discrete points of time and x(i) is the state of the system at time i (the location of the trajectory at time i).
Mathematically, this is expressed by the binary recurrence matrix
where is a norm and the recurrence threshold. An alternative, more formal expression is using the Heaviside step function
with the norm of distance vector between and .
Alternative recurrence definitions consider different distances , e.g., angular distance, fuzzy distance, or edit distance.
The recurrence plot visualises the recurrence matrix with a coloured (mostly black) dot at the coordinates (i, j) wherever a recurrence occurs, with time along both the i- and j-axes.
If only a univariate time series is available, the phase space can be reconstructed, e.g., by using a time delay embedding (see Takens' theorem):
where is the time series (with and the sampling time), the embedding dimension and the time delay. However, phase space reconstruction is not an essential part of the recurrence plot (although often stated in the literature), because it is based on phase space trajectories, which could be derived from the system's variables directly (e.g., from the three variables of the Lorenz system) or from multivariate data.
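Both steps, the time-delay embedding and the thresholded recurrence matrix, can be sketched in a few lines. The period-25 sine series, the embedding parameters and the threshold below are illustrative choices, not taken from the text:

```python
import math

def delay_embed(x, m, tau):
    # Time-delay embedding: v_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)*tau})
    return [x[i:i + m * tau:tau] for i in range(len(x) - (m - 1) * tau)]

def recurrence_matrix(vectors, eps):
    # R[i][j] = 1 if the Euclidean distance ||v_i - v_j|| <= eps, else 0
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return [[1 if dist(vi, vj) <= eps else 0 for vj in vectors]
            for vi in vectors]

x = [math.sin(2 * math.pi * i / 25) for i in range(120)]  # period-25 signal
R = recurrence_matrix(delay_embed(x, m=2, tau=6), eps=0.15)
# For this periodic signal, recurrences appear as diagonal lines offset
# from the main diagonal by multiples of the period (25 samples).
```

Plotting R as an image (dark dot where R[i][j] = 1) reproduces the diagonal-line texture described above for a strictly periodic trajectory.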
The visual appearance of a recurrence plot gives hints about the dynamics of the system. Caused by characteristic behaviour of the phase space trajectory, a recurrence plot contains typical small-scale structures, such as single dots, diagonal lines and vertical/horizontal lines (or a mixture of the latter, which combine to extended clusters). The large-scale structure, also called texture, can be visually characterised as homogeneous, periodic, drift or disrupted. For example, if the trajectory is strictly periodic with period T, then all such pairs of times will be separated by a multiple of T and be visible as diagonal lines.
The small-scale structures in recurrence plots contain information about certain characteristics of the dynamics of the underlying system. For example, the lengths of the diagonal lines visible in the recurrence plot are related to the divergence of phase space trajectories and thus can carry information about the chaoticity. Therefore, the recurrence quantification analysis quantifies the distribution of these small-scale structures. This quantification can be used to describe recurrence plots in a quantitative way. Applications are classification, predictions, nonlinear parameter estimation, and transition analysis. In contrast to the heuristic approach of the recurrence quantification analysis, which depends on the choice of the embedding parameters, some dynamical invariants, such as the correlation dimension, the K2 entropy or the mutual information, which are independent of the embedding, can also be derived from recurrence plots. The basis for these dynamical invariants is the recurrence rate and the distribution of the lengths of the diagonal lines. More recent applications use recurrence plots as a tool for time series imaging in machine learning approaches and for studying spatio-temporal recurrences.
Close returns plots are similar to recurrence plots. The difference is that the relative time between recurrences is used for the y-axis (instead of absolute time).
The main advantage of recurrence plots is that they provide useful information even for short and non-stationary data, where other methods fail.
Extensions
Multivariate extensions of recurrence plots were developed as cross recurrence plots and joint recurrence plots.
Cross recurrence plots consider the phase space trajectories of two different systems in the same phase space:
The dimension of both systems must be the same, but the number of considered states (i.e. data length) can be different. Cross recurrence plots compare the occurrences of similar states of two systems. They can be used in order to analyse the similarity of the dynamical evolution between two different systems, to look for similar matching patterns in two systems, or to study the time relationship of two similar systems whose time scales differ.
Joint recurrence plots are the Hadamard product of the recurrence plots of the considered sub-systems, e.g. for two systems and the joint recurrence plot is
In contrast to cross recurrence plots, joint recurrence plots compare the simultaneous occurrence of recurrences in two (or more) systems. Moreover, the dimension of the considered phase spaces can be different, but the number of the considered states has to be the same for all the sub-systems. Joint recurrence plots can be used in order to detect phase synchronisation.
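The Hadamard product definition can be sketched directly; the two small binary matrices below are illustrative stand-ins for recurrence matrices of the sub-systems:

```python
def joint_recurrence(Rx, Ry):
    # Joint recurrence plot: Hadamard (elementwise) product of the two
    # recurrence matrices; a 1 appears only where BOTH systems recur at (i, j).
    return [[a * b for a, b in zip(rx, ry)] for rx, ry in zip(Rx, Ry)]

Rx = [[1, 0, 1],
      [0, 1, 0],
      [1, 0, 1]]
Ry = [[1, 1, 0],
      [1, 1, 0],
      [0, 0, 1]]
print(joint_recurrence(Rx, Ry))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Note that, unlike a cross recurrence plot, the two matrices must have the same number of states (same shape), while the underlying phase-space dimensions may differ.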
See also
Poincaré plot
Recurrence period density entropy, an information-theoretic method for summarising the recurrence properties of both deterministic and stochastic dynamical systems.
Recurrence quantification analysis, a heuristic approach to quantify recurrence plots.
Self-similarity matrix
Dot plot (bioinformatics)
References
External links
Recurrence Plot
Plots (graphics)
Signal processing
Dynamical systems
Visualization (graphics)
Chaos theory
Scaling symmetries | Recurrence plot | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,481 | [
"Symmetry",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Mechanics",
"Scaling symmetries",
"Dynamical systems"
] |
546,406 | https://en.wikipedia.org/wiki/Toll-like%20receptor | Toll-like receptors (TLRs) are a class of proteins that play a key role in the innate immune system. They are single-spanning receptors usually expressed on sentinel cells such as macrophages and dendritic cells, that recognize structurally conserved molecules derived from microbes. Once these microbes have reached physical barriers such as the skin or intestinal tract mucosa, they are recognized by TLRs, which activate immune cell responses. The TLRs include TLR1, TLR2, TLR3, TLR4, TLR5, TLR6, TLR7, TLR8, TLR9, TLR10, TLR11, TLR12, and TLR13. Humans lack genes for TLR11, TLR12 and TLR13 and mice lack a functional gene for TLR10. The receptors TLR1, TLR2, TLR4, TLR5, TLR6, and TLR10 are located on the cell membrane, whereas TLR3, TLR7, TLR8, and TLR9 are located in intracellular vesicles (because they are sensors of nucleic acids).
TLRs received their name from their similarity to the protein coded by the toll gene.
Function
The ability of the immune system to recognize molecules that are broadly shared by pathogens is, in part, due to the presence of immune receptors called toll-like receptors (TLRs) that are expressed on the membranes of leukocytes, including dendritic cells, macrophages, natural killer cells, cells of adaptive immunity (T cells and B cells), and non-immune cells (epithelial and endothelial cells, and fibroblasts).
The binding of ligands — either in the form of adjuvant used in vaccinations or in the form of invasive moieties during times of natural infection — to the TLR marks the key molecular events that ultimately lead to innate immune responses and the development of antigen-specific acquired immunity.
Upon activation, TLRs recruit adaptor proteins (proteins that mediate other protein-protein interactions) within the cytosol of the immune cell to propagate the antigen-induced signal transduction pathway. These recruited proteins are then responsible for the subsequent activation of other downstream proteins, including protein kinases (IKKi, IRAK1, IRAK4, and TBK1) that further amplify the signal and ultimately lead to the upregulation or suppression of genes that orchestrate inflammatory responses and other transcriptional events. Some of these events lead to cytokine production, proliferation, and survival, while others lead to greater adaptive immunity. If the ligand is a bacterial factor, the pathogen might be phagocytosed and digested, and its antigens presented to CD4+ T cells.
In the case of a viral factor, the infected cell may shut off its protein synthesis and may undergo programmed cell death (apoptosis). Immune cells that have detected a virus may also release anti-viral factors such as interferons.
Toll-like receptors have also been shown to be an important link between innate and adaptive immunity through their presence in dendritic cells. Flagellin, a TLR5 ligand, induces cytokine secretion on interacting with TLR5 on human T cells.
Superfamily
TLRs are a type of pattern recognition receptor (PRR) and recognize molecules that are broadly shared by pathogens but distinguishable from host molecules, collectively referred to as pathogen-associated molecular patterns (PAMPs). In addition to the recognition of exogenous PAMPs, TLRs can also bind to endogenous damage-associated molecular patterns (DAMPs) such as heat shock proteins (HSPs) or plasma membrane constituents. TLRs together with the Interleukin-1 receptors form a receptor superfamily, known as the "interleukin-1 receptor / toll-like receptor superfamily"; all members of this family have in common a so-called TIR (toll-IL-1 receptor) domain.
Three subgroups of TIR domains exist. Proteins with subgroup 1 TIR domains are receptors for interleukins that are produced by macrophages, monocytes, and dendritic cells and all have extracellular Immunoglobulin (Ig) domains. Proteins with subgroup 2 TIR domains are classical TLRs, and bind directly or indirectly to molecules of microbial origin. A third subgroup of proteins containing TIR domains consists of adaptor proteins that are exclusively cytosolic and mediate signaling from proteins of subgroups 1 and 2.
Extended family
TLRs are present in vertebrates as well as invertebrates. Molecular building blocks of the TLRs are represented in bacteria and in plants, and plant pattern recognition receptors are well known to be required for host defence against infection. The TLRs thus appear to be one of the most ancient, conserved components of the immune system.
In recent years TLRs were identified also in the mammalian nervous system. Members of the TLR family were detected on glia, neurons and on neural progenitor cells in which they regulate cell-fate decision.
It has been estimated that most mammalian species have between ten and fifteen types of toll-like receptors. Thirteen TLRs (named simply TLR1 to TLR13) have been identified in humans and mice together, and equivalent forms of many of these have been found in other mammalian species. However, equivalents of certain TLR found in humans are not present in all mammals. For example, a gene coding for a protein analogous to TLR10 in humans is present in mice, but appears to have been damaged at some point in the past by a retrovirus. On the other hand, mice express TLRs 11, 12, and 13, none of which is represented in humans. Other mammals may express TLRs that are not found in humans. Other non-mammalian species may have TLRs distinct from mammals, as demonstrated by the anti-cell-wall TLR14, which is found in the Takifugu pufferfish. This may complicate the process of using experimental animals as models of human innate immunity.
Vertebrate TLRs are divided by similarity into the families of TLR 1/2/6/10/14/15, TLR 3, TLR 4, TLR 5, TLR 7/8/9, and TLR 11/12/13/16/21/22/23.
TLRs in Drosophila immunity
The involvement of toll signalling in immunity was first demonstrated in the fruit fly, Drosophila melanogaster. Fruit flies have only innate immune responses, allowing studies to avoid interference of adaptive immune mechanisms on signal transduction. The fly response to fungal or bacterial infection occurs through two distinct signalling cascades, one of which is the toll pathway and the other is the immune deficiency pathway. The toll pathway is similar to mammalian TLR signalling, but unlike mammalian TLRs, toll is not activated directly by pathogen-associated molecular patterns (PAMPs). Its receptor ectodomain recognizes the cleaved form of the cytokine spätzle, which is secreted in the haemolymph as an inactive dimeric precursor. The toll receptor shares the cytoplasmic TIR domain with mammalian TLRs, but the ectodomain and intracytoplasmic tail are different. This difference might reflect a function of these receptors as cytokine receptors rather than PRRs.
The toll pathway is activated by different stimuli, such as gram-positive bacteria, fungi, and virulence factors. First, the Spätzle processing enzyme (SPE) is activated in response to infection and cleaves spätzle (spz). Cleaved spätzle then binds to the toll receptor and crosslinks its ectodomains. This triggers conformational changes in the receptor resulting in signalling through toll. From this point forward, the signalling cascade is very similar to mammalian signalling through TLRs. The toll-induced signalling complex (TICS) is composed of MyD88, Tube, and Pelle (the orthologue of mammalian IRAK). Signal from TICS is then transduced to Cactus (homologue of mammalian IκB), phosphorylated Cactus is polyubiquitylated and degraded, allowing nuclear translocation of DIF (dorsal-related immunity factor; a homologue of mammalian NF-κB) and induction of transcription of genes for antimicrobial peptides (AMPs) such as drosomycin.
Drosophila have a total of 9 toll family and 6 spz family genes that interact with each other to differing degrees.
TLR2
TLR2 has also been designated as CD282 (cluster of differentiation 282).
TLR3
TLR3 does not use the MyD88-dependent pathway. Its ligand is retroviral double-stranded RNA (dsRNA), which activates the TRIF-dependent signalling pathway. To explore the role of this pathway in retroviral reprogramming, knockdown techniques for TLR3 or TRIF were prepared, and the results showed that only the TLR3 pathway is required for full induction of target gene expression by the retroviral expression vector. This retroviral expression of four transcription factors (Oct4, Sox2, Klf4 and c-Myc; OSKM) induces pluripotency in somatic cells. This is supported by a study showing that the efficiency and yield of human iPSC generation using retroviral vectors are reduced by knockdown of the pathway with peptide inhibitors or by shRNA knockdown of TLR3 or its adaptor protein TRIF. Taken together, stimulation of TLR3 causes marked changes in chromatin remodeling and nuclear reprogramming, and activation of inflammatory pathways is required for these changes, for the induction of pluripotency genes, and for the generation of human induced pluripotent stem cell (iPSC) colonies.
TLR11
As noted above, human cells do not express TLR11, but mouse cells do. Mouse-specific TLR11 recognizes uropathogenic E. coli and the apicomplexan parasite Toxoplasma gondii. For Toxoplasma, its ligand is the protein profilin, while for E. coli it is flagellin. The flagellin from the enteropathogen Salmonella is also recognized by TLR11.
As mouse TLR11 is able to recognize Salmonella effectively, normal mice do not get infected by oral Salmonella Typhi, which causes food- and waterborne gastroenteritis and typhoid fever in humans. TLR11 deficient knockout mice, on the other hand, are efficiently infected. As a result, this knockout mouse can act as a disease model of human typhoid fever.
Summary of known mammalian TLRs
Toll-like receptors bind and become activated by different ligands, which, in turn, are located on different types of organisms or structures. They also have different adapters to respond to activation and are located sometimes at the cell surface and sometimes in internal cell compartments. Furthermore, they are expressed by different types of leucocytes or other cell types:
Ligands
Because of their specificity, toll-like receptors (and other innate immune receptors) cannot easily be changed in the course of evolution; these receptors recognize molecules that are constantly associated with threats (i.e., pathogen or cell stress) and are highly specific to these threats (i.e., cannot be mistaken for self molecules that are normally expressed under physiological conditions). Pathogen-associated molecules that meet this requirement are thought to be critical to the pathogen's function and difficult to change through mutation; they are said to be evolutionarily conserved. Somewhat conserved features in pathogens include bacterial cell-surface lipopolysaccharides (LPS), lipoproteins, lipopeptides, and lipoarabinomannan; proteins such as flagellin from bacterial flagella; double-stranded RNA of viruses; the unmethylated CpG islands of bacterial and viral DNA and the CpG islands found in the promoters of eukaryotic DNA; as well as certain other RNA and DNA molecules. As TLR ligands are present in most pathogens, they may also be present in pathogen-derived vaccines (e.g. MMR, influenza, and polio vaccines); most commercially available vaccines have been assessed for the capacity of their inherent TLR ligands to activate distinct subsets of immune cells. For most of the TLRs, ligand recognition specificity has now been established by gene targeting (also known as "gene knockout"): a technique by which individual genes may be selectively deleted in mice. See the table above for a summary of known TLR ligands.
Endogenous ligands
The stereotypic inflammatory response provoked by toll-like receptor activation has prompted speculation that endogenous activators of toll-like receptors might participate in autoimmune diseases. TLRs have been suspected of binding to host molecules including fibrinogen (involved in blood clotting), heat shock proteins (HSPs), HMGB1, extracellular matrix components and self DNA (it is normally degraded by nucleases, but under inflammatory and autoimmune conditions it can form a complex with endogenous proteins, become resistant to these nucleases and gain access to endosomal TLRs as TLR7 or TLR9). These endogenous ligands are usually produced as a result of non-physiological cell death.
Signaling
TLRs are believed to function as dimers. Though most TLRs appear to function as homodimers, TLR2 forms heterodimers with TLR1 or TLR6, each dimer having a different ligand specificity. TLRs may also depend on other co-receptors for full ligand sensitivity, such as in the case of TLR4's recognition of LPS, which requires MD-2. CD14 and LPS-Binding Protein (LBP) are known to facilitate the presentation of LPS to MD-2.
A set of endosomal TLRs comprising TLR3, TLR7, TLR8 and TLR9 recognize nucleic acids derived from viruses as well as endogenous nucleic acids in the context of pathogenic events. Activation of these receptors leads to production of inflammatory cytokines as well as type I interferons to help fight viral infection.
The adapter proteins and kinases that mediate TLR signaling have also been targeted. In addition, random germline mutagenesis with ENU has been used to decipher the TLR signaling pathways. When activated, TLRs recruit adapter molecules within the cytoplasm of cells to propagate a signal. Four adapter molecules are known to be involved in signaling. These proteins are known as MyD88, TIRAP (also called Mal), TRIF, and TRAM (TRIF-related adaptor molecule).
TLR signaling is divided into two distinct signaling pathways, the MyD88-dependent and TRIF-dependent pathway.
MyD88-dependent pathway
The MyD88-dependent response occurs on dimerization of TLRs, and is used by every TLR except TLR3. Its primary effect is activation of NFκB and mitogen-activated protein kinase. Ligand binding and the conformational change that occurs in the receptor recruit the adaptor protein MyD88, a member of the TIR family. MyD88 then recruits IRAK4, IRAK1 and IRAK2. IRAK kinases then phosphorylate and activate the protein TRAF6, which in turn polyubiquitinates the protein TAK1, as well as itself, to facilitate binding to IKK-β. On binding, TAK1 phosphorylates IKK-β, which then phosphorylates IκB causing its degradation and allowing NFκB to diffuse into the cell nucleus and activate transcription and consequent induction of inflammatory cytokines.
TRIF-dependent pathway
Both TLR3 and TLR4 use the TRIF-dependent pathway, which is triggered by dsRNA and LPS, respectively. For TLR3, dsRNA leads to activation of the receptor, recruiting the adaptor TRIF. TRIF activates the kinases TBK1 and RIPK1, which creates a branch in the signaling pathway. The TRIF/TBK1 signaling complex phosphorylates IRF3 allowing its translocation into the nucleus and production of interferon type I. Meanwhile, activation of RIPK1 causes the polyubiquitination and activation of TAK1 and NFκB transcription in the same manner as the MyD88-dependent pathway.
TLR signaling ultimately leads to the induction or suppression of genes that orchestrate the inflammatory response. In all, thousands of genes are activated by TLR signaling, and collectively, the TLRs constitute one of the most pleiotropic yet tightly regulated gateways for gene modulation.
TLR4 is the only TLR that uses all four adaptors. The complex consisting of TLR4, MD2 and LPS recruits the TIR domain-containing adaptors TIRAP and MyD88 and thus initiates activation of NFκB (early phase) and MAPK. The TLR4–MD2–LPS complex then undergoes endocytosis, and in the endosome it forms a signaling complex with the TRAM and TRIF adaptors. This TRIF-dependent pathway again leads to IRF3 activation and production of type I interferons, but it also activates late-phase NFκB activation. Both late and early phase activation of NFκB is required for production of inflammatory cytokines.
Medical relevance
Imiquimod (used chiefly in dermatology) is a TLR7 agonist, and its successor, resiquimod, is a TLR7 and TLR8 agonist. Recently, resiquimod has been explored as an agent for cancer immunotherapy, acting through stimulation of tumor-associated macrophages.
Several TLR ligands are in clinical development or being tested in animal models as vaccine adjuvants, with the first clinical use in humans in a recombinant herpes zoster vaccine in 2017, which contains a monophosphoryl lipid A component.
TLR7 messenger RNA expression levels in dairy animals in a natural outbreak of foot-and-mouth disease have been reported.
TLR4 has been shown to be important for the long-term side-effects of opioids. Its activation leads to downstream release of inflammatory modulators including TNF-α and IL-1β, and constant low-level release of these modulators is thought to reduce the efficacy of opioid drug treatment over time and to be involved in opioid tolerance, hyperalgesia and allodynia. Morphine-induced TLR4 activation attenuates pain suppression by opioids and enhances the development of opioid tolerance and addiction, drug abuse, and other negative side effects such as respiratory depression and hyperalgesia. Drugs that block the action of TNF-α or IL-1β have been shown to increase the analgesic effects of opioids and reduce the development of tolerance and other side-effects, and this has also been demonstrated with drugs that block TLR4 itself.
The "unnatural" enantiomers of opioid drugs, such as (+)-morphine and (+)-naloxone, lack affinity for opioid receptors but still produce the same activity at TLR4 as their "natural" enantiomers. Thus "unnatural" enantiomers of opioids, such as (+)-naloxone, can be used to block the TLR4 activity of opioid analgesic drugs without having any affinity for the μ-opioid receptor.
Discovery
When microbes were first recognized as the cause of infectious diseases, it was immediately clear that multicellular organisms must be capable of recognizing them when infected and, hence, capable of recognizing molecules unique to microbes. A large body of literature, spanning most of the last century, attests to the search for the key molecules and their receptors. More than 100 years ago, Richard Pfeiffer, a student of Robert Koch, coined the term "endotoxin" to describe a substance produced by Gram-negative bacteria that could provoke fever and shock in experimental animals. In the decades that followed, endotoxin was chemically characterized and identified as a lipopolysaccharide (LPS) produced by most Gram-negative bacteria. This lipopolysaccharide is an integral part of the gram-negative membrane and is released upon destruction of the bacterium. Other molecules (bacterial lipopeptides, flagellin, and unmethylated DNA) were shown in turn to provoke host responses that are normally protective. However, these responses can be detrimental if they are excessively prolonged or intense. It followed logically that there must be receptors for such molecules, capable of alerting the host to the presence of infection, but these remained elusive for many years. Toll-like receptors are now counted among the key molecules that alert the immune system to the presence of microbial infections.
The prototypic member of the family, the toll receptor (Tl) in the fruit fly Drosophila melanogaster, was discovered in 1985 by 1995 Nobel Laureates Christiane Nüsslein-Volhard and Eric Wieschaus and colleagues. It was known for its developmental function in embryogenesis by establishing the dorsal-ventral axis. It was named after Christiane Nüsslein-Volhard's 1985 exclamation, "Das ist ja toll!" ("That's amazing!"), in reference to the underdeveloped ventral portion of a fruit fly larva. It was cloned by the laboratory of Kathryn Anderson in 1988. In 1996, toll was found by Jules A. Hoffmann and his colleagues to have an essential role in the fly's immunity to fungal infection, which it achieved by activating the synthesis of antimicrobial peptides.
The first reported human toll-like receptor was described by Nomura and colleagues in 1994, and mapped to a chromosome by Taguchi and colleagues in 1996. Because the immune function of toll in Drosophila was not then known, it was assumed that TIL (now known as TLR1) might participate in mammalian development. However, in 1991 (prior to the discovery of TIL) it was observed that a molecule with a clear role in immune function in mammals, the interleukin-1 (IL-1) receptor, also had homology to Drosophila toll; the cytoplasmic portions of both molecules were similar.
In 1997, Charles Janeway and Ruslan Medzhitov showed that a toll-like receptor now known as TLR4 could, when artificially ligated using antibodies, induce the activation of certain genes necessary for initiating an adaptive immune response. TLR4's function as an LPS-sensing receptor was discovered by Bruce A. Beutler and colleagues. These workers used positional cloning to prove that mice that could not respond to LPS had mutations that abolished the function of TLR4. This identified TLR4 as one of the key components of the receptor for LPS.
In turn, the other TLR genes were ablated in mice by gene targeting, largely in the laboratory of Shizuo Akira and colleagues. Each TLR is now believed to detect a discrete collection of molecules — some of microbial origin, and some products of cell damage — and to signal the presence of infections.
Plant homologs of toll were discovered by Pamela Ronald in 1995 (rice XA21) and Thomas Boller in 2000 (Arabidopsis FLS2).
In 2011, Beutler and Hoffmann were awarded the Nobel Prize in Medicine or Physiology for their work. Hoffmann and Akira received the Canada Gairdner International Award in 2011.
Notes and references
See also
NOD-like receptor
Immunologic adjuvant
RIG-I-like receptor
External links
TollML: Toll-like receptors and ligands database at University of Munich
The Toll-Like Receptor Family of Innate Immune Receptors (pdf)
Toll-Like receptor Pathway
BioScience Animations
Developmental genetics
Insect immunity
LRR proteins
Signal transduction | Toll-like receptor | [
"Chemistry",
"Biology"
] | 5,114 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
546,674 | https://en.wikipedia.org/wiki/126%20%28number%29 | 126 (one hundred [and] twenty-six) is the natural number following 125 and preceding 127.
In mathematics
As the binomial coefficient C(9, 4), 126 is a central binomial coefficient, and in Pascal's Triangle, it is a pentatope number. 126 is a sum of two cubes (5³ + 1³), and since 125 + 1 is σ3(5), 126 is the fifth value of the sum of cubed divisors function.
126 is the fifth 𝒮-perfect Granville number, and the third such not to be a perfect number. Also, it is known to be the smallest Granville number with three distinct prime factors, and perhaps the only such Granville number.
126 is a pentagonal pyramidal number and a decagonal number. 126 is also the number of different ways to partition a decagon into even polygons by diagonals, and the number of crossing points among the diagonals of a regular nonagon.
There are exactly 126 binary strings of length seven that are not repetitions of a shorter string, and 126 different semigroups on four elements (up to isomorphism and reversal).
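Several of the combinatorial claims above can be spot-checked with a short brute-force script; the calculations below follow the standard definitions (binomial coefficient, sum of cubed divisors, minimal-period counting) and are illustrative, not part of the source:

```python
# Spot-check some stated properties of 126.
from math import comb

# Central entry of row 9 of Pascal's triangle: C(9, 4) = C(9, 5) = 126
assert comb(9, 4) == 126

# Sum of two cubes: 5^3 + 1^3
assert 5**3 + 1**3 == 126

# sigma_3(5): sum of the cubed divisors of 5 (divisors 1 and 5)
sigma3_5 = sum(d**3 for d in (1, 5))
assert sigma3_5 == 126

def aperiodic_count(n: int) -> int:
    # Count binary strings of length n that are NOT a repetition of a
    # shorter string, by checking every proper period directly.
    def divisors(m):
        return [d for d in range(1, m + 1) if m % d == 0]
    total = 0
    for bits in range(2 ** n):
        s = format(bits, f"0{n}b")
        if all(s != s[:d] * (n // d) for d in divisors(n) if d < n):
            total += 1
    return total

# For prime length 7 only the two constant strings are periodic: 2^7 - 2
print(aperiodic_count(7))  # 126
```

For a prime length p the count reduces to 2^p − 2, which is why length 7 gives exactly 126.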
There are exactly 126 positive integers that are not solutions of the equation
where a, b, c, and d must themselves all be positive integers.
126 is the number of root vectors of simple Lie group E7.
In physics
126 is the seventh magic number in nuclear physics. For each of these numbers, 2, 8, 20, 28, 50, 82, and 126, an atomic nucleus with this many protons is or is predicted to be more stable than for other numbers. Thus, although there has been no experimental discovery of element 126, tentatively called unbihexium, it is predicted to belong to an island of stability that might allow it to exist with a long enough half life that its existence could be detected.
References
Integers | 126 (number) | [
"Mathematics"
] | 379 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
547,201 | https://en.wikipedia.org/wiki/Quadrupole%20magnet | Quadrupole magnets, abbreviated as Q-magnets, consist of groups of four magnets laid out so that in the planar multipole expansion of the field, the dipole terms cancel and where the lowest significant terms in the field equations are quadrupole. Quadrupole magnets are useful as they create a magnetic field whose magnitude grows rapidly with the radial distance from its longitudinal axis. This is used in particle beam focusing.
The simplest magnetic quadrupole is two identical bar magnets parallel to each other such that the north pole of one is next to the south of the other and vice versa. Such a configuration will have no dipole moment, and its field will decrease at large distances faster than that of a dipole. A stronger version with very little external field involves using a k=3 Halbach cylinder.
In some designs of quadrupoles using electromagnets, there are four steel pole tips: two opposing magnetic north poles and two opposing magnetic south poles. The steel is magnetized by a large electric current in the coils of tubing wrapped around the poles. Another design is a Helmholtz coil layout but with the current in one of the coils reversed.
Quadrupoles in particle accelerators
At the particle speeds reached in high energy particle accelerators, the magnetic force term is larger than the electric term in the Lorentz force:

F = q(E + v × B),
and thus magnetic deflection is more effective than electrostatic deflection. Therefore a 'lattice' of electromagnets is used to bend, steer and focus a charged particle beam.
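The size of this effect can be illustrated numerically. The field values below are assumptions chosen near practical limits (a ~10 MV/m electrostatic field versus a modest 1 T magnet), not figures from the text:

```python
# Compare the electric and magnetic terms of the Lorentz force for a
# relativistic particle (illustrative field values).
q = 1.602e-19        # elementary charge, C
c = 2.998e8          # speed of light, m/s
v = 0.999 * c        # particle speed, m/s

E = 1.0e7            # electric field, V/m (near practical limits)
B = 1.0              # magnetic field, T (modest electromagnet)

F_electric = q * E
F_magnetic = q * v * B   # |v x B| with v perpendicular to B

print(F_magnetic / F_electric)   # ~30: the magnetic term dominates
```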
The quadrupoles in the lattice are of two types: 'F quadrupoles' (which are horizontally focusing but vertically defocusing) and 'D quadrupoles' (which are vertically focusing but horizontally defocusing). This situation is due to the laws of electromagnetism (the Maxwell equations) which show that it is impossible for a quadrupole to focus in both planes at the same time. The image on the right shows an example of a quadrupole focusing in the vertical direction for a positively charged particle going into the image plane (forces above and below the center point towards the center) while defocusing in the horizontal direction (forces left and right of the center point away from the center).
If an F quadrupole and a D quadrupole are placed immediately next to each other, their fields completely cancel out (in accordance with Earnshaw's theorem). But if there is a space between them (and the length of this has been correctly chosen), the overall effect is focusing in both horizontal and vertical planes. A lattice can then be built up enabling the transport of the beam over long distances—for example round an entire ring. A common lattice is a FODO lattice consisting of a basis of a focusing quadrupole, 'nothing' (often a bending magnet), a defocusing quadrupole and another length of 'nothing'.
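The net-focusing property of an F–D pair described above can be sketched with thin-lens transfer matrices; the focal length and spacing below are illustrative choices, not values from the text:

```python
import numpy as np

# Thin-lens transfer matrices in one transverse plane. f > 0 focuses.
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

f, L = 2.0, 1.0   # illustrative focal length and spacing, metres

# Doublet: focusing quad, drift, defocusing quad
# (matrices multiply right-to-left in beam order).
M_doublet = lens(-f) @ drift(L) @ lens(f)

# Effective focal length: 1/f_eff = -M[1, 0]; positive => net focusing.
f_eff = -1.0 / M_doublet[1, 0]
print(f_eff)   # f**2 / L = 4.0 despite the defocusing quad

# In the other transverse plane the two lenses swap roles,
# yet the doublet still focuses there as well.
M_other = lens(f) @ drift(L) @ lens(-f)
print(-1.0 / M_other[1, 0])   # also positive
```

With zero spacing (L = 0) the two matrices multiply to the identity, reproducing the complete cancellation noted above.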
Equations of motion and focal length for charged particles
A charged particle beam in a quadrupole magnetic field will experience a focusing / defocusing force in the transverse direction. This focusing effect is summed up by a focusing strength k, which depends on the quadrupole gradient g as well as the beam's rigidity Bρ = p/q, where q is the electric charge of the particle and p is the relativistic momentum. The focusing strength is given by

k = g/(Bρ) = qg/p,

and particles in the magnetic field will behave according to the ODE

x″(s) + k x(s) = 0.
The same equation will be true for the y direction, but with a minus sign in front of the focusing strength to account for the field changing directions.
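A hedged numeric example of the focusing strength: the gradient, magnet length and beam momentum below are illustrative assumptions, and the thin-lens focal length f = 1/(kL) is the standard short-magnet approximation:

```python
# Focusing strength k = q*g/p = g/(B*rho) for an illustrative case.
q = 1.602e-19              # proton charge, C
c = 2.998e8                # speed of light, m/s

p = 1e10 * 1.602e-19 / c   # 10 GeV/c proton momentum, kg*m/s
g = 20.0                   # quadrupole gradient, T/m (assumed)
L = 0.5                    # magnet length, m (assumed)

B_rho = p / q              # beam rigidity, T*m (~33.4)
k = g / B_rho              # focusing strength, 1/m^2 (~0.60)
f = 1.0 / (k * L)          # thin-lens focal length, m (~3.3)
print(B_rho, k, f)
```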
Quadrupole ideal field
The components of the ideal magnetic field in the plane transverse to the beam are given by the following (see also multipole magnet):

Bx = g y + gs x
By = g x − gs y

where g is the field gradient of the normal quadrupole component and gs is the field gradient of the skew quadrupole component. The SI unit of the field gradients is T/m (tesla per metre). The field in a normal quadrupole is such that the magnetic poles are arranged with an angle of 45 degrees to the horizontal and vertical planes. The sign of g determines whether (for a fixed particle charge and direction) the quadrupole focuses or defocuses particles in the horizontal plane.
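The focus/defocus behaviour can be checked directly from the Lorentz force, using the normal-quadrupole components (Bx = g·y, By = g·x in one common convention, skew term set to zero) and normalized units; the numbers are purely illustrative:

```python
import numpy as np

# Sign check of the quadrupole force on a positive particle moving
# along +z, in normalized units (q = v = g = 1).
q, vz, g = 1.0, 1.0, 1.0

def force(x, y):
    B = np.array([g * y, g * x, 0.0])   # normal quadrupole field
    v = np.array([0.0, 0.0, vz])
    return q * np.cross(v, B)           # Lorentz force q*(v x B)

Fx = force(0.01, 0.0)   # particle displaced horizontally
Fy = force(0.0, 0.01)   # particle displaced vertically
print(Fx[0], Fy[1])     # Fx < 0 (restoring), Fy > 0 (defocusing)
```

Flipping the sign of g (or the particle's charge) swaps which plane focuses, matching the text.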
See also
Charged particle beam
Dipole magnet
Electron optics
Halbach cylinder
Sextupole magnet
Multipole magnet
Accelerator physics
References
External links
Accelerator physics
Types of magnets | Quadrupole magnet | [
"Physics"
] | 891 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
547,219 | https://en.wikipedia.org/wiki/Dipole%20magnet | A dipole magnet is the simplest type of magnet. It has two poles, one north and one south. Its magnetic field lines form simple closed loops which emerge from the north pole, re-enter at the south pole, then pass through the body of the magnet. The simplest example of a dipole magnet is a bar magnet.
Dipole magnets in accelerators
In particle accelerators, a dipole magnet is the electromagnet used to create a homogeneous magnetic field over some distance. Particle motion in that field will be circular in a plane that is perpendicular to the field and collinear to the direction of particle motion, and free in the direction orthogonal to it.
Thus, a particle injected into a dipole magnet will travel on a circular or helical trajectory. By adding several dipole sections on the same plane, the bending radial effect of the beam increases.
In accelerator physics, dipole magnets are used to realize bends in the design trajectory (or 'orbit') of the particles, as in circular accelerators. Other uses include:
Injection of particles into the accelerator
Ejection of particles from the accelerator
Correction of orbit errors
Production of synchrotron radiation
The force on a charged particle in a particle accelerator from a dipole magnet can be described by the Lorentz force law, where a charged particle experiences a force of

F = q(E + v × B)
(in SI units). In the case of a particle accelerator dipole magnet, the charged particle beam is bent via the cross product of the particle's velocity and the magnetic field vector, with direction also being dependent on the charge of the particle.
The amount of force that can be applied to a charged particle by a dipole magnet is one of the limiting factors for modern synchrotron and cyclotron proton and ion accelerators. As the energy of the accelerated particles increases, they require more force to change direction and require larger B fields to be steered. Limitations on the amount of B field that can be produced with modern dipole electromagnets require synchrotrons/cyclotrons to increase in size (thus increasing the number of dipole magnets used) to compensate for increases in particle velocity. In the largest modern synchrotron, the Large Hadron Collider, there are 1232 main dipole magnets used for bending the path of the particle beam, each weighing 35 metric tons.
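The bending-radius relation implied by the Lorentz force, r = p/(qB), can be illustrated with the LHC figures above; the 8.33 T nominal dipole field is an assumption taken from accelerator literature, not stated in the text:

```python
# Bending radius r = p/(qB) for an ultrarelativistic 7 TeV proton
# in a nominal 8.33 T LHC dipole field (illustrative figures).
q = 1.602e-19              # proton charge, C
c = 2.998e8                # speed of light, m/s

p = 7e12 * 1.602e-19 / c   # 7 TeV/c proton momentum, kg*m/s
B = 8.33                   # dipole field, T

r = p / (q * B)            # bending radius, m
print(r)                   # ~2.8 km of local curvature in the dipoles
```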
Other uses
Other uses of dipole magnets to deflect moving particles include isotope mass measurement in mass spectrometry, and particle momentum measurement in particle physics.
Such magnets are also used in traditional televisions, which contain a cathode-ray tube, which is essentially a small particle accelerator. Their magnets are called deflecting coils. The magnets move a single spot on the screen of the TV tube in a controlled way all over the screen.
See also
Accelerator physics
Beam line
Cyclotron
Electromagnetism
Linear particle accelerator
Particle accelerator
Quadrupole magnet
Sextupole magnet
Multipole magnet
Storage ring
References
External links
Types of magnets
Accelerator physics | Dipole magnet | [
"Physics"
] | 623 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
547,400 | https://en.wikipedia.org/wiki/Molecular%20mechanics | Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid and the potential energy of all systems is calculated as a function of the nuclear coordinates using force fields. Molecular mechanics can be used to study molecule systems ranging in size and complexity from small to large biological systems or material assemblies with many thousands to millions of atoms.
All-atomistic molecular mechanics methods have the following properties:
Each atom is simulated as one particle
Each particle is assigned a radius (typically the van der Waals radius), polarizability, and a constant net charge (generally derived from quantum calculations and/or experiment)
Bonded interactions are treated as springs with an equilibrium distance equal to the experimental or calculated bond length
Variants on this theme are possible. For example, many simulations have historically used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid.
Functional form
The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy (E) in a given conformation as a sum of individual energy terms:

E = E(covalent) + E(noncovalent)

where the components of the covalent and noncovalent contributions are given by the following summations:

E(covalent) = E(bond) + E(angle) + E(dihedral)
E(noncovalent) = E(electrostatic) + E(van der Waals)
The exact functional form of the potential function, or force field, depends on the particular simulation program being used. Generally the bond and angle terms are modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or theoretical calculations of electronic structure performed with software which does ab-initio type calculations such as Gaussian. For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation. This class of terms may include improper dihedral terms, which function as correction factors for out-of-plane deviations (for example, they can be used to keep benzene rings planar, or correct geometry and chirality of tetrahedral atoms in a united-atom representation).
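The trade-off between the harmonic and Morse bond terms can be illustrated with toy parameters (not taken from any published force field); the Morse width a is chosen so both curves agree near equilibrium:

```python
import math

# Harmonic vs. Morse bond-stretch terms (illustrative parameters).
r0 = 1.0     # equilibrium bond length (arbitrary units)
kb = 100.0   # harmonic force constant
De = 50.0    # Morse well depth
a = math.sqrt(kb / (2.0 * De))   # matches curvatures at r = r0

def harmonic(r):
    return 0.5 * kb * (r - r0) ** 2

def morse(r):
    return De * (1.0 - math.exp(-a * (r - r0))) ** 2

# Nearly identical for small stretches; the Morse term flattens
# toward De (dissociation) while the harmonic term grows unbounded.
print(harmonic(1.05), morse(1.05))   # close
print(harmonic(3.0), morse(3.0))     # very different
```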
The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors, but interacts with every other atom in the molecule. Fortunately the van der Waals term falls off rapidly. It is typically modeled using a 6–12 Lennard-Jones potential, which means that attractive forces fall off with distance as r⁻⁶ and repulsive forces as r⁻¹², where r represents the distance between two atoms. The repulsive part r⁻¹² is however unphysical, because true repulsion increases exponentially. Description of van der Waals forces by the Lennard-Jones 6–12 potential introduces inaccuracies, which become significant at short distances. Generally a cutoff radius is used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero.
The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study (especially for proteins). The basic functional form is the Coulomb potential, which only falls off as r⁻¹. A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions that modulate the apparent electrostatic energy are somewhat more accurate methods that multiply the calculated energy by a smoothly varying scaling factor from 0 to 1 at the outer and inner cutoff radii. Other more sophisticated but computationally intensive methods are particle mesh Ewald (PME) and the multipole algorithm.
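A minimal sketch of these non-bonded terms: a 6–12 Lennard-Jones potential with a hard cutoff, and a Coulomb term smoothly switched to zero between an inner and outer radius. All parameters are illustrative, and the cubic switching polynomial is one common choice rather than any specific program's:

```python
# Non-bonded energy terms with cutoff and switching (illustrative).
eps, sigma = 0.25, 3.0     # LJ well depth and size (e.g. kcal/mol, A)
r_cut = 10.0               # LJ cutoff radius
r_on, r_off = 8.0, 10.0    # electrostatic switching window

def lj(r):
    if r >= r_cut:
        return 0.0
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)   # r^-12 repulsion, r^-6 attraction

def switch(r):
    # Smoothly interpolates from 1 at r_on down to 0 at r_off.
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    t = (r_off - r) / (r_off - r_on)
    return t * t * (3.0 - 2.0 * t)

def coulomb(q1, q2, r):
    # 332.06 converts e^2/Angstrom to kcal/mol (common MM convention).
    return 332.06 * q1 * q2 / r * switch(r)

print(lj(2 ** (1 / 6) * sigma))   # LJ minimum: -eps
print(coulomb(1.0, -1.0, 9.0))    # partially switched off
```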
In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms. These terms, together with the equilibrium bond, angle, and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field. Parameterization is typically done through agreement with experimental values and theoretical calculation results. Norman L. Allinger's force field in the last MM4 version calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol, vibrational spectra with an RMS error of 24 cm⁻¹, rotational barriers with an RMS error of 2.2°, bond lengths within 0.004 Å and angles within 1°. Later MM4 versions also cover compounds with heteroatoms such as aliphatic amines.
Each force field is parameterized to be internally consistent, but the parameters are generally not transferable from one force field to another.
Areas of application
The main use of molecular mechanics is in the field of molecular dynamics. This uses the force field to calculate the forces acting on each particle and a suitable integrator to model the dynamics of the particles and predict trajectories. Given enough sampling and subject to the ergodic hypothesis, molecular dynamics trajectories can be used to estimate thermodynamic parameters of a system or probe kinetic properties, such as reaction rates and mechanisms.
Molecular mechanics is also used within QM/MM, which allows study of proteins and enzyme kinetics. The system is divided into two regions—one of which is treated with quantum mechanics (QM) allowing breaking and formation of bonds and the rest of the protein is modeled using molecular mechanics (MM). MM alone does not allow the study of mechanisms of enzymes, which QM allows. QM also produces more exact energy calculation of the system although it is much more computationally expensive.
Another application of molecular mechanics is energy minimization, whereby the force field is used as an optimization criterion. This method uses an appropriate algorithm (e.g. steepest descent) to find the molecular structure of a local energy minimum. These minima correspond to stable conformers of the molecule (in the chosen force field) and molecular motion can be modelled as vibrations around and interconversions between these stable conformers. It is thus common to find local energy minimization methods combined with global energy optimization, to find the global energy minimum (and other low energy states). At finite temperature, the molecule spends most of its time in these low-lying states, which thus dominate the molecular properties. Global optimization can be accomplished using simulated annealing, the Metropolis algorithm and other Monte Carlo methods, or using different deterministic methods of discrete or continuous optimization. While the force field represents only the enthalpic component of free energy (and only this component is included during energy minimization), it is possible to include the entropic component through the use of additional methods, such as normal mode analysis.
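Steepest-descent minimization can be sketched on the simplest possible system, a Lennard-Jones dimer, whose minimum is known analytically at r = 2^(1/6)·σ; the step size and iteration count are arbitrary illustrative choices:

```python
# Steepest-descent energy minimization of a two-atom Lennard-Jones
# system in reduced units (eps = sigma = 1).
eps, sigma = 1.0, 1.0

def energy(r):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def grad(r, h=1e-6):
    # Central finite-difference gradient of the pair energy.
    return (energy(r + h) - energy(r - h)) / (2.0 * h)

r = 1.5          # starting separation, in the attractive tail
step = 0.01      # fixed step size along the downhill direction
for _ in range(2000):
    r -= step * grad(r)

print(r)   # ~1.1225 = 2**(1/6), the LJ equilibrium distance
```

Real minimizers add line searches or adaptive steps, but the downhill logic is the same.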
Molecular mechanics potential energy functions have been used to calculate binding constants, protein folding kinetics, protonation equilibria, active site coordinates, and to design binding sites.
Environment and solvation
In molecular mechanics, several ways exist to define the environment surrounding a molecule or molecules of interest. A system can be simulated in vacuum (termed a gas-phase simulation) with no surrounding environment, but this is usually undesirable because it introduces artifacts in the molecular geometry, especially in charged molecules. Surface charges that would ordinarily interact with solvent molecules instead interact with each other, producing molecular conformations that are unlikely to be present in any other environment. The most accurate way to solvate a system is to place explicit water molecules in the simulation box with the molecules of interest and treat the water molecules as interacting particles like those in the other molecule(s). A variety of water models exist with increasing levels of complexity, representing water as a simple hard sphere (a united-atom model), as three separate particles with fixed bond angle, or even as four or five separate interaction centers to account for unpaired electrons on the oxygen atom. As water models grow more complex, related simulations grow more computationally intensive. A compromise method has been found in implicit solvation, which replaces the explicitly represented water molecules with a mathematical expression that reproduces the average behavior of water molecules (or other solvents such as lipids). This method is useful to prevent artifacts that arise from vacuum simulations and reproduces bulk solvent properties well, but cannot reproduce situations in which individual water molecules create specific interactions with a solute that are not well captured by the solvent model, such as water molecules that are part of the hydrogen bond network within a protein.
Software packages
This is a limited list; many more packages are available.
See also
References
Literature
External links
Molecular dynamics simulation methods revised
Molecular mechanics - it is simple
Molecular physics
Computational chemistry
Intermolecular forces
Molecular modelling | Molecular mechanics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,837 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Theoretical chemistry",
"Computational chemistry",
"Molecular modelling",
"Atomic, molecular, and optical physics"
] |
548,018 | https://en.wikipedia.org/wiki/Oxidoreductase | In biochemistry, an oxidoreductase is an enzyme that catalyzes the transfer of electrons from one molecule, the reductant, also called the electron donor, to another, the oxidant, also called the electron acceptor. This group of enzymes usually utilizes NADP+ or NAD+ as cofactors. Transmembrane oxidoreductases create electron transport chains in bacteria, chloroplasts and mitochondria, including respiratory complexes I, II and III. Some others can associate with biological membranes as peripheral membrane proteins or be anchored to the membranes through a single transmembrane helix.
Reactions
For example, an enzyme that catalyzed this reaction would be an oxidoreductase:
A⁻ + B → A + B⁻
In this example, A is the reductant (electron donor) and B is the oxidant (electron acceptor).
In biochemical reactions, the redox reactions are sometimes more difficult to see, such as this reaction from glycolysis:
Pi + glyceraldehyde-3-phosphate + NAD+ → NADH + H+ + 1,3-bisphosphoglycerate
In this reaction, NAD+ is the oxidant (electron acceptor), and glyceraldehyde-3-phosphate is the reductant (electron donor).
Nomenclature
Proper names of oxidoreductases are formed as "donor:acceptor oxidoreductase"; however, other names are much more common.
The common name is "donor dehydrogenase" when possible, such as glyceraldehyde-3-phosphate dehydrogenase for the second reaction above.
Common names are also sometimes formed as "acceptor reductase", such as NAD+ reductase.
"Donor oxidase" is a special case where O2 is the acceptor.
Classification
Oxidoreductases are classified as EC 1 in the EC number classification of enzymes. Oxidoreductases can be further classified into 21 subclasses:
EC 1.1 includes oxidoreductases that act on the CH-OH group of donors (alcohol oxidoreductases such as methanol dehydrogenase)
EC 1.2 includes oxidoreductases that act on the aldehyde or oxo group of donors
EC 1.3 includes oxidoreductases that act on the CH-CH group of donors (CH-CH oxidoreductases)
EC 1.4 includes oxidoreductases that act on the CH-NH2 group of donors (Amino acid oxidoreductases, Monoamine oxidase)
EC 1.5 includes oxidoreductases that act on CH-NH group of donors
EC 1.6 includes oxidoreductases that act on NADH or NADPH
EC 1.7 includes oxidoreductases that act on other nitrogenous compounds as donors
EC 1.8 includes oxidoreductases that act on a sulfur group of donors
EC 1.9 includes oxidoreductases that act on a heme group of donors
EC 1.10 includes oxidoreductases that act on diphenols and related substances as donors
EC 1.11 includes oxidoreductases that act on peroxide as an acceptor (peroxidases)
EC 1.12 includes oxidoreductases that act on hydrogen as donors
EC 1.13 includes oxidoreductases that act on single donors with incorporation of molecular oxygen (oxygenases)
EC 1.14 includes oxidoreductases that act on paired donors with incorporation of molecular oxygen
EC 1.15 includes oxidoreductases that act on superoxide radicals as acceptors
EC 1.16 includes oxidoreductases that oxidize metal ions
EC 1.17 includes oxidoreductases that act on CH or CH2 groups
EC 1.18 includes oxidoreductases that act on iron-sulfur proteins as donors
EC 1.19 includes oxidoreductases that act on reduced flavodoxin as a donor
EC 1.20 includes oxidoreductases that act on phosphorus or arsenic in donors
EC 1.21 includes oxidoreductases that act on X-H and Y-H to form an X-Y bond
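The subclass list above amounts to a lookup table keyed on the first two components of an EC number. A minimal sketch (descriptions abridged from the list above; the helper function is invented for this example):

```python
# Condensed lookup table of the EC 1.x subclasses listed above (descriptions abridged).
EC1_SUBCLASSES = {
    "1.1": "acting on the CH-OH group of donors",
    "1.2": "acting on the aldehyde or oxo group of donors",
    "1.3": "acting on the CH-CH group of donors",
    "1.4": "acting on the CH-NH2 group of donors",
    "1.5": "acting on the CH-NH group of donors",
    "1.6": "acting on NADH or NADPH",
    "1.7": "acting on other nitrogenous compounds as donors",
    "1.8": "acting on a sulfur group of donors",
    "1.9": "acting on a heme group of donors",
    "1.10": "acting on diphenols and related substances as donors",
    "1.11": "acting on peroxide as an acceptor",
    "1.12": "acting on hydrogen as donors",
    "1.13": "acting on single donors with incorporation of molecular oxygen",
    "1.14": "acting on paired donors with incorporation of molecular oxygen",
    "1.15": "acting on superoxide radicals as acceptors",
    "1.16": "oxidizing metal ions",
    "1.17": "acting on CH or CH2 groups",
    "1.18": "acting on iron-sulfur proteins as donors",
    "1.19": "acting on reduced flavodoxin as a donor",
    "1.20": "acting on phosphorus or arsenic in donors",
    "1.21": "acting on X-H and Y-H to form an X-Y bond",
}

def subclass_of(ec_number):
    """Return the subclass description for a full EC number such as '1.1.1.1'."""
    parts = ec_number.split(".")
    if parts[0] != "1":
        raise ValueError("not an oxidoreductase (EC 1)")
    return EC1_SUBCLASSES[".".join(parts[:2])]

print(subclass_of("1.1.1.1"))  # alcohol dehydrogenase falls under EC 1.1
```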
See also
Hydroxylase
List of enzymes
References
External links
EC 1 Introduction from the Department of Chemistry at Queen Mary, University of London
Bioinorganic chemistry | Oxidoreductase | [
"Chemistry",
"Biology"
] | 957 | [
"Biochemistry",
"Oxidoreductases",
"Bioinorganic chemistry"
] |
548,075 | https://en.wikipedia.org/wiki/Star%20lifting | Star lifting is any of several hypothetical processes by which a sufficiently advanced civilization (specifically, one of Kardashev-II or higher) could remove a substantial portion of a star's matter which can then be re-purposed, while possibly optimizing the star's energy output and lifespan at the same time. The term appears to have been coined by David Criswell.
Stars already lose a small flow of mass via solar wind, coronal mass ejections, and other natural processes. Over the course of a star's life on the main sequence this loss is usually negligible compared to the star's total mass; only at the end of a star's life when it becomes a red giant or a supernova is a large proportion of material ejected. The star lifting techniques that have been proposed would operate by increasing this natural plasma flow and manipulating it with magnetic fields.
Stars have deep gravity wells, so the energy required for such operations is large. For example, lifting solar material from the surface of the Sun to the planet Mercury requires 1.6 × 10¹³ J/kg. This energy could be supplied by the star itself, collected by a Dyson sphere; using 10% of the Sun's total power output would allow 5.9 × 10²¹ kilograms of matter to be lifted per year (0.0000003% of the Sun's total mass), or 8% of the mass of Earth's moon.
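For a sense of scale, the purely gravitational part of the lifting cost can be estimated from first principles. A minimal sketch (constants are rounded reference values; the result is an idealized lower bound that ignores rotation, radiation pressure, and all process inefficiencies, so engineering estimates can be far higher):

```python
# Ideal (lossless) gravitational cost of lifting 1 kg from the Sun's surface
# to Mercury's orbital distance.
GM_SUN = 1.327e20      # Sun's gravitational parameter, m^3/s^2
R_SUN = 6.957e8        # solar radius, m
A_MERCURY = 5.79e10    # Mercury's semi-major axis, m

delta_u = GM_SUN * (1.0 / R_SUN - 1.0 / A_MERCURY)  # J/kg
print(f"ideal lift cost: {delta_u:.2e} J/kg")       # ~1.9e11 J/kg

# At this ideal cost, 10% of the solar luminosity lifts a yearly mass of the
# same order as the 5.9e21 kg figure quoted above.
L_SUN = 3.846e26  # solar luminosity, W
YEAR = 3.156e7    # seconds per year
mass_per_year = 0.1 * L_SUN * YEAR / delta_u
print(f"mass lifted per year: {mass_per_year:.1e} kg")
```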
Methods for lifting material
Thermal-driven outflow
The simplest system for star lifting would increase the rate of solar wind outflow by directly heating small regions of the star's atmosphere, using any of a number of different means to deliver energy such as microwave beams, lasers, or particle beams – whatever proved to be most efficient for the engineers of the system. This would produce a large and sustained eruption similar to a solar flare at the target location, feeding the solar wind.
The resulting outflow would be collected by using a ring current around the star's equator to generate a powerful toroidal magnetic field with its dipoles over the star's rotational poles. This would deflect the star's solar wind into a pair of jets aligned along its rotational axis passing through a pair of magnetic rocket nozzles. The magnetic nozzles would convert some of the plasma's thermal energy into outward velocity, helping cool the outflow. The ring current required to generate this magnetic field would be generated by a ring of particle accelerator space stations in close orbit around the star's equator. These accelerators would be physically separate from each other but would exchange two counterdirected beams of oppositely charged ions with their neighbor on each side, forming a complete circuit around the star.
"Huff-n-Puff"
David Criswell proposed a modification to the polar jet system in which the magnetic field could be used to increase solar wind outflow directly, without requiring additional heating of the star's surface. He dubbed it the "Huff-n-Puff" method, inspired by the Big Bad Wolf's threats in the fairy tale of the Three Little Pigs.
In this system the ring of particle accelerators would not be in orbit, instead depending on the outward force of the magnetic field itself for support against the star's gravity. To inject energy into the star's atmosphere the ring current would first be temporarily shut down, allowing the particle accelerator stations to begin falling freely toward the star's surface. Once the stations had developed sufficient inward velocity the ring current would be reactivated and the resulting magnetic field would be used to reverse the stations' fall. This would "squeeze" the star, propelling stellar atmosphere through the polar magnetic nozzles. The ring current would be shut down again before the ring stations achieved enough outward velocity to throw them too far away from the star, and the star's gravity would be allowed to pull them back inward to repeat the cycle.
A single set of ring stations would result in a very intermittent flow. It is possible to smooth this flow out by using multiple sets of ring stations, with each set operating in a different stage of the Huff-n-Puff cycle at any given moment so that there is always one ring "squeezing". This would also smooth out the power requirements of the system over time.
Centrifugal acceleration
An alternative to the Huff-n-Puff method for using the toroidal magnetic field to increase solar wind outflow involves placing the ring stations in a polar orbit rather than an equatorial one. The two magnetic nozzles would then be located on the star's equator. To increase the rate of outflow through these two equatorial jets, the ring system would be rotated around the star at a rate significantly faster than the star's natural rotation. This would cause the stellar atmosphere swept up by the magnetic field to be flung outward.
This method suffers from a number of significant complications compared to the others. Rotating the ring in this manner would require the ring stations to use powerful rocket thrust, requiring both large rocket systems and a large amount of reaction mass. This reaction mass can be "recycled" by directing the rockets' exhausts so that it impacts the star's surface, but harvesting fresh reaction mass from the star's outflow and delivering it to the ring stations in sufficient quantity adds still more complexity to the system. Finally, the resulting jets would spiral outward from the star's equator rather than emerging straight from the poles; this could complicate harvesting it, as well as the arrangement of the Dyson sphere powering the system.
Harvesting lifted mass
The material lifted from a star will emerge in the form of plasma jets hundreds or thousands of astronomical units long, primarily composed of hydrogen and helium and highly diffuse by current engineering standards. The details of extracting useful materials from this stream and storing the vast quantities that would result have not been extensively explored. One possible approach is to purify useful elements from the jets using extremely large-scale mass spectrometry, cool them by laser cooling, and condense them on particles of dust for collection. An alternative method could involve using large solenoids to slow the jets down and separate out the components. Electricity would also be generated via this system. Small artificial gas giant planets could be constructed from excess hydrogen and helium to store it for future use. Excess gas could also be used to build new earthlike planets to custom specifications.
In the case of the Solar System, one possible use for material harvested from the Sun would be to add it to Jupiter. Increasing Jupiter's mass about 100-fold would turn it into a star, allowing it to supply energy to its moons and also to the asteroid belt. However, this would have to be done carefully to avoid catastrophically changing the orbits of other bodies in the Solar System.
Stellar husbandry
The lifespan of a star is determined by the size of its supply of nuclear "fuel" and the rate at which it uses up that fuel in fusion reactions in its core. Although larger stars have a larger supply of fuel, the increased core pressure resulting from that additional mass vastly increases the burn rate; thus large stars have a significantly shorter lifespan than small ones. Current theories of stellar dynamics also suggest that there is very little mixing between the bulk of a star's atmosphere and the material of its core, where fusion takes place, so most of a large star's fuel will never be used naturally. Small red dwarf stars, which are naturally fully convective, allow their core helium to mix with the outer layers of hydrogen which allows extremely long stellar lifespans on the order of trillions of years.
As a star's mass is reduced by star lifting its rate of nuclear fusion will decrease, reducing the amount of energy available to the star lifting process but also reducing the gravity that needs to be overcome. Theoretically, it would be possible to remove an arbitrarily large portion of a star's total mass given sufficient time. In this manner a civilization could control the rate at which its star uses fuel, optimizing the star's power output and lifespan to its needs. The hydrogen and helium extracted in the process could itself be utilized to fuel fusion reactors. Alternatively, the material could be assembled into additional smaller stars, to improve the efficiency of its use. Theoretically, most of the energy stored in the matter lifted from a star could be harvested if it is made into small black holes, via the mechanism of Hawking radiation.
In fiction
In the series Stargate Universe, the Ancient ship Destiny and the seed ships sent 2,000 years before Destiny are fueled by plasma from stars. The ship skims over the surface of a star just before dipping below the star's photosphere to scoop in plasma using its retractable collectors.
In the Knights of the Old Republic games of the Star Wars franchise, the Star Forge is capable of star lifting. In a way, Starkiller Base in the seventh canonical film performs star lifting to power its planet-destroying laser cannon, although it consumes the entire star to do so.
The novel Star Trek: Voyager – The Murdered Sun featured a reptilian race using the material from a star to sustain the opening of a wormhole. However, the novel depicted the process as shortening the star's lifespan precipitously rather than extending it.
In The Night's Dawn Trilogy by Peter F. Hamilton, the alien species the Kiint created an arc of custom made planets around their sun from mass extracted from their star.
In the Doctor Who episode "42" the crew of the starship Pentallian use a Sun scoop to draw matter from a star to use as fuel for their ship.
In the novella Palimpsest by Charles Stross, the Stasis uses star lifting to replace the core of the Sun with a black hole, producing a "necrostar" with vastly expanded lifespan.
In the novel The Time Ships by Stephen Baxter, the Morlocks create a Dyson sphere inside the orbit of Earth using matter lifted from the Sun.
In the short story The Golden Apples of the Sun by Ray Bradbury, humans fly the rocket Copa de Oro to the Sun and dip a mechanical cup into it to capture the star's warmth for Earth.
In the video game series Destiny, the mechanical race known as the Vex use star lifts to artificially extend the life of their Forge Star 2082 Volantis.
References
Interstellar Migration and the Human Experience, editors Ben R. Finney and Eric M. Jones, University of California Press, Chapter 4: Solar System Industrialization, by David R. Criswell
Star Lifting by Isaac Arthur
Hypothetical technology
Megastructures
Lifting | Star lifting | [
"Technology"
] | 2,167 | [
"Exploratory engineering",
"Megastructures"
] |
2,021,419 | https://en.wikipedia.org/wiki/Relativistic%20mechanics | In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while attempts for that of GR is quantum gravity, an unsolved problem in physics.
As with classical mechanics, the subject can be divided into "kinematics", the description of motion by specifying positions, velocities and accelerations, and "dynamics", a full description obtained by considering energies, momenta, and angular momenta and their conservation laws, together with the forces acting on or exerted by particles. There is however a subtlety: what appears to be "moving" and what is "at rest" (termed "statics" in classical mechanics) depends on the relative motion of observers who measure in frames of reference.
Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In addition to modifying notions of space and time, SR forces one to reconsider the concepts of mass, momentum, and energy all of which are important constructs in Newtonian mechanics. SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated. Consequently, another modification is the concept of the center of mass of a system, which is straightforward to define in classical mechanics but much less obvious in relativity – see relativistic center of mass for details.
The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor, which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four-dimensional spacetime, which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors, or four-dimensional tensors. The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector).
Relativistic kinematics
The relativistic four-velocity, that is the four-vector representing velocity in relativity, is defined as follows:

U = dX/dτ

In the above, τ is the proper time of the path through spacetime, called the world-line, followed by the object velocity the above represents, and

X = (x, y, z, ct)

is the four-position; the coordinates of an event. Due to time dilation, the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by:

dt = γ(v) dτ

where γ(v) is the Lorentz factor:

γ(v) = 1/√(1 − (v/c)²) = 1/√(1 − v·v/c²)

(either version may be quoted) so it follows:

U = γ(v)(vx, vy, vz, c)

The first three terms, excepting the factor of γ(v), make up the velocity as seen by the observer in their own reference frame. The γ(v) is determined by the velocity v between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. The proper time itself is invariant under Lorentz transformation, so to see what an observer in a different reference frame measures, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames.
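The defining invariance of the four-velocity can be checked numerically. A short sketch (illustrative values; time component placed last, matching the component ordering used in the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v, |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def four_velocity(vx, vy, vz):
    """Four-velocity gamma(v) * (vx, vy, vz, c), with the time component last."""
    v = math.sqrt(vx**2 + vy**2 + vz**2)
    g = gamma(v)
    return (g * vx, g * vy, g * vz, g * C)

# The Minkowski norm U.U equals c^2 at every speed (a Lorentz invariant):
for speed in (0.0, 0.5 * C, 0.9 * C, 0.999 * C):
    ux, uy, uz, ut = four_velocity(speed, 0.0, 0.0)
    norm = ut**2 - ux**2 - uy**2 - uz**2  # metric signature (-, -, -, +)
    print(f"v = {speed / C:.3f}c  gamma = {gamma(speed):8.3f}  U.U/c^2 = {norm / C**2:.9f}")
```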
Relativistic dynamics
Rest mass and relativistic mass
The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written m₀. If an object moves with velocity v in some other reference frame, the quantity m = γ(v)m₀ is often called the object's "relativistic mass" in that frame.
Some authors use m to denote rest mass, but for the sake of clarity this article will follow the convention of using m for relativistic mass and m₀ for rest mass.
Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught.
Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful.
See mass in special relativity for more information on this debate.
A particle whose rest mass is zero is called massless. Photons and gravitons are thought to be massless, and neutrinos are nearly so.
Relativistic energy and momentum
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR.
The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors:

P = m₀U = (p, E/c)

The energy and momentum of an object with invariant mass m₀, moving with velocity v with respect to a given frame of reference, are respectively given by

E = γ(v)m₀c²    p = γ(v)m₀v

The factor γ(v) comes from the definition of the four-velocity described above. The appearance of γ may be stated in an alternative way, which will be explained in the next section.
The kinetic energy, E_k, is defined as

E_k = E − m₀c² = (γ(v) − 1)m₀c²

and the speed as a function of kinetic energy is given by

v = c √(E_k(E_k + 2m₀c²)) / (E_k + m₀c²)
The spatial momentum may be written as p = mv, preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy. Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for γ explicitly through the 4-velocity or coordinate time.
A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by v, multiplying the momentum by c², and noting that the two expressions are equal. This yields

Ev = pc²

v may then be eliminated by dividing this equation by c and squaring,

E²v²/c² = p²c²

dividing the definition of energy by c and squaring,

E²/c² = γ(v)²m₀²c², so that E² − E²v²/c² = m₀²c⁴

and substituting:

E² − p²c² = m₀²c⁴, that is, E² = (pc)² + (m₀c²)²

This is the relativistic energy–momentum relation.
While the energy E and the momentum p depend on the frame of reference in which they are measured, the quantity E² − (pc)² is invariant. Its value is c² times the squared magnitude of the 4-momentum vector.
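The invariance can be verified numerically from the definitions of E and p. A short sketch (an electron at 0.8c is used purely as an illustrative value):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def energy_momentum(m0, v):
    """Total energy and momentum magnitude of a particle of rest mass m0 at speed v."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * m0 * C**2, g * m0 * v

m0 = 9.109e-31  # kg, electron rest mass
E, p = energy_momentum(m0, 0.8 * C)

# Check the energy-momentum relation E^2 = (pc)^2 + (m0 c^2)^2:
lhs = E**2
rhs = (p * C) ** 2 + (m0 * C**2) ** 2
print(lhs, rhs)  # equal up to floating-point rounding
```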
The invariant mass of a system may be written as

M₀ = √(E_total² − ‖p_total‖²c²) / c²
Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame.
Mass–energy equivalence
The relativistic energy–momentum equation holds for all particles, even for massless particles for which m₀ = 0. In this case:

E = pc

When substituted into Ev = c²p, this gives v = c: massless particles (such as photons) always travel at the speed of light.
Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel.
Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest (v = 0, p = 0), there is a non-zero mass remaining: m₀ = E/c².
The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, total energy increases and so does momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remains constant, because in both cases the increases in energy and momentum subtract from each other and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles.
The mass of systems and conservation of invariant mass
For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles:

E_total = Σ Eᵢ,    p_total = Σ pᵢ

The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame. In this special frame, the relativistic energy–momentum equation has p_total = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c²:

M₀ = E_total/c²
This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules, but also includes all the totaled energies in the system as well. Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it.
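The "box of light" idea mentioned above makes a good numerical check: two photons with opposite momenta are each massless, yet together they have a nonzero invariant mass. A minimal sketch (the function and values are illustrative, not from the source):

```python
C = 299_792_458.0  # speed of light, m/s

def system_invariant_mass(particles):
    """Invariant mass of a system given (E, px, py, pz) for each particle."""
    E = sum(q[0] for q in particles)
    px = sum(q[1] for q in particles)
    py = sum(q[2] for q in particles)
    pz = sum(q[3] for q in particles)
    m2 = (E / C**2) ** 2 - (px**2 + py**2 + pz**2) / C**2
    return max(m2, 0.0) ** 0.5  # clamp tiny negative rounding residue

# Two photons of equal energy moving in opposite directions: each satisfies
# E = pc (massless), but the pair's momenta cancel, so the system has
# invariant mass 2E/c^2.
E_ph = 1e-15  # J, one photon (illustrative value)
pair = [(E_ph, E_ph / C, 0.0, 0.0), (E_ph, -E_ph / C, 0.0, 0.0)]
print(system_invariant_mass([pair[0]]))  # ~0: a single photon is massless
print(system_invariant_mass(pair))       # 2*E_ph/c^2 > 0
```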
An increase in the energy of such a system which is caused by translating the system to an inertial frame which is not the center of momentum frame causes an increase in energy and momentum without an increase in invariant mass. E = m₀c², however, applies only to isolated systems in their center-of-momentum frame where momentum sums to zero.
Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy."
Closed (isolated) systems
In a "totally-closed" system (i.e., isolated system) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest ΔE = Δmc2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system.
Chemical and nuclear reactions
In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (for chemistry) or between nucleons in nuclei (for nuclear reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) gives the equivalent energy of heat and light which may be emitted if the reaction proceeds.
In chemistry, the mass differences associated with the emitted energy are around 10⁻⁹ of the molecular mass. However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science.
Center of momentum frame
The equation E = m₀c² applies only to isolated systems in their center of momentum frame. It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass (invariant mass) of the system.
Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and "matter", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.
For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them.
Angular momentum
In relativistic mechanics, the time-varying mass moment

N = (E/c²)x − tp

and orbital 3-angular momentum

L = x × p

of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle:

M = X ∧ P
where ∧ denotes the exterior product. This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution.
Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields.
Force
In special relativity, Newton's second law does not hold in the form F = ma, but it does if it is expressed as

F = dp/dt

where p = γ(v)m₀v is the momentum as defined above and m₀ is the invariant mass. Thus, the force is given by

F = γ(v)³m₀a∥ + γ(v)m₀a⊥
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Derivation
|-
|
Starting from

F = d[γ(v)m₀v]/dt

Carrying out the derivatives gives (using dγ/dt = γ³(v·a)/c²)

F = γ³m₀(v·a)v/c² + γm₀a

If the acceleration is separated into the part parallel to the velocity (a∥) and the part perpendicular to it (a⊥), so that a = a∥ + a⊥,

one gets

F = γ³m₀(v·a∥)v/c² + γm₀(a∥ + a⊥)

By construction a∥ and v are parallel, so (v·a∥)v is a vector with magnitude v²a∥ in the direction of v (and hence a∥) which allows the replacement:

(v·a∥)v = v²a∥

then, since γ²v²/c² + 1 = γ²,

F = γm₀(γ²v²/c² + 1)a∥ + γm₀a⊥ = γ³m₀a∥ + γm₀a⊥
|}
Consequently, in some old texts, γ(v)³m₀ is referred to as the longitudinal mass, and γ(v)m₀ is referred to as the transverse mass, which is numerically the same as the relativistic mass. See mass in special relativity.
If one inverts this to calculate acceleration from force, one gets

a = (1/(γ(v)m₀)) [F − (v·F)v/c²]
The force described in this section is the classical 3-D force which is not a four-vector. This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion. It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume) is a four-vector (density of weight +1) when combined with the negative of the density of power transferred.
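For motion with acceleration parallel to the velocity, the force formula above reduces to F = γ³m₀a, which can be checked against a direct numerical derivative of the momentum. A short sketch (all values illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def momentum(m0, v):
    """Relativistic momentum gamma(v) * m0 * v for 1-D motion."""
    return m0 * v / math.sqrt(1.0 - (v / C) ** 2)

# Parallel case: F = gamma^3 * m0 * a ("longitudinal mass" times acceleration).
m0, v, a = 1.0, 0.6 * C, 100.0           # kg, m/s, m/s^2 (illustrative values)
g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # gamma = 1.25 at 0.6c
F_formula = g**3 * m0 * a

# Independent check: central finite difference of p = gamma * m0 * v over a
# short time step along the same trajectory.
dt = 1e-2
F_numeric = (momentum(m0, v + a * dt) - momentum(m0, v - a * dt)) / (2 * dt)
print(F_formula, F_numeric)  # both ~195.3 N
```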
Torque
The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:

Γ = dM/dτ = X ∧ F

or in tensor components:

Γ_αβ = X_α F_β − X_β F_α

where F is the four-force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass.
Kinetic energy
The work-energy theorem says the change in kinetic energy is equal to the work done on the body. In special relativity:

ΔE_k = (γ₁ − γ₀)m₀c²
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Derivation
|-
|
Using F = d(γm₀v)/dt and the identity v·d(γv) = c² dγ (obtained by differentiating γ²c² − γ²v² = c²), the work done is

W = ∫ F·dx = ∫ v·d(γm₀v) = m₀c² ∫ dγ = (γ₁ − γ₀)m₀c²
|}
If in the initial state the body was at rest, so v₀ = 0 and γ₀(v₀) = 1, and in the final state it has speed v₁ = v, setting γ₁(v₁) = γ(v), the kinetic energy is then:

E_k = (γ(v) − 1)m₀c²

a result that can be directly obtained by subtracting the rest energy m₀c² from the total relativistic energy γ(v)m₀c².
Newtonian limit
The Lorentz factor γ(v) can be expanded into a Taylor series or binomial series for (v/c)² < 1, obtaining:

γ(v) = 1 + (1/2)(v/c)² + (3/8)(v/c)⁴ + ...

and consequently

E = γ(v)m₀c² ≈ m₀c² + (1/2)m₀v²,    p = γ(v)m₀v ≈ m₀v

For velocities much smaller than that of light, one can neglect the terms with c² and higher powers in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
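This agreement is easy to probe numerically: the fractional error of the Newtonian formula grows roughly as (3/4)(v/c)². A short sketch (unit rest mass and illustrative speeds):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_exact(m0, v):
    """Relativistic kinetic energy (gamma(v) - 1) * m0 * c^2."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (g - 1.0) * m0 * C**2

def kinetic_newton(m0, v):
    """Newtonian kinetic energy (1/2) * m0 * v^2."""
    return 0.5 * m0 * v**2

# The Newtonian value undershoots the exact one by ~(3/4)(v/c)^2:
for v in (3e6, 3e7, 0.5 * C):
    err = (kinetic_exact(1.0, v) - kinetic_newton(1.0, v)) / kinetic_exact(1.0, v)
    print(f"v/c = {v / C:.2f}: Newtonian formula low by {err:.1%}")
```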
See also
Twin paradox
Relativistic equations
Relativistic heat conduction
Classical electromagnetism and special relativity
Relativistic system (mathematics)
Relativistic Lagrangian mechanics
References
Notes
Further reading
General scope and special/general relativity
Concepts of Modern Physics (4th Edition), A. Beiser, McGraw-Hill (International), 1987
Electromagnetism and special relativity
Classical mechanics and special relativity
General relativity
Theory of relativity | Relativistic mechanics | [
"Physics"
] | 4,202 | [
"Theory of relativity"
] |
2,021,871 | https://en.wikipedia.org/wiki/Planar%20chirality | Planar chirality, also known as 2D chirality, is the special case of chirality for two dimensions.
Most fundamentally, planar chirality is a mathematical term, finding use in chemistry, physics and related physical sciences, for example, in astronomy, optics and metamaterials. Recent occurrences in the latter two fields are dominated by microwave and terahertz applications as well as micro- and nanostructured planar interfaces for infrared and visible light.
In chemistry
This term is used in chemistry contexts, e.g., for a chiral molecule lacking an asymmetric carbon atom, but possessing two non-coplanar rings that are each dissymmetric and which cannot easily rotate about the chemical bond connecting them: 2,2'-dimethylbiphenyl is perhaps the simplest example of this case. Planar chirality is also exhibited by molecules like (E)-cyclooctene, some di- or poly-substituted metallocenes, and certain monosubstituted paracyclophanes. Nature rarely provides planar chiral molecules, cavicularin being an exception.
Assigning the configuration of planar chiral molecules
To assign the configuration of a planar chiral molecule, begin by selecting the pilot atom, which is the highest-priority atom that is not in the plane but is directly attached to an atom in the plane. Next, assign priorities to the three adjacent in-plane atoms, starting with the atom attached to the pilot atom as priority 1, and preferentially assigning in order of highest priority where there is a choice. Then view the three atoms in question with the pilot atom in front of them. If the three atoms trace a clockwise direction when followed in order of priority, the molecule is assigned as R; when counterclockwise, it is assigned as S.
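The viewing test above can be expressed geometrically. The sketch below is a hypothetical helper, not a general CIP implementation: given right-handed 3-D coordinates for the pilot atom and the three in-plane atoms in decreasing priority, the sign of a scalar triple product serves as the clockwise/counterclockwise test.

```python
# Illustrative sketch: the priority sequence a1 -> a2 -> a3 appears clockwise
# (descriptor R) when viewed from the pilot atom's side of the plane exactly
# when det[a1 - p, a2 - p, a3 - p] > 0 in right-handed coordinates.

def planar_descriptor(pilot, a1, a2, a3):
    """Return 'R' or 'S' for in-plane atoms given in decreasing priority."""
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    r1, r2, r3 = sub(a1, pilot), sub(a2, pilot), sub(a3, pilot)
    # Scalar triple product r1 . (r2 x r3), expanded as a 3x3 determinant.
    det = (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
           - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
           + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))
    return 'R' if det > 0 else 'S'
```

For example, with the pilot atom at (0, 0, 1) above the z = 0 plane, the sequence (1, 0, 0) → (0, 1, 0) → (−1, −1, 0) appears counterclockwise from the pilot and is assigned S; swapping the last two atoms gives R.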
In optics and metamaterials
Chiral diffraction
Papakostas et al. observed in 2003 that planar chirality affects the polarization of light diffracted by arrays of planar chiral microstructures, where large polarization changes of opposite sign were detected in light diffracted from planar structures of opposite handedness.
Circular conversion dichroism
The study of planar chiral metamaterials has revealed that planar chirality is also associated with an optical effect in non-diffracting structures: the directionally asymmetric transmission (reflection and absorption) of circularly polarized waves. Planar chiral metamaterials, which are also anisotropic and lossy, exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back.
The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave and therefore the effect is referred to as circular conversion dichroism.
Just as the twist of a planar chiral pattern appears reversed for opposite directions of observation, planar chiral metamaterials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular, left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries.
Extrinsic planar chirality
Achiral components may form a chiral arrangement. In this case, chirality is not an intrinsic property of the components, but rather imposed extrinsically by their relative positions and orientations. This concept is typically applied to experimental arrangements, for example, an achiral (meta)material illuminated by a beam of light, where the illumination direction makes the whole experiment different from its mirror image. Extrinsic planar chirality results from illumination of any periodically structured interface for suitable illumination directions. Starting from normal incidence onto a periodically structured interface, extrinsic planar chirality arises from tilting the interface around any axis that does not coincide with a line of mirror symmetry of the interface. In the presence of losses, extrinsic planar chirality can result in circular conversion dichroism, as described above.
Chiral mirrors
Conventional mirrors reverse the handedness of circularly polarized waves upon reflection. In contrast, a chiral mirror reflects circularly polarized waves of one handedness without handedness change, while absorbing circularly polarized waves of the opposite handedness. A perfect chiral mirror exhibits circular conversion dichroism with ideal efficiency. Chiral mirrors can be realized by placing a planar chiral metamaterial in front of a conventional mirror. The concept has been exploited in holography to realize independent holograms for left-handed and right-handed circularly polarized electromagnetic waves. Active chiral mirrors that can be switched between left and right, or chiral mirror and conventional mirror, have been reported.
See also
Metamaterial
Chirality (electromagnetism)
References
Stereochemistry
Chirality | Planar chirality | [
"Physics",
"Chemistry",
"Biology"
] | 1,048 | [
"Pharmacology",
"Origin of life",
"Biochemistry",
"Stereochemistry",
"Chirality",
"Space",
"nan",
"Asymmetry",
"Biological hypotheses",
"Spacetime",
"Symmetry"
] |
2,022,061 | https://en.wikipedia.org/wiki/Axial%20chirality | In chemistry, axial chirality is a special case of chirality in which a molecule contains two pairs of chemical groups in a non-planar arrangement about an axis of chirality so that the molecule is not superposable on its mirror image. The axis of chirality (or chiral axis) is usually determined by a chemical bond that is constrained against free rotation either by steric hindrance of the groups, as in substituted biaryl compounds such as BINAP, or by torsional stiffness of the bonds, as in the C=C double bonds in allenes such as glutinic acid. Axial chirality is most commonly observed in substituted biaryl compounds wherein the rotation about the aryl–aryl bond is restricted so it results in chiral atropisomers, as in various ortho-substituted biphenyls, and in binaphthyls such as BINAP.
Axial chirality differs from central chirality (point chirality) in that axial chirality does not require a chiral center such as an asymmetric carbon atom, the most common form of chirality in organic compounds. Bonding to asymmetric carbon has the form Cabcd where a, b, c, and d must be distinct groups. Allenes have the form and the groups need not all be distinct as long as groups in each pair are distinct: abC=C=Cab is sufficient for the compound to be chiral, as in penta-2,3-dienedioic acid. Similarly, chiral atropisomers of the form may have some identical groups (), as in BINAP.
Nomenclature
The enantiomers of axially chiral compounds are usually given the stereochemical labels (Ra) and (Sa), sometimes abbreviated (R) and (S). The designations are based on the same Cahn–Ingold–Prelog priority rules used for tetrahedral stereocenters. The chiral axis is viewed end-on and the two "near" and two "far" substituents on the axial unit are ranked, but with the additional rule that the two near substituents have higher priority than the far ones.
Helical chirality
The chirality of a molecule that has a helical, propeller, or screw-shaped geometry is called helicity or helical chirality. The screw axis or the Dn or Cn principal symmetry axis is considered to be the axis of chirality. Some sources consider helical chirality to be a type of axial chirality, and some do not. IUPAC does not refer to helicity as a type of axial chirality.
Enantiomers having helicity may be labeled by using the prefix notation (P) ("plus") or Δ (from Latin dexter, "right") for a right-handed helix, and (M) ("minus") or Λ (Latin levo, "left") for a left-handed helix. The P/M or Δ/Λ terminology is used particularly for molecules that actually resemble a helix, such as the helicenes. This notation can also be applied to non-helical structures having axial chirality by considering the helical orientation of the Cahn–Ingold–Prelog group rankings of the "front" groups compared to the "back", when viewed from either direction along the axis.
External links
Axial Chirality in 6,6′-Dinitrobiphenyl-2,2′-dicarboxylic acid 3D representation.
References
Stereochemistry | Axial chirality | [
"Physics",
"Chemistry"
] | 762 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
2,022,356 | https://en.wikipedia.org/wiki/Microstrip | Microstrip is a type of electrical transmission line which can be fabricated with any technology where a conductor is separated from a ground plane by a dielectric layer known as "substrate". Microstrip lines are used to convey microwave-frequency signals.
Typical realisation technologies are printed circuit board (PCB), alumina coated with a dielectric layer or sometimes silicon or some other similar technologies. Microwave components such as antennas, couplers, filters, power dividers etc. can be formed from microstrip, with the entire device existing as the pattern of metallization on the substrate. Microstrip is thus much less expensive than traditional waveguide technology, as well as being far lighter and more compact. Microstrip was developed by ITT laboratories as a competitor to stripline (first published by Grieg and Engelmann in the December 1952 IRE proceedings).
The disadvantages of microstrips compared to waveguides are the generally lower power handling capacity, and higher losses. Also, unlike waveguides, microstrips are typically not enclosed, and are therefore susceptible to cross-talk and unintentional radiation.
For lowest cost, microstrip devices may be built on an ordinary FR-4 (standard PCB) substrate. However, it is often found that the dielectric losses in FR-4 are too high at microwave frequencies, and that the dielectric constant is not sufficiently tightly controlled. For these reasons, an alumina substrate is commonly used. From a monolithic integration perspective, microstrips built with integrated circuit/monolithic microwave integrated circuit technologies might be feasible; however, their performance might be limited by the dielectric layer(s) and conductor thickness available.
Microstrip lines are also used in high-speed digital PCB designs, where signals need to be routed from one part of the assembly to another with minimal distortion, and avoiding high cross-talk and radiation.
Microstrip is one of many forms of planar transmission line, others include stripline and coplanar waveguide, and it is possible to integrate all of these on the same substrate.
A differential microstrip—a balanced signal pair of microstrip lines—is often used for high-speed signals such as DDR2 SDRAM clocks, USB Hi-Speed data lines, PCI Express data lines, LVDS data lines, etc., often all on the same PCB. Most PCB design tools support such differential pairs.
Inhomogeneity
The electromagnetic wave carried by a microstrip line exists partly in the dielectric substrate, and partly in the air above it. In general, the dielectric constant of the substrate will be different (and greater) than that of the air, so that the wave is travelling in an inhomogeneous medium. In consequence, the propagation velocity is somewhere between the speed of radio waves in the substrate, and the speed of radio waves in air. This behaviour is commonly described by stating the effective dielectric constant of the microstrip; this being the dielectric constant of an equivalent homogeneous medium (i.e., one resulting in the same propagation velocity).
Further consequences of an inhomogeneous medium include:
The line will not support a true TEM wave; at non-zero frequencies, both the E and H fields will have longitudinal components (a hybrid mode). The longitudinal components are small however, and so the dominant mode is referred to as quasi-TEM.
The line is dispersive. With increasing frequency, the effective dielectric constant gradually climbs towards that of the substrate, so that the phase velocity gradually decreases. This is true even with a non-dispersive substrate material (the substrate dielectric constant will usually fall with increasing frequency).
The characteristic impedance of the line changes slightly with frequency (again, even with a non-dispersive substrate material). The characteristic impedance of non-TEM modes is not uniquely defined, and depending on the precise definition used, the impedance of microstrip either rises, falls, or falls then rises with increasing frequency. The low-frequency limit of the characteristic impedance is referred to as the quasi-static characteristic impedance, and is the same for all definitions of characteristic impedance.
The wave impedance varies over the cross-section of the line.
Microstrip lines radiate and discontinuity elements such as stubs and posts, which would be pure reactances in stripline, have a small resistive component due to the radiation from them.
Characteristic impedance
A closed-form approximate expression for the quasi-static characteristic impedance of a microstrip line was developed by Wheeler:
where is the effective width, which is the actual width of the strip, plus a correction to account for the non-zero thickness of the metallization:
Here is the impedance of free space, is the relative permittivity of substrate, is the width of the strip, is the thickness ("height") of substrate, and is the thickness of the strip metallization.
This formula is asymptotic to an exact solution in three different cases:
, any (parallel plate transmission line),
, (wire above a ground-plane), and
, .
It is claimed that for most other cases, the error in impedance is less than 1%, and is always less than 2%. By covering all aspect-ratios in one formula, Wheeler 1977 improves on Wheeler 1965 which gives one formula for and another for (thus introducing a discontinuity in the result at ).
Harold Wheeler disliked both the terms 'microstrip' and 'characteristic impedance', and avoided using them in his papers.
A number of other approximate formulae for the characteristic impedance have been advanced by other authors. However, most of these are applicable to only a limited range of aspect-ratios, or else cover the entire range piecewise.
In particular, the set of equations proposed by Hammerstad, who modifies on Wheeler, are perhaps the most often cited:
where is the effective dielectric constant, approximated as:
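The closed-form expressions referred to above were lost in extraction and are not reproduced in the text. As an illustrative sketch, assuming the commonly cited Hammerstad forms (which may differ in detail from the equations the article originally displayed), a quasi-static calculator might look like:

```python
# Sketch of the commonly cited Hammerstad closed-form approximations for a
# zero-thickness microstrip (assumed forms, stated in terms of W/h and er).
import math

def eps_eff(w_h, er):
    """Quasi-static effective dielectric constant for aspect ratio W/h."""
    ee = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / w_h)
    if w_h < 1:
        # Narrow-strip correction term used in the usual Hammerstad fit.
        ee += (er - 1) / 2 * 0.04 * (1 - w_h) ** 2
    return ee

def z0_microstrip(w_h, er):
    """Quasi-static characteristic impedance in ohms."""
    ee = eps_eff(w_h, er)
    if w_h <= 1:
        return 60.0 / math.sqrt(ee) * math.log(8.0 / w_h + w_h / 4.0)
    return (120.0 * math.pi
            / (math.sqrt(ee) * (w_h + 1.393 + 0.667 * math.log(w_h + 1.444))))
```

As a plausibility check, a substrate with er ≈ 4.3 and W/h ≈ 1.9 should come out near the common 50 Ω target, and impedance should fall as the strip widens.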
Effect of metallic enclosure
Microstrip circuits may require a metallic enclosure, depending upon the application. If the top cover of the enclosure encroaches on the microstrip, the characteristic impedance of the microstrip may be reduced due to the additional path for plate and fringing capacitance. When this happens, equations have been developed to adjust the characteristic impedance in air (εr = 1) of the microstrip, , where , and is the impedance of the uncovered microstrip in air. Equations for may be adjusted to account for the metallic cover and used to compute Zo with dielectric using the expression, , where is the adjusted for the metallic cover. Finite strip thickness compensation may be computed by substituting from above for for both and calculations, using all air calculations and for all dielectric material calculations. In the below expressions, c is the cover height, the distance from the top of the dielectric to the metallic cover.
The equation for is:
.
The equation for is
.
The equation for is
.
The equations are claimed to be accurate to within 1% for:
.
Suspended and inverted microstrip
When the dielectric layer is suspended over the lower ground plane by an air layer, the substrate is known as a suspended substrate, which is analogous to the layer D in the microstrip illustration at the top right of the page being nonzero. The advantages of using a suspended substrate over a traditional microstrip are reduced dispersion effects, increased design frequencies, wider strip geometries, reduced structural inaccuracies, more precise electrical properties, and a higher obtainable characteristic impedance. The disadvantage is that suspended substrates are larger than traditional microstrip substrates, and are more difficult to manufacture. When the conductor is placed below the dielectric layer, as opposed to above, the microstrip is known as an inverted microstrip.
Characteristic impedance
Pramanick and Bhartia documented a series of equations used to approximate the characteristic impedance (Zo) and effective dielectric constant (Ere) for suspended and inverted microstrips. The equations are accessible directly from the reference and are not repeated here.
John Smith worked out equations for the even and odd mode fringe capacitance for arrays of coupled microstrip lines in a suspended substrate using Fourier series expansion of the charge distribution, and provides 1960s style Fortran code that performs the function. Smith's work is detailed in the section below. Single microstrip lines behave like coupled microstrips with infinitely wide gaps. Therefore, Smith's equations may be used to compute the fringe capacitance of single microstrip lines by entering a large number for the gap into the equations, such that the other coupled microstrip no longer significantly affects the electrical characteristics of the single microstrip (typically a value of seven substrate heights or higher). Microstrips with no metallic enclosure may be computed by entering a large value for the metallic cover height, such that the metallic cover no longer significantly affects the microstrip electrical characteristics (typically 50 or more times the height of the conductor over the substrate). Inverted microstrips may be computed by swapping the metallic cover height and suspended height variables.
where B, C, and D are defined by the microstrip geometry that is shown in the upper right of the page.
To compute the Zo and Ere values for a suspended or inverted microstrip, the plate capacitance may added to the fringe capacitance for each side of the microstrip line to compute the total capacitance for both the dielectric case (εr) case and air case (εra), and the results may be used to compute Zo and Ere, as shown:
Bends
In order to build a complete circuit in microstrip, it is often necessary for the path of a strip to turn through a large angle. An abrupt 90° bend in a microstrip will cause a significant portion of the signal on the strip to be reflected back towards its source, with only part of the signal transmitted on around the bend. One means of effecting a low-reflection bend, is to curve the path of the strip in an arc of radius at least 3 times the strip-width. However, a far more common technique, and one which consumes a smaller area of substrate, is to use a mitred bend.
To a first approximation, an abrupt un-mitred bend behaves as a shunt capacitance placed between the ground plane and the bend in the strip. Mitring the bend reduces the area of metallization, and so removes the excess capacitance. The percentage mitre is the cut-away fraction of the diagonal between the inner and outer corners of the un-mitred bend.
The optimum mitre for a wide range of microstrip geometries has been determined experimentally by Douville and James. They find that a good fit for the optimum percentage mitre is given by
subject to and with the substrate dielectric constant . This formula is entirely independent of . The actual range of parameters for which Douville and James present evidence is and . They report a VSWR of better than 1.1 (i.e., a return loss better than −26 dB) for any percentage mitre within 4% (of the original ) of that given by the formula. At the minimum of 0.25, the percentage mitre is 98.4%, so that the strip is very nearly cut through.
For both the curved and mitred bends, the electrical length is somewhat shorter than the physical path-length of the strip.
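The Douville and James fit itself was lost in extraction. The sketch below assumes the commonly quoted form M = 52 + 65 * exp(-1.35 * W/h) percent, which reproduces the 98.4% value quoted above for W/h = 0.25:

```python
# Optimum percentage mitre for a 90-degree microstrip bend (assumed form of
# the Douville and James empirical fit, valid for W/h >= 0.25).
import math

def optimum_mitre_percent(w_h):
    """Return the optimum mitre as a percentage of the corner diagonal."""
    if w_h < 0.25:
        raise ValueError("fit valid for W/h >= 0.25")
    return 52.0 + 65.0 * math.exp(-1.35 * w_h)
```

Note how the mitre fraction falls as the strip gets wider relative to the substrate height: wide strips need less of the corner cut away.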
Discontinuous junctions
Other types of microstrip discontinuities besides bends (see above), also referred to as corners, are open ends, via holes (connections to the ground plane), steps in width, gaps between microstrips, tee junctions, and cross junctions. Extensive work has been performed developing models for these types of junctions, and are documented in publicly available literature, such as Quite universal circuit simulator (QUCS).
Coupled microstrips
Microstrip lines may be installed close enough to other microstrip lines such that electrical coupling interactions may exist between the lines. This may come about inadvertently as lines are laid out, or intentionally to shape a desired transfer function, or design a distributed filter. If the two lines are identical in width, they may be characterized by a coupled transmission line even and odd mode analysis.
Characteristic impedance
Closed form expressions for even and odd mode characteristic impedance (Zoe, Zoo) and effective dielectric constant (εree, εreo) have been developed with defined accuracy under stated conditions. They are available from the references and not repeated here.
Fourier series solution
John Smith worked out equations for the even and odd mode fringe capacitance for arrays of coupled microstrip lines with a metallic cover, including suspended microstrips, using Fourier series expansion of the charge distribution, and provides 1960s style Fortran code that performs the function. Uncovered microstrips are supported by assigning a cover height of generally 50 or more times the conductor height above the ground plane. Inverted microstrips are supported by reversing the cover height and suspended height variables. Smith's equations are advantageous in that they are theoretically valid for all values of conductor width, conductor separation, dielectric constant, cover height, and dielectric suspension height.
Smith's equations contain a bottleneck (equation 37 on page 429) where the inverse of an elliptic integral ratio must be solved, , where is the complete elliptic integral of the first kind, is known, and is the variable that must be solved. Smith provides an elaborate search algorithm that usually converges on a solution for . However, Newton's method or interpolation tables may provide a more rapid and comprehensive solution for .
To compute the even and odd mode Zo and εre values for a coupled microstrip, the plate capacitance is added to the even and odd mode fringe capacitance for the inside of the microstrip and the uncoupled fringe capacitance of the outer sides. The uncoupled fringe capacitance may be computed by setting the gap or separation between the conductors to be infinitely wide, which may be approximated by a value of 7 or more times the conductor height above the ground plane. The even and odd mode Zo and εre are then computed as functions of the even and odd mode capacitances for the dielectric case (εr) and the air case (εr = 1), as shown:
.
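The relations indicated above reduce to the standard capacitance-based definitions. The sketch below assumes per-unit-length capacitances C (with the dielectric present) and Ca (same geometry with the dielectric replaced by air); applied per mode, it yields the even and odd mode values.

```python
# Standard capacitance-based line parameters: eps_re = C/Ca and
# Zo = 1 / (c0 * sqrt(C * Ca)), with C and Ca in farads per meter.
import math

C0 = 299_792_458.0  # free-space speed of light, m/s

def line_params(c_diel, c_air):
    """Return (Zo in ohms, effective dielectric constant)."""
    eps_re = c_diel / c_air
    zo = 1.0 / (C0 * math.sqrt(c_diel * c_air))
    return zo, eps_re
```

As a check, an air line (C = Ca ≈ 66.7 pF/m) gives εre = 1 and Zo ≈ 50 Ω.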
John Smith's detailed solution
Smith's Fourier series requires the inverse solution, k, to the elliptic integral ratio, , where K() is the complete elliptic integral of the first kind. Although Smith provides an elaborate search algorithm to find k, faster and more accurate convergence may be achieved with Newton's method, or interpolation tables may be employed. Since becomes extremely nonlinear as k approaches 0 and 1, Newton's method works better on the function . Once the value klg is solved for, k is obtained by .
The Newton's method expression to solve for klg is as follows, using standard derivative rules. Elliptic integral derivatives may be found on the elliptic integral page:
An interpolation table to find klg and k is shown below.
For values of , it is useful to apply the relation shown in the table to maximize the linearity of the , or , function for use in Newton's method or interpolation. For example, .
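As an alternative to Smith's search algorithm, Newton's method, or interpolation tables, the ratio r(k) = K(k)/K(k'), with k' = sqrt(1 − k²), can be inverted by simple bisection, since r(k) is monotone increasing on (0, 1). The sketch below is not Smith's code; it evaluates K via the arithmetic-geometric mean so no tables are needed.

```python
# Invert the complete-elliptic-integral ratio r(k) = K(k)/K(k') by bisection.
import math

def ellipk(k):
    """Complete elliptic integral of the first kind, modulus k in [0, 1).

    Uses K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).
    """
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def k_ratio(k):
    """The ratio K(k) / K(k'), monotone increasing in k."""
    return ellipk(k) / ellipk(math.sqrt(1.0 - k * k))

def invert_ratio(r, tol=1e-12):
    """Find k in (0, 1) such that K(k)/K(k') = r."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if k_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A useful check is r = 1, which corresponds exactly to k = 1/√2 by the symmetry k = k'.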
The total even and odd mode capacitances are computed, based on Smith's work, using elliptic integrals and Jacobi elliptic functions. Smith uses the third fast Jacobi elliptic function estimation algorithm found on the elliptic functions page.
To obtain the total capacitance:
where may be approximated by or more times the conductor height above the ground plane.
Example and accuracy comparison
Smith compares the accuracy of his Fourier series capacitance solutions to published tables of the time. However, a more modern approach is to compare the even and odd mode impedance and effective dielectric constant results to those obtained from electromagnetic simulations such as Sonnet. The below example is performed under the following conditions: B = 2.5 mm, C = 0.4 mm, D = 0.6 mm, W = 1.5 mm, G = 0.5 mm, Er = 12, where B, C, and D are defined by the microstrip geometry that is shown in the upper right of the page. The example begins by computing the value of log(k), then k, and goes on to use k, εr, substrate geometry, and conductor geometry to compute the capacitances and subsequently the even and odd mode impedance and effective dielectric constant (Zoe, Zoo, εre and εro).
The Sonnet simulation is performed with a high-resolution grid of , reference planes of 7 mm on each side, and simulates the coupled line along a 10 mm length. The Y-parameter results are translated to even and odd mode Zo and εr by algebraically inverting the Y-parameter equations for coupled transmission lines.
Asymmetrically coupled microstrips
When two microstrip lines exist close enough in proximity for coupling to occur but are not symmetrical in width, even and odd mode analysis is not directly applicable to characterize the lines. In this case, the lines are generally characterized by their self and mutual inductance and capacitance. The defining techniques and expressions are available from the references.
Multiple coupled microstrips
In some cases, multiple microstrip lines may be coupled together. When this happens, each microstrip line will have a self capacitance and a gap capacitance to all of the other lines, including nonadjacent microstrips. Analysis is similar to the asymmetric coupled case above, but the capacitance and inductance matrices will be of size N × N, where N is the number of microstrips coupled together. Nonadjacent microstrip capacitance may be accurately calculated using the finite element method (FEM).
Losses
Attenuation due to losses from the conductor and dielectric is generally considered when simulating a microstrip. Total losses are a function of microstrip length, so attenuation is generally calculated in units of attenuation per unit length, with total losses calculated as attenuation × length. Attenuation is usually expressed in nepers, although some applications may use units of dB. When the microstrip characteristic impedance (Zo), effective dielectric constant (Ere), and total losses () are all known, the microstrip may be modeled as a standard transmission line.
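The unit handling above can be sketched as follows (the function name is ours; the conversion 1 Np = 20/ln 10 ≈ 8.6859 dB is standard):

```python
# Total line loss from per-unit-length attenuation (illustrative).
import math

NP_TO_DB = 20.0 / math.log(10.0)  # 1 neper ~ 8.685889638 dB

def total_loss_db(alpha_np_per_m, length_m):
    """Total loss in dB given attenuation in Np/m and length in m."""
    return alpha_np_per_m * length_m * NP_TO_DB
```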
Conductor losses
Conductor losses are defined by the "specific resistance" or "resistivity" of the conductor material, and generally expressed as in the literature. Each conductor material generally has a published resistivity associated with it. For example, the common conductor material of copper has a published resistivity of . E. Hammerstad and Ø. Jensen proposed the following expressions for attenuation due to conductor losses:
and
= sheet resistance of the conductor
= current distribution factor
= correction term due to surface roughness
= vacuum permeability ()
= specific resistance, or resistivity, of the conductor
= effective (rms) surface roughness of the substrate
= skin depth
= wave impedance in vacuum ()
Note that if surface roughness is neglected, the disappears from the expression, and it frequently is.
Some authors use conductor thickness instead of skin depth to compute the sheet resistance, Rs. When this is the case,
where t is conductor thickness.
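The skin-depth-based sheet resistance can be sketched numerically. This assumes the standard relations δ = sqrt(ρ/(π f μ0)) and Rs = ρ/δ for a conductor much thicker than a skin depth; the copper resistivity is an illustrative assumed value.

```python
# Skin depth and sheet resistance for a thick conductor (illustrative).
import math

MU0 = 4.0 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(rho, f):
    """Skin depth in meters for resistivity rho (ohm*m) at frequency f (Hz)."""
    return math.sqrt(rho / (math.pi * f * MU0))

def sheet_resistance(rho, f):
    """Sheet resistance Rs = rho / delta, in ohms per square.

    When the conductor thickness t is less than a skin depth, some authors
    use Rs = rho / t instead, as noted in the text above.
    """
    return rho / skin_depth(rho, f)

# Copper (rho ~ 1.68e-8 ohm*m) at 1 GHz: delta ~ 2.1 um, Rs ~ 8.1 mOhm/sq.
```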
Dielectric losses
Dielectric losses are defined by the "loss tangent" of the dielectric material, and generally expressed as in the literature. Each dielectric material generally has a published loss tangent associated with it. For example, the common dielectric material alumina has a published loss tangent of depending on the frequency. Welch and Pratt, and Schneider, proposed the following expressions for attenuation due to dielectric losses:
.
Dielectric losses are in general much less than conductor losses and are frequently neglected in some applications.
Coupled microstrip losses
Coupled microstrip losses may be estimated using the same even and odd mode analysis as is used for the characteristic impedance, dielectric constant, and effective dielectric constant of single line microstrips. The even and odd modes each have independently calculated conductor and dielectric loss values, computed from the corresponding single line Zo and Ere.
Wheeler proposed a conductor loss solution that takes into account the separation between the conductors:
where:
h = height of the conductor over the ground plane
S = separation between the conductors
W = width of the conductors
t = thickness of the conductors.
The partial derivatives with respect to the conductor's separation, thickness, and width may be computed numerically.
See also
Distributed element filter
Slow-wave coupler
Spurline, a microstrip notch-filter
References
External links
Microstrip in Microwave Encyclopedia
Microstrip Analysis/Synthesis Calculator
Microwave technology
Planar transmission lines
Printed circuit board manufacturing | Microstrip | [
"Engineering"
] | 4,437 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |