id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
4,278,245 | https://en.wikipedia.org/wiki/Deformation%20quantization | In mathematics and physics, deformation quantization roughly amounts to finding a (quantum) algebra whose classical limit is a given (classical) algebra such as a Lie algebra or a Poisson algebra.
In physics
Intuitively, a deformation of a mathematical object is a family of the same kind of objects that depend on some parameter(s).
Here, it provides rules for how to deform the "classical" commutative algebra of observables to a quantum non-commutative algebra of observables.
The basic setup in deformation theory is to start with an algebraic structure (say a Lie algebra) and ask: Does there exist a one- or multi-parameter family of similar structures, such that for an initial value of the parameter(s) one recovers the structure (Lie algebra) one started with? (The oldest illustration of this may be the realization of Eratosthenes in the ancient world that a flat Earth was deformable to a spherical Earth, with deformation parameter 1/R⊕.) E.g., one may define a noncommutative torus as a deformation quantization through a ★-product to implicitly address all convergence subtleties (usually not addressed in formal deformation quantization). Insofar as the algebra of functions on a space determines the geometry of that space, the study of the star product leads to the study of a non-commutative geometry deformation of that space.
In the context of the above flat phase-space example, the star product (Moyal product, actually introduced by Groenewold in 1946), ★ħ, of a pair of phase-space functions is specified by
requiring that the Wigner–Weyl transform carry it to the ordinary operator product.
The star product is not commutative in general, but goes over to the ordinary commutative product of functions in the limit ħ → 0. As such, it is said to define a deformation of the commutative algebra of phase-space functions.
For the Weyl-map example above, the ★-product may be written in terms of the Poisson bracket as
Here, Π is the Poisson bivector, an operator defined such that its powers are
and
where {f1, f2} is the Poisson bracket. More generally,
where the coefficients appearing in the expansion are binomial coefficients.
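For orientation, a standard way to write the flat-phase-space star product and its ħ-expansion is sketched below in conventional notation (left/right derivative conventions assumed; this is the standard Moyal form, not a quotation of the article's own formulas):

```latex
% Moyal/Groenewold star product on flat phase space (standard form; notation assumed)
f_1 \star f_2
  \;=\; f_1 \,
        \exp\!\Big(\tfrac{i\hbar}{2}\,
          \big(\overleftarrow{\partial}_x \overrightarrow{\partial}_p
             - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\big)\Big)\, f_2
  \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}\Big(\frac{i\hbar}{2}\Big)^{\!n}
        f_1\,\overleftrightarrow{\Pi}^{\,n}\, f_2 ,
\qquad
f_1\,\overleftrightarrow{\Pi}\, f_2 \;=\; \{f_1, f_2\}.
```

The leading term is the ordinary product, and the first correction is proportional to the Poisson bracket, which is what makes this a deformation of the commutative algebra.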
Thus, e.g., Gaussians compose hyperbolically,
or
etc.
These formulas are predicated on coordinates in which the Poisson bivector is constant (plain flat Poisson brackets). For the general formula on arbitrary Poisson manifolds, cf. the Kontsevich quantization formula.
Antisymmetrization of this ★-product yields the Moyal bracket, the proper quantum deformation of the Poisson bracket, and the phase-space isomorph (Wigner transform) of the quantum commutator in the more usual Hilbert-space formulation of quantum mechanics. As such, it provides the cornerstone of the dynamical equations of observables in this phase-space formulation.
There results a complete phase space formulation of quantum mechanics, completely equivalent to the Hilbert-space operator representation, with star-multiplications paralleling operator multiplications isomorphically.
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables such as the above with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the above Weyl map facilitates recognition of quantum mechanics as a deformation (generalization, cf. correspondence principle) of classical mechanics, with deformation parameter ħ. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into General Relativity, with deformation parameter Schwarzschild-radius/characteristic-dimension. Conversely, group contraction leads to the vanishing-parameter undeformed theories—classical limits.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
See also
Deligne's conjecture on Hochschild cohomology
Poisson manifold
Formality theorem; see also https://mathoverflow.net/questions/32889/a-few-questions-about-kontsevich-formality
Kontsevich quantization formula
References
Further reading
https://ncatlab.org/nlab/show/deformation+quantization
https://ncatlab.org/nlab/show/formal+deformation+quantization
Mathematical quantization
Mathematical physics | Deformation quantization | [
"Physics",
"Mathematics"
] | 1,006 | [
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical quantization",
"Mathematical physics"
] |
4,279,403 | https://en.wikipedia.org/wiki/Correlation%20immunity | In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs are uncorrelated with some subset of its inputs. Specifically, a Boolean function is said to be correlation-immune of order m if every subset of m or fewer variables in is statistically independent of the value of .
Definition
A function f(x1, …, xn) is m-th order correlation immune if, for independent binary random variables X1, …, Xn, the random variable Z = f(X1, …, Xn) is statistically independent of every random vector (Xi1, …, Xim) with 1 ≤ i1 < … < im ≤ n.
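As an illustration of this definition, the sketch below brute-forces the correlation-immunity order of a small Boolean function given as a Python callable, under uniform, independent inputs (the function and helper names are illustrative, not from the article):

```python
from itertools import combinations, product

def correlation_immunity_order(f, n):
    """Largest m such that the output of f is independent of every subset of
    m (or fewer) input variables, assuming uniform, independent inputs."""
    inputs = list(product((0, 1), repeat=n))
    outputs = [f(x) for x in inputs]
    total_ones = sum(outputs)                     # proportional to overall P[f = 1]
    best = 0
    for m in range(1, n + 1):
        for subset in combinations(range(n), m):
            for values in product((0, 1), repeat=m):
                # restrict to inputs where the chosen variables take 'values'
                sel = [o for x, o in zip(inputs, outputs)
                       if all(x[i] == v for i, v in zip(subset, values))]
                # compare conditional P[f = 1] with the marginal (exact integer check)
                if sum(sel) * len(inputs) != total_ones * len(sel):
                    return best
        best = m
    return best

# Example: f(x) = x0 XOR x1 XOR x2 is 2nd-order correlation immune (it is linear).
print(correlation_immunity_order(lambda x: x[0] ^ x[1] ^ x[2], 3))  # -> 2
```

The check works because, for a binary output, independence from a vector of inputs is equivalent to the conditional probability of outputting 1 equalling the marginal probability for every assignment of that vector.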
Results in cryptography
When used in a stream cipher as a combining function for linear feedback shift registers, a Boolean function with low-order correlation-immunity is more susceptible to a correlation attack than a function with correlation immunity of high order.
Siegenthaler showed that the correlation immunity m of a Boolean function of algebraic degree d of n variables satisfies m + d ≤ n; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible correlation immunity. Furthermore, if the function is balanced then m + d ≤ n − 1.
References
Further reading
Cusick, Thomas W. & Stanica, Pantelimon (2009). Cryptographic Boolean Functions and Applications. Academic Press.
Cryptography
Boolean algebra | Correlation immunity | [
"Mathematics",
"Engineering"
] | 256 | [
"Boolean algebra",
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Mathematical logic",
"Fields of abstract algebra"
] |
4,279,406 | https://en.wikipedia.org/wiki/Mixotricha%20paradoxa | Mixotricha paradoxa is a species of protozoan that lives inside the gut of the Australian termite species Mastotermes darwiniensis.
It is composed of five different organisms: three bacterial ectosymbionts live on its surface for locomotion and at least one endosymbiont lives inside to help digest cellulose in wood to produce acetate for its host(s).
Mixotricha mitochondria degenerated into hydrogenosomes and mitosomes and lost the ability to produce energy aerobically by oxidative phosphorylation. However, the mitochondria-derived nuclear genes were conserved.
Discovery
The name was given by the Australian biologist J.L. Sutherland, who first described Mixotricha in 1933. The name means "the paradoxical being with mixed-up hairs" because this protist has both cilia and flagella, which was not thought to be possible for protists.
Behavior
Mixotricha is a species of protozoan that lives inside the gut of the Australian termite species Mastotermes darwiniensis and has multiple bacterial symbionts.
Mixotricha is a large protozoan that contains hundreds of thousands of bacteria. It is itself an endosymbiont and digests cellulose for the termite.
Trichomonads like Mixotricha reproduce by a special form of longitudinal fission, leading to large numbers of trophozoites in a relatively short time. Cysts never form, so transmission from one host to another is always based on direct contact between the sites they occupy.
Anatomy
Species of the order Trichomonadida typically have four to six flagella at the cell's apical pole, one of which is recurrent - that is, it runs back along the cell surface, giving the appearance of an undulating membrane. Mixotricha paradoxa has four weak flagella that serve as rudders.
It has four large flagella at the front end, three pointing forwards and one backward.
The basal bodies are also bacteria, not spirochaetes but oval, pill-shaped bacteria. There is a one-to-one relationship between a bracket, a spirochaete, and a basal bacterium. Each bracket has one spirochaete running through it and one pill bacterium at its base as the basal body. It has not been shown definitely, but the basal bodies could also be making cellulases that digest wood.
Endosymbionts for biochemical processes
At least one endosymbiont lives inside the protist to help digest cellulose and lignin, a major component of the wood the termites eat. The cellulose gets converted to glucose then to acetate, and the lignin is digested directly to acetate. The acetate probably crosses the termite gut membrane to be digested later.
Mixotricha forms a mutualistic relationship with bacteria living inside the termite. There are a total of four species of bacterial symbionts. It has spherical bacteria inside the cell, which take over the function of mitochondria, which Mixotricha lacks. Mixotricha mitochondria degenerated and lost the ability to produce energy aerobically by oxidative phosphorylation. Mitochondrial relics include hydrogenosomes, which produce hydrogen, and small structures called mitosomes.
Ectosymbionts for movement
Three surface colonising bacteria are anchored on the surface.
The apparent flagella and cilia actually belong to two different single-celled organisms: the flagella to the protist itself, and the "cilia" to bacteria attached to its surface. The protist belongs to an archaic group that used to be called Archezoa, but this term is no longer in fashion. It has four weak flagella, which serve as a rudder.
While Mixotricha has four anterior flagella, it does not use them for locomotion, but more for steering. For locomotion, about 250,000 hairlike Treponema spirochaetes, a species of helical bacteria, are attached to the cell surface and provide the cell with cilia-like movements.
The coordinated wavelength of these cilia-like undulations suggests that the spirochaetes are somehow in touch with each other.
Mixotricha also has rod-shaped bacteria arranged in an ordered pattern on the surface of the cell.
Each spirochaete has its own little emplacement, called a 'bracket'. Spirochetes move continuously forwards or backwards but when they are attached they move in one direction.
It has been suggested that sperm tails might have their origin in spirochaetes, but the evidence that cilia (undulipodia) derive from symbiotic bacteria is generally found unpersuasive.
Genome
Mixotricha have five genomes, as they form very close symbiotic relationships with four types of bacteria. It is a good example organism for symbiogenesis and nestedness.
There are two spirochaete species and one rod-shaped bacterium on its surface, one endosymbiotic bacterium inside to digest cellulose, and the host nucleus.
References
Metamonads
Symbiosis
Endosymbiotic events
Protists described in 1933
Metamonad species | Mixotricha paradoxa | [
"Biology"
] | 1,064 | [
"Biological interactions",
"Endosymbiotic events",
"Symbiosis",
"Behavior"
] |
628,183 | https://en.wikipedia.org/wiki/Goldstone%20boson | In particle and condensed matter physics, Goldstone bosons or Nambu–Goldstone bosons (NGBs) are bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in particle physics within the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. In condensed matter physics such bosons are quasiparticles and are known as Anderson–Bogoliubov modes.
These spinless bosons correspond to the spontaneously broken internal symmetry generators, and are characterized by the quantum numbers of these.
They transform nonlinearly (shift) under the action of these generators, and can thus be excited out of the asymmetric vacuum by these generators. Thus, they can be thought of as the excitations of the field in the broken symmetry directions in group space—and are massless if the spontaneously broken symmetry is not also broken explicitly.
If, instead, the symmetry is not exact, i.e. if it is explicitly broken as well as spontaneously broken, then the Nambu–Goldstone bosons are not massless, though they typically remain relatively light; they are then called pseudo-Goldstone bosons or pseudo–Nambu–Goldstone bosons (abbreviated PNGBs).
Goldstone's theorem
Goldstone's theorem examines a generic continuous symmetry which is spontaneously broken; i.e., its currents are conserved, but the ground state is not invariant under the action of the corresponding charges. Then, necessarily, new massless (or light, if the symmetry is not exact) scalar particles appear in the spectrum of possible excitations. There is one scalar particle—called a Nambu–Goldstone boson—for each generator of the symmetry that is broken, i.e., that does not preserve the ground state. The Nambu–Goldstone mode is a long-wavelength fluctuation of the corresponding order parameter.
By virtue of their special properties in coupling to the vacuum of the respective symmetry-broken theory, vanishing momentum ("soft") Goldstone bosons involved in field-theoretic amplitudes make such amplitudes vanish ("Adler zeros").
Examples
Natural
In fluids, the phonon is longitudinal and it is the Goldstone boson of the spontaneously broken Galilean symmetry. In solids, the situation is more complicated; the Goldstone bosons are the longitudinal and transverse phonons and they happen to be the Goldstone bosons of spontaneously broken Galilean, translational, and rotational symmetry with no simple one-to-one correspondence between the Goldstone modes and the broken symmetries.
In magnets, the original rotational symmetry (present in the absence of an external magnetic field) is spontaneously broken such that the magnetization points in a specific direction. The Goldstone bosons then are the magnons, i.e., spin waves in which the local magnetization direction oscillates.
The pions are the pseudo-Goldstone bosons that result from the spontaneous breakdown of the chiral-flavor symmetries of QCD effected by quark condensation due to the strong interaction. These symmetries are further explicitly broken by the masses of the quarks so that the pions are not massless, but their mass is significantly smaller than typical hadron masses.
The longitudinal polarization components of the W and Z bosons correspond to the Goldstone bosons of the spontaneously broken part of the electroweak symmetry SU(2)⊗U(1), which, however, are not observable. Because this symmetry is gauged, the three would-be Goldstone bosons are absorbed by the three gauge bosons corresponding to the three broken generators; this gives these three gauge bosons a mass and the associated necessary third polarization degree of freedom. This is described in the Standard Model through the Higgs mechanism. An analogous phenomenon occurs in superconductivity, which served as the original source of inspiration for Nambu, namely, the photon develops a dynamical mass (expressed as magnetic flux exclusion from a superconductor), cf. the Ginzburg–Landau theory.
Primordial fluctuations during inflation can be viewed as Goldstone bosons arising due to the spontaneous symmetry breaking of time translation symmetry of a de Sitter universe. These fluctuations in the inflaton scalar field subsequently seed cosmic structure formation.
Ricciardi and Umezawa proposed in 1967 a general theory (quantum brain) about the possible brain mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. This theory was subsequently extended in 1995 by Giuseppe Vitiello taking into account that the brain is an "open" system (the dissipative quantum model of the brain). Applications of spontaneous symmetry breaking and of Goldstone's theorem to biological systems, in general, have been published by E. Del Giudice, S. Doglia, M. Milani, and G. Vitiello, and by E. Del Giudice, G. Preparata and G. Vitiello. Mari Jibu and Kunio Yasue and Giuseppe Vitiello, based on these findings, discussed the implications for consciousness.
Theory
Consider a complex scalar field φ, with the constraint that φ*φ = v², a constant. One way to impose a constraint of this sort is by including a potential interaction term in its Lagrangian density,
and taking the limit as the coupling of that potential term becomes infinite. This is called the "Abelian nonlinear σ-model".
The constraint, and the action, below, are invariant under a U(1) phase transformation, δφ = iεφ. The field can be redefined to give a real scalar field (i.e., a spin-zero particle) without any constraint by
where θ is the Nambu–Goldstone boson field (up to a normalization factor of v), and the U(1) symmetry transformation effects a shift on θ, namely a translation by a constant,
but does not preserve the ground state (i.e. the above infinitesimal transformation does not annihilate it—the hallmark of invariance), as evident in the charge of the current below.
Thus, the vacuum is degenerate and noninvariant under the action of the spontaneously broken symmetry.
The corresponding Lagrangian density is given by
and thus
Note that the constant term in the Lagrangian density has no physical significance, and the other term in it is simply the kinetic term for a massless scalar.
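A minimal sketch of this construction in standard notation (field parametrization and normalization assumed, not quoted from the article) is:

```latex
% Abelian (U(1)) Goldstone construction, schematic (conventions assumed)
\phi(x) \;=\; v\,e^{\,i\theta(x)/v}, \qquad \phi^{*}\phi \;=\; v^{2}\ \ \text{(constant)},
\\[4pt]
\text{U(1):}\quad \phi \;\to\; e^{\,i\epsilon}\,\phi
\quad\Longleftrightarrow\quad \theta \;\to\; \theta + \epsilon\,v
\quad\text{(a shift of }\theta\text{)},
\\[4pt]
\mathcal{L} \;=\; \partial_{\mu}\phi^{*}\,\partial^{\mu}\phi
            \;=\; \partial_{\mu}\theta\,\partial^{\mu}\theta
\qquad\text{(a massless kinetic term for }\theta\text{, up to normalization).}
```

The key point is that the phase symmetry acts on θ as a shift, so no potential term for θ can be invariant, leaving θ exactly massless.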
The symmetry-induced conserved U(1) current is
The charge, Q, resulting from this current shifts θ and thus maps the ground state to a new, degenerate, ground state. Thus, a given vacuum will shift to a different, shifted vacuum. The current connects the original vacuum with the one-Nambu–Goldstone-boson state.
In general, in a theory with several scalar fields, the Nambu–Goldstone mode is massless, and parameterises the curve of possible (degenerate) vacuum states. Its hallmark under the broken symmetry transformation is a nonvanishing vacuum expectation value of the transformation increment of a field, an order parameter, at some ground state |0〉 chosen at the minimum of the potential. In principle the vacuum should be the minimum of the effective potential, which takes into account quantum effects; however, it is equal to the classical potential to first approximation. Symmetry dictates that all variations of the potential with respect to the fields in all symmetry directions vanish. The vacuum value of the first-order variation in any direction vanishes, as just seen; while the vacuum value of the second-order variation must also vanish, as follows. Vanishing vacuum values of field symmetry transformation increments add no new information.
By contrast, however, nonvanishing vacuum expectations of transformation increments specify the relevant (Goldstone) null eigenvectors of the mass matrix,
and hence the corresponding zero-mass eigenvalues.
Goldstone's argument
The principle behind Goldstone's argument is that the ground state is not unique. Normally, by current conservation, the charge operator for any symmetry current is time-independent,
Acting with the charge operator on the vacuum either annihilates the vacuum, if that is symmetric; else, if not, as is the case in spontaneous symmetry breaking, it produces a zero-frequency state out of it, through its shift transformation feature illustrated above. Actually, here, the charge itself is ill-defined, cf. the Fabri–Picasso argument below.
But its better-behaved commutators with fields, that is, the nonvanishing transformation shifts, are, nevertheless, time-invariant,
thus generating a zero-frequency delta function in its Fourier transform. (This ensures that inserting a complete set of intermediate states in a nonvanishing current commutator can lead to vanishing time-evolution only when one or more of these states is massless.)
Thus, if the vacuum is not invariant under the symmetry, action of the charge operator produces a state which is different from the vacuum chosen, but which has zero frequency. This is a long-wavelength oscillation of a field which is nearly stationary: there are physical states with zero frequency, so that the theory cannot have a mass gap.
This argument is further clarified by taking the limit carefully. If an approximate charge operator acting in a huge but finite region is applied to the vacuum,
a state with approximately vanishing time derivative is produced,
Assuming a nonvanishing mass gap, the frequency of any state like the above, which is orthogonal to the vacuum, is at least the size of the gap,
Letting the region become large leads to a contradiction. Consequently, the mass gap must vanish. However, this argument fails when the symmetry is gauged, because then the symmetry generator is only performing a gauge transformation. A gauge-transformed state is the same exact state, so that acting with a symmetry generator does not get one out of the vacuum (see Higgs mechanism).
Fabri–Picasso Theorem. The charge Q does not properly exist in the Hilbert space, unless it annihilates the vacuum.
The argument requires both the vacuum and the charge to be translationally invariant.
Consider the correlation function of the charge with itself,
so the integrand in the right hand side does not depend on the position.
Thus, its value is proportional to the total space volume, which diverges, unless the symmetry is unbroken and the charge annihilates the vacuum. Consequently, Q does not properly exist in the Hilbert space.
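A compact way to spell out the Fabri–Picasso argument, in standard conventions (assumed here), is:

```latex
% Fabri–Picasso argument, schematic (conventions assumed)
Q \;=\; \int d^{3}x \; j^{0}(x), \qquad P^{\mu}|0\rangle = 0, \qquad [P^{\mu}, Q] = 0,
\\[4pt]
\langle 0|\,Q\,Q\,|0\rangle
 \;=\; \int d^{3}x\;\langle 0|\, j^{0}(x)\,Q\,|0\rangle
 \;=\; \int d^{3}x\;\langle 0|\, e^{iP\cdot x} j^{0}(0)\, e^{-iP\cdot x}\,Q\,|0\rangle
 \;=\; \int d^{3}x\;\langle 0|\, j^{0}(0)\,Q\,|0\rangle ,
\\[4pt]
\text{so } \;\|\,Q|0\rangle\,\|^{2} \;\propto\; V \;\longrightarrow\; \infty
\quad\text{unless}\quad Q\,|0\rangle = 0 .
```

Translation invariance makes the integrand position-independent, so the norm of Q|0⟩ scales with the spatial volume and diverges in the infinite-volume limit whenever the symmetry is spontaneously broken.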
Infraparticles
There is an arguable loophole in the theorem. If one reads the theorem carefully, it only states that there exist non-vacuum states with arbitrarily small energies. Take for example a chiral N = 1 super QCD model with a nonzero squark VEV which is conformal in the IR. The chiral symmetry is a global symmetry which is (partially) spontaneously broken. Some of the "Goldstone bosons" associated with this spontaneous symmetry breaking are charged under the unbroken gauge group and hence, these composite bosons have a continuous mass spectrum with arbitrarily small masses but yet there is no Goldstone boson with exactly zero mass. In other words, the Goldstone bosons are infraparticles.
Extensions
Nonrelativistic theories
A version of Goldstone's theorem also applies to nonrelativistic theories. It essentially states that, for each spontaneously broken symmetry, there corresponds some quasiparticle which is typically a boson and has no energy gap. In condensed matter these Goldstone bosons are also called gapless modes, i.e. states whose energy dispersion relation vanishes as the momentum goes to zero, the nonrelativistic analogue of massless particles such as the photon, whose dispersion relation likewise vanishes at zero momentum. Note that the relevant energy in the nonrelativistic condensed-matter case is the excitation energy above the ground state (measured relative to the chemical potential), not a relativistic rest energy. However, two different spontaneously broken generators may now give rise to the same Nambu–Goldstone boson.
As a first example, an antiferromagnet has 2 Goldstone bosons and a ferromagnet has 1 Goldstone boson, even though in both cases the symmetry is broken from SO(3) to SO(2). For the antiferromagnet the dispersion is linear in the momentum and the ground-state expectation value of the broken-symmetry charges (the net magnetization) is zero, whereas for the ferromagnet the dispersion is quadratic in the momentum and the ground-state expectation value of the magnetization is nonzero, i.e. the charge itself is spontaneously broken in the ground state.
As a second example, in a superfluid, both the U(1) particle number symmetry and Galilean symmetry are spontaneously broken. However, the phonon is the Goldstone boson for both.
Still in regard to symmetry breaking, there is also a close analogy between gapless modes in condensed matter and the Higgs boson, e.g. in the paramagnet-to-ferromagnet phase transition.
Breaking of spacetime symmetries
In contrast to the case of the breaking of internal symmetries, when spacetime symmetries such as Lorentz, conformal, rotational, or translational symmetries are broken, the order parameter need not be a scalar field, but may be a tensor field, and the number of independent massless modes may be fewer than the number of spontaneously broken generators. For a theory with an order parameter that spontaneously breaks a spacetime symmetry, the number of broken generators minus the number of non-trivial independent solutions to
is the number of Goldstone modes that arise. For internal symmetries, the above equation has no non-trivial solutions, so the usual Goldstone theorem holds. When solutions do exist, this is because the Goldstone modes are linearly dependent among themselves, in that the resulting mode can be expressed as gradients of another mode. Since the spacetime dependence of the solutions is in the direction of the unbroken generators, when all translation generators are broken, no non-trivial solutions exist and the number of Goldstone modes is once again exactly the number of broken generators.
In general, the phonon is effectively the Nambu–Goldstone boson for spontaneously broken translation symmetry.
Nambu–Goldstone fermions
Spontaneously broken global fermionic symmetries, which occur in some supersymmetric models, lead to Nambu–Goldstone fermions, or goldstinos. These have spin , instead of 0, and carry all quantum numbers of the respective supersymmetry generators broken spontaneously.
Spontaneous supersymmetry breaking smashes up ("reduces") supermultiplet structures into the characteristic nonlinear realizations of broken supersymmetry, so that goldstinos are superpartners of all particles in the theory, of any spin, and the only superpartners, at that. That is to say, two non-goldstino particles are connected only to goldstinos through supersymmetry transformations, and not to each other, even if they were so connected before the breaking of supersymmetry. As a result, the masses and spin multiplicities of such particles are then arbitrary.
See also
Pseudo-Goldstone boson
Majoron
Higgs mechanism
Mermin–Wagner theorem
Vacuum expectation value
Noether's theorem
Notes
References
Bosons
Quantum field theory
Mathematical physics
Physics theorems
Subatomic particles with spin 0 | Goldstone boson | [
"Physics",
"Mathematics"
] | 3,155 | [
"Quantum field theory",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Bosons",
"Subatomic particles",
"Mathematical physics",
"Matter",
"Physics theorems"
] |
628,198 | https://en.wikipedia.org/wiki/Majoron | In particle physics, majorons (named after Ettore Majorana) are a hypothetical type of Goldstone boson that are conjectured to mediate the neutrino mass violation of lepton number or B − L in certain high energy collisions such as
e⁻ + e⁻ → W⁻ + W⁻ + J
where two electrons collide to form two W bosons and the majoron J. The U(1)B–L symmetry is assumed to be global and spontaneously broken, so that the majoron is not "eaten up" by a gauge boson. Majorons were originally formulated in four dimensions by Yuichi Chikashige, Rabindra Mohapatra and Roberto Peccei to understand neutrino masses by the seesaw mechanism and are being searched for in the neutrino-less double beta decay process. The name majoron was suggested by Graciela Gelmini as a derivative of the last name Majorana with the suffix -on typical of particle names like electron, proton, neutron, etc. There are theoretical extensions of this idea into supersymmetric theories and theories involving extra compactified dimensions. If majorons propagate through the extra spatial dimensions, the detectable number of majoron creation events varies accordingly. Mathematically, majorons may be modeled by allowing them to propagate through the extra dimensions while all other Standard Model fields are fixed at an orbifold point.
Searches
Experiments studying double beta decay have set limits on decay modes that emit majorons.
NEMO has studied a variety of elements. EXO and KamLAND-Zen have set half-life limits for majoron-emitting decays in xenon.
See also
List of hypothetical particles
References
Further reading
Bosons
Hypothetical elementary particles
Subatomic particles with spin 0 | Majoron | [
"Physics"
] | 348 | [
"Matter",
"Unsolved problems in physics",
"Bosons",
"Particle physics",
"Particle physics stubs",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Subatomic particles"
] |
630,017 | https://en.wikipedia.org/wiki/Feynman%E2%80%93Kac%20formula | The Feynman–Kac formula, named after Richard Feynman and Mark Kac, establishes a link between parabolic partial differential equations and stochastic processes. In 1947, when Kac and Feynman were both faculty members at Cornell University, Kac attended a presentation of Feynman's and remarked that the two of them were working on the same thing from different directions. The Feynman–Kac formula resulted, which proves rigorously the real-valued case of Feynman's path integrals. The complex case, which occurs when a particle's spin is included, is still an open question.
It offers a method of solving certain partial differential equations by simulating random paths of a stochastic process. Conversely, an important class of expectations of random processes can be computed by deterministic methods.
Theorem
Consider the partial differential equation
defined for all real x and all t in [0, T], subject to the terminal condition
where μ, σ, ψ, V and f are known functions, T is a parameter, and u is the unknown. Then the Feynman–Kac formula expresses u(x, t) as a conditional expectation under a probability measure Q
where X is an Itô process satisfying
and W is a Wiener process (also called Brownian motion) under Q.
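Spelled out in the usual notation (assumed here, matching the symbols above), the standard one-dimensional statement reads:

```latex
% Feynman–Kac formula, standard 1-D statement (notation assumed)
\frac{\partial u}{\partial t}(x,t)
 + \mu(x,t)\,\frac{\partial u}{\partial x}(x,t)
 + \tfrac{1}{2}\,\sigma^{2}(x,t)\,\frac{\partial^{2} u}{\partial x^{2}}(x,t)
 - V(x,t)\,u(x,t) + f(x,t) \;=\; 0, \qquad u(x,T)=\psi(x),
\\[6pt]
u(x,t) \;=\;
\mathbb{E}^{Q}\!\left[
   \int_{t}^{T} e^{-\int_{t}^{r} V(X_{\tau},\tau)\,d\tau}\, f(X_{r},r)\,dr
 \;+\; e^{-\int_{t}^{T} V(X_{\tau},\tau)\,d\tau}\,\psi(X_{T})
 \;\middle|\; X_{t}=x \right],
\\[6pt]
dX_{\tau} \;=\; \mu(X_{\tau},\tau)\,d\tau + \sigma(X_{\tau},\tau)\,dW_{\tau}^{Q}.
```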
Intuitive interpretation
Suppose that the position of a particle evolves according to the diffusion process
Let the particle incur "cost" at a rate of f(x, t) when it is at location x at time t. Let it incur a final cost ψ(X_T) at the terminal time T.
Also, allow the particle to decay. If the particle is at location x at time t, then it decays with rate V(x, t). After the particle has decayed, all future cost is zero.
Then u(x, t) is the expected cost-to-go, if the particle starts at X_t = x.
Partial proof
A proof that the above formula is a solution of the differential equation is long, difficult and not presented here. It is however reasonably straightforward to show that, if a solution exists, it must have the above form. The proof of that lesser result is as follows:
Let be the solution to the above partial differential equation. Applying the product rule for Itô processes to the process
one gets:
Since
the third term is of higher order in dt and can be dropped. We also have that
Applying Itô's lemma to , it follows that
The first term contains, in parentheses, the above partial differential equation and is therefore zero. What remains is:
Integrating this equation from to , one concludes that:
Upon taking expectations, conditioned on , and observing that the right side is an Itô integral, which has expectation zero, it follows that:
The desired result is obtained by observing that:
and finally
Remarks
The proof above that a solution must have the given form is essentially the standard one, with modifications to account for the discounting and running-cost terms.
The expectation formula above is also valid for N-dimensional Itô diffusions. The corresponding partial differential equation for u then involves the drift vector μ and the diffusion matrix γ = σσ′, where σ′ denotes the transpose of σ.
More succinctly, letting A be the infinitesimal generator of the diffusion process, the equation states that ∂u/∂t + Au − Vu + f = 0, with the same terminal condition.
This expectation can then be approximated using Monte Carlo or quasi-Monte Carlo methods.
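As an illustration of that Monte Carlo approach, the sketch below estimates u(x, t) by simulating Euler–Maruyama paths of the diffusion; all function names and parameters are illustrative assumptions, not part of the article.

```python
import numpy as np

def feynman_kac_mc(x0, t0, T, mu, sigma, V, f, psi,
                   n_paths=100_000, n_steps=200, seed=0):
    """Monte Carlo estimate of u(x0, t0) from the Feynman-Kac expectation."""
    rng = np.random.default_rng(seed)
    dt = (T - t0) / n_steps
    x = np.full(n_paths, float(x0))
    discount = np.ones(n_paths)     # running exp(-integral of V)
    payoff = np.zeros(n_paths)      # accumulated running-cost term
    t = t0
    for _ in range(n_steps):
        payoff += discount * f(x, t) * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += mu(x, t) * dt + sigma(x, t) * dW      # Euler-Maruyama step
        discount *= np.exp(-V(x, t) * dt)
        t += dt
    payoff += discount * psi(x)                    # terminal cost
    return payoff.mean()

# Sanity check: with mu = 0, sigma = 1, V = f = 0 and psi(x) = x^2,
# u(x, t) = x^2 + (T - t); here that is 1 + 1 = 2 (approximately).
print(feynman_kac_mc(1.0, 0.0, 1.0,
                     mu=lambda x, t: 0 * x, sigma=lambda x, t: 1 + 0 * x,
                     V=lambda x, t: 0 * x, f=lambda x, t: 0 * x,
                     psi=lambda x: x ** 2))
```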
When originally published by Kac in 1949, the Feynman–Kac formula was presented as a formula for determining the distribution of certain Wiener functionals. Suppose we wish to find the expected value of an exponential functional of the path in the case where x(τ) is some realization of a diffusion process starting at x(0) = 0. The Feynman–Kac formula says that this expectation is equivalent to the integral of a solution to a diffusion equation, under suitable conditions on the functions involved.
The Feynman–Kac formula can also be interpreted as a method for evaluating functional integrals of a certain form. If
where the integral is taken over all random walks, then the functional integral can be expressed in terms of a solution to a parabolic partial differential equation with the corresponding initial condition.
Applications
Finance
In quantitative finance, the Feynman–Kac formula is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks and zero-coupon bond prices in affine term structure models.
For example, consider a stock price undergoing geometric Brownian motion
where r is the risk-free interest rate and σ is the volatility. Equivalently, by Itô's lemma,
Now consider a European call option on such a stock, expiring at time T with strike K. At expiry, it is worth max(S_T − K, 0). Then, the risk-neutral price of the option, at time t and stock price S, is the discounted risk-neutral expectation of this payoff.
Plugging into the Feynman–Kac formula, we obtain the Black–Scholes equation:
where the role of the decay rate V in the Feynman–Kac formula is played by the constant risk-free rate r, which discounts the payoff.
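Written out in standard notation (assumed here, with C(S, t) the option value), the equation and its terminal condition are:

```latex
% Black–Scholes equation obtained from Feynman–Kac (standard form; notation assumed)
\frac{\partial C}{\partial t}
 + r S\,\frac{\partial C}{\partial S}
 + \tfrac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} C}{\partial S^{2}}
 - r\,C \;=\; 0 ,
\qquad
C(S,T) \;=\; \max(S-K,\,0).
```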
More generally, consider an option expiring at time T with some payoff function of the final stock price. The same calculation shows that its price satisfies the same Black–Scholes equation, with the terminal condition replaced by that payoff.
Some other options like the American option do not have a fixed expiry. Some options have value at expiry determined by the past stock prices. For example, an average option has a payoff that is not determined by the underlying price at expiry but by the average underlying price over some predetermined period of time. For these, the Feynman–Kac formula does not directly apply.
Quantum mechanics
In quantum chemistry, it is used to solve the Schrödinger equation with the pure diffusion Monte Carlo method.
See also
Itô's lemma
Kunita–Watanabe inequality
Girsanov theorem
Kolmogorov backward equation
Kolmogorov forward equation (also known as Fokker–Planck equation)
Stochastic mechanics
References
Further reading
Richard Feynman
Stochastic processes
Parabolic partial differential equations
Articles containing proofs
Mathematical finance | Feynman–Kac formula | [
"Mathematics"
] | 1,099 | [
"Applied mathematics",
"Mathematical finance",
"Articles containing proofs"
] |
630,099 | https://en.wikipedia.org/wiki/Beam%20%28structure%29 | A beam is a structural element that primarily resists loads applied laterally across the beam's axis (an element designed to carry a load pushing parallel to its axis would be a strut or column). Its mode of deflection is primarily by bending, as loads produce reaction forces at the beam's support points and internal bending moments, shear, stresses, strains, and deflections. Beams are characterized by their manner of support, profile (shape of cross-section), equilibrium conditions, length, and material.
Beams are traditionally descriptions of building or civil engineering structural elements, where the beams are horizontal and carry vertical loads. However, any structure may contain beams, such as automobile frames, aircraft components, machine frames, and other mechanical or structural systems. Any structural element, in any orientation, that primarily resists loads applied laterally across the element's axis is a beam.
Overview
Historically a beam is a squared timber, but may also be made of metal, stone, or a combination of wood and metal such as a flitch beam. Beams primarily carry vertical gravitational forces, but they are also used to carry horizontal loads such as those due to earthquake or wind, or in tension to resist rafter thrust (tie beam) or compression (collar beam). The loads carried by a beam are transferred to columns, walls, or girders, then to adjacent structural compression members, and eventually to the ground. In light frame construction, joists may rest on beams.
Classification based on supports
In engineering, beams are of several types:
Simply supported – a beam supported on the ends which are free to rotate and have no moment resistance.
Fixed or encastré (encastrated) – a beam supported on both ends and restrained from rotation.
Overhanging – a simple beam extending beyond its support on one end.
Double overhanging – a simple beam with both ends extending beyond its supports on both ends.
Continuous – a beam extending over more than two supports.
Cantilever – a projecting beam fixed only at one end.
Trussed – a beam strengthened by adding a cable or rod to form a truss.
Beam on spring supports
Beam on elastic foundation
Second moment of area (area moment of inertia)
In the beam equation, the variable I represents the second moment of area or moment of inertia: it is the sum, along the axis, of dA·r2, where r is the distance from the neutral axis and dA is a small patch of area. It measures not only the total area of the beam section, but the square of each patch's distance from the axis. A larger value of I indicates a stiffer beam, more resistant to bending.
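Written out, with r the distance from the neutral axis, this reads as follows; the rectangular-section value is the standard textbook example (width b, depth h):

```latex
% Second moment of area about the neutral axis; rectangular-section example
I \;=\; \int_{A} r^{2}\, dA ,
\qquad
I_{\text{rectangle}} \;=\; \int_{-h/2}^{h/2} b\, y^{2}\, dy \;=\; \frac{b\,h^{3}}{12}.
```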
Stress
Loads on a beam induce internal compressive, tensile and shear stresses (assuming no torsion or axial loading). Typically, under gravity loads, the beam bends into a slightly circular arc, with its original length compressed at the top to form an arc of smaller radius, while correspondingly stretched at the bottom to enclose an arc of larger radius in tension. This is known as sagging; while a configuration with the top in tension, for example over a support, is known as hogging. The axis of the beam retaining its original length, generally halfway between the top and bottom, is under neither compression nor tension, and defines the neutral axis (dotted line in the beam figure).
Above the supports, the beam is exposed to shear stress. There are some reinforced concrete beams in which the concrete is entirely in compression with tensile forces taken by steel tendons. These beams are known as prestressed concrete beams, and are fabricated to produce a compression more than the expected tension under loading conditions. High strength steel tendons are stretched while the beam is cast over them. Then, when the concrete has cured, the tendons are slowly released and the beam is immediately under eccentric axial loads. This eccentric loading creates an internal moment, and, in turn, increases the moment-carrying capacity of the beam. Prestressed beams are commonly used on highway bridges.
The primary tool for structural analysis of beams is the Euler–Bernoulli beam equation. This equation accurately describes the elastic behaviour of slender beams where the cross sectional dimensions are small compared to the length of the beam. For beams that are not slender a different theory needs to be adopted to account for the deformation due to shear forces and, in dynamic cases, the rotary inertia. The beam formulation adopted here is that of Timoshenko and comparative examples can be found in NAFEMS Benchmark Challenge Number 7. Other mathematical methods for determining the deflection of beams include "method of virtual work" and the "slope deflection method". Engineers are interested in determining deflections because the beam may be in direct contact with a brittle material such as glass. Beam deflections are also minimized for aesthetic reasons. A visibly sagging beam, even if structurally safe, is unsightly and to be avoided. A stiffer beam (high modulus of elasticity and/or one of higher second moment of area) creates less deflection.
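For reference, the Euler–Bernoulli beam equation mentioned above takes the standard static form below, where E is the modulus of elasticity, I the second moment of area, w(x) the deflection, and q(x) the distributed load (notation assumed):

```latex
% Euler–Bernoulli beam equation (static form, standard notation)
\frac{d^{2}}{dx^{2}}\!\left( E I \,\frac{d^{2}w}{dx^{2}} \right) \;=\; q(x),
\qquad\text{which for constant } EI \text{ reduces to}\qquad
E I \,\frac{d^{4}w}{dx^{4}} \;=\; q(x).
```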
Mathematical methods for determining the beam forces (internal forces of the beam and the forces that are imposed on the beam support) include the "moment distribution method", the force or flexibility method and the direct stiffness method.
General shapes
Most beams in reinforced concrete buildings have rectangular cross sections, but a more efficient cross section for a beam is an I- or H-shaped section, which is typically seen in steel construction. Because of the parallel axis theorem and the fact that most of the material is away from the neutral axis, the second moment of area of the beam increases, which in turn increases the stiffness.
An I-beam is only the most efficient shape in one direction of bending: up and down, looking at the profile as an 'I'. If the beam is bent side to side, it functions as an 'H', where it is less efficient. The most efficient shape for both directions in 2D is a box (a square shell); the most efficient shape for bending in any direction, however, is a cylindrical shell or tube. For unidirectional bending, the I-beam or wide flange beam is superior.
Efficiency means that for the same cross sectional area (volume of beam per length) subjected to the same loading conditions, the beam deflects less.
Other shapes, like L-beam (angles), C (channels), T-beam and double-T or tubes, are also used in construction when there are special requirements.
Walers and struts
This system provides horizontal bracing for small trenches, ensuring the secure installation of utilities. It is specifically designed to work in conjunction with steel trench sheets.
Thin walled
A thin walled beam is a very useful type of beam (structure). The cross section of thin walled beams is made up from thin panels connected among themselves to create closed or open cross sections of a beam (structure). Typical closed sections include round, square, and rectangular tubes. Open sections include I-beams, T-beams, L-beams, and so on. Thin walled beams exist because their bending stiffness per unit cross sectional area is much higher than that for solid cross sections such as a rod or bar. In this way, stiff beams can be achieved with minimum weight. Thin walled beams are particularly useful when the material is a composite laminate. Pioneer work on composite laminate thin walled beams was done by Librescu.
The torsional stiffness of a beam is greatly influenced by its cross sectional shape. For open sections, such as I sections, warping deflections occur which, if restrained, greatly increase the torsional stiffness.
See also
Airy points
Beam engine
Building code
Cantilever
Classical mechanics
Deflection (engineering)
Elasticity (physics) and Plasticity (physics)
Euler–Bernoulli beam theory
Finite element method in structural mechanics
Flexural modulus
Free body diagram
Influence line
Materials science and Strength of materials
Moment (physics)
Poisson's ratio
Post and lintel
Shear strength
Statics and Statically indeterminate
Stress (mechanics) and Strain (materials science)
Thin-shell structure
Timber framing
Truss
Ultimate tensile strength and Hooke's law
Yield (engineering)
References
Further reading
External links
American Wood Council: Free Download Library Wood Construction Data
Introduction to Structural Design , U. Virginia Dept. Architecture
Glossary
Course Sampler Lectures, Projects, Tests
Beams and Bending review points (follow using next buttons)
Structural Behavior and Design Approaches lectures (follow using next buttons)
U. Wisconsin–Stout, Strength of Materials online lectures, problems, tests/solutions, links, software
Beams I – Shear Forces and Bending Moments
Bridge components
Solid mechanics
Statics
Structural system | Beam (structure) | [
"Physics",
"Technology",
"Engineering"
] | 1,780 | [
"Structural engineering",
"Solid mechanics",
"Statics",
"Building engineering",
"Classical mechanics",
"Structural system",
"Mechanics",
"Bridge components",
"Components"
] |
630,142 | https://en.wikipedia.org/wiki/Transdermal%20implant | Transdermal implants, or dermal piercings, are a form of body modification used both in a medical and aesthetic context that, in contrast to subdermal implants, consist of an object placed partially below and partially above the skin, thus implanted transdermal. Two techniques are prevalent using post-like and microdermal implants respectively.
Although the skin around such implants generally heals as if it were a piercing, in the body piercing community these types of modification are commonly considered fairly "heavy", due not only to the complexity of the procedure but also to its potential social implications.
Procedure
When the procedure is done using a post-like implant, an incision is made a small distance from the site. The skin is then lifted and the implant is passed through. Then, a hole is opened at the site for it to pass through, and it is moved so that the top part fills the hole. The implants used for this are generally small and not textured in any way except rounding.
If a more graphic implant is desired, it is generally done in two parts. First, the base is inserted the same way a single-part would be, except that the base implant is threaded. It may either stick out like a bolt, or be inward like a nut. When this is done, the top half is screwed on. This type is usually done for spikes and/or horns.
In any case, the part of the implant which passes under the skin generally is somewhat large and has holes. The skin will grow into them, making it more permanent.
Microdermal implants
Microdermal implants are a form of body modification which gives the aesthetic appearance of a transdermal implant, without the complications of the much more complicated surgery associated with transdermal implants. Microdermals are single point piercings which are a sort of surface piercing.
Microdermal implants can be placed practically anywhere on the surface of the skin on the body, but are different from conventional piercings in that they are composed of two components: an anchor, which is implanted underneath the skin, with a step protruding from (or flush with) the surface of the surrounding skin, and the changeable jewellery, which is screwed into the threaded hole in the step of the anchor.
They should not be implanted in hands, feet, wrists, collarbones, or any area that is not flat or that is near a joint.
Procedure
The procedure is usually performed using a dermal punch or needle. When a dermal piercing is done with a punch, the pouch is made in a different way. When using a needle, the pouch is made by separating the skin. When using a dermal punch, the pouch is made by removing a bit of tissue. A microdermal punch is less painful and therefore commonly used. The process starts by identifying the point of piercing on the sterilized area that will be marked with a surgical marker. The microdermal punch is then used to remove skin tissues. The anchor is then placed under the skin and a piece of jewelry is placed using surgical forceps.
See also
Body modification
Body piercing
Body piercing materials
Subdermal implant
References
Body piercing jewellery
Implants (medicine)
Drug delivery devices
Dosage forms | Transdermal implant | [
"Chemistry"
] | 662 | [
"Pharmacology",
"Drug delivery devices"
] |
630,611 | https://en.wikipedia.org/wiki/Suicide%20gene | In the field of genetics, a suicide gene is a gene that will cause a cell to kill itself through the process of apoptosis (programmed cell death). Activation of a suicide gene can cause death through a variety of pathways, but one important cellular "switch" to induce apoptosis is the p53 protein. Stimulation or introduction (through gene therapy) of suicide genes is a potential way of treating cancer or other proliferative diseases.
Suicide genes form the basis of a strategy for making cancer cells more vulnerable or sensitive to chemotherapy. The approach has been to attach parts of genes expressed in cancer cells to other genes for enzymes not found in mammals that can convert a harmless substance into one that is toxic to the tumor. Most suicide genes mediate this sensitivity by coding for viral or bacterial enzymes that convert an inactive drug into toxic antimetabolites that inhibit the synthesis of nucleic acid. Suicide genes must be introduced into the cells in ways that ensure their uptake and expression by as many cancer cells as possible, while limiting their expression by normal cells. Suicide gene therapy for cancer requires the vector to have the capacity to discriminate between target and non target cells, between the cancer cells and normal cells.
Apoptosis
Cell death occurs mainly by either necrosis or apoptosis. Necrosis occurs when a cell is damaged by an external force, such as poison, a bodily injury, an infection, or being cut off from blood supply. When cells die from necrosis, it is a rather messy affair: the death causes inflammation that can cause further distress or injury within the body. Apoptosis, by contrast, causes degradation of cellular components without eliciting an inflammatory response.
Many cells undergo programmed cell death, or apoptosis, during fetal development. Apoptosis is a form of cell death in which a programmed sequence of events leads to the elimination of cells without releasing harmful substances into the surroundings. Apoptosis plays a crucial role in developing and maintaining the health of the body by eliminating old, unnecessary, and unhealthy cells. The human body replaces perhaps one million cells per second. When a cell is compelled to commit suicide, proteins called caspases go into action. They break down the cellular components needed for survival, and they spur production of enzymes known as DNases, which destroy the DNA in the nucleus of the cell. The cell shrinks and sends out distress signals, which are answered by macrophages. The macrophages clean away the shrunken cells, leaving no trace, so these cells do not damage surrounding cells as necrotic cells do. Apoptosis is also essential to prenatal development. For example, in embryos, fingers and toes are initially connected to adjacent digits by tissue. The cells of this connecting tissue undergo apoptosis to produce separate digits. In brain development, millions of extra neurons are initially created; the cells that do not form synaptic connections undergo apoptosis. Programmed cell death is also necessary to start the process of menstruation. That is not to say that apoptosis is a perfect process. Rather than dying due to injury, cells that go through apoptosis die in response to signals within the body. When cells recognize viruses and gene mutations, they may induce death to prevent the damage from spreading. Scientists are trying to learn how to modulate apoptosis, so that they can control which cells live and which undergo programmed cell death. Anti-cancer drugs and radiation, for example, work by triggering apoptosis in diseased cells. Many diseases and disorders are linked with the life and death of cells—increased apoptosis is a characteristic of AIDS, Alzheimer's, and Parkinson's disease, while decreased apoptosis can signal lupus or cancer. Understanding how to regulate apoptosis could be the first step to treating these conditions.
Too little or too much apoptosis can play a role in many diseases. When apoptosis does not work correctly, cells that should be eliminated may persist and become immortal, for example in cancer and leukemia. When apoptosis works overly well, it kills too many cells and inflicts grave tissue damage. This is the case in strokes and neurodegenerative disorders such as Alzheimer's, Huntington's, and Parkinson's disease. Apoptosis is also known as programmed cell death or cell suicide.
Applications
Cancer suicide gene therapy
The ultimate goal of cancer therapy is the complete elimination of all cancer cells, while leaving all healthy cells unharmed. One of the most promising therapeutic strategies in this regard is cancer suicide gene therapy (CSGT), which is rapidly progressing into new frontiers. Therapeutic success in CSGT is primarily contingent upon precision in delivery of the therapeutic transgenes to the cancer cells only. This is addressed by discovering and targeting unique and/or over-expressed biomarkers displayed on the cancer cells and cancer stem cells. Specificity of cancer therapeutic effects is further enhanced by designing DNA constructs which put the therapeutic genes under the control of cancer-cell-specific promoters. The delivery of the suicide genes to the cancer cells involves viral as well as synthetic vectors, which are guided by cancer-specific antibodies and ligands. The delivery options also include engineered stem cells with tropisms towards cancers. The main mechanisms inducing cancer cell death include transgenic expression of thymidine kinases, cytosine deaminases, intracellular antibodies, telomerases, caspases, and DNases. Precautions are undertaken to eliminate the risks associated with transgenesis. Progress in genomics and proteomics should help in identifying cancer-specific biomarkers and metabolic pathways for developing new strategies towards clinical trials of targeted and personalized gene therapy of cancer. By introducing the gene into a malignant tumor, the tumor would reduce in size and possibly disappear completely, provided all the individual cells have received a copy of the gene.
When the DNA sample in the virus is taken from the patient's own healthy cells, the virus does not need to be able to differentiate between cancer cells and healthy ones. In addition, this approach may also help prevent metastasis once the tumor dies.
As a cancer treatment
One of the challenges of cancer treatment is how to destroy malignant tumors without damaging healthy cells. A new method that shows great promise for accomplishing this employs the use of a suicide gene. A suicide gene is a gene which will cause a cell to kill itself through apoptosis. Suicide gene therapy involves delivery of a gene which codes for a cytotoxic product into tumor cells. This can be achieved by two approaches, indirect gene therapy and direct gene therapy. Indirect gene therapy employs an enzyme-activated prodrug, in which the enzyme converts the prodrug to a toxic substance, and the gene coding for this enzyme is delivered to the tumor cells. For example, a commonly studied strategy is based on transfection of herpes simplex virus thymidine kinase (HSV-TK) along with administration of ganciclovir (GCV); HSV-TK assists in converting GCV to a toxic compound that inhibits DNA synthesis and causes cell death. Direct gene therapy, by contrast, employs a toxin gene, or a gene which can correct mutated proapoptotic genes and thereby induce cell death via apoptosis. For instance, the most researched immunotoxin for cancer therapy is the diphtheria toxin, which blocks protein synthesis by inactivating elongation factor 2 (EF-2) and thereby halting protein translation. Moreover, p53 is frequently abnormal in human tumors, and studies show that restoring the function of p53 can cause apoptosis of cancer cells. Suicide gene therapy is not necessarily expected to eliminate the need for chemotherapy and radiation treatment for all cancerous tumors. The damage inflicted upon the tumor cells, however, makes them more susceptible to chemotherapy or radiation. This approach has already proven effective against prostate and bladder cancers. The application of suicide gene therapy is being expanded to several other forms of cancer as well. Cancer patients often have depressed immune systems, so they can suffer some side effects from the use of a virus as a delivery agent.
Improved vectors
Suicide gene delivery can be broadly classified into three groups: viral vectors, synthetic vectors, and cell-based vectors. The most efficient vehicles for gene delivery are viral vectors. Widely used viruses for gene therapy include retroviruses, adenoviruses (Ads), lentiviruses, and adeno-associated viruses (AAVs). Non-viral vectors, such as synthetic vectors, are used to overcome certain disadvantages of viral vectors, such as immunogenicity and insertional mutagenesis. Synthetic vectors refer to the use of nanoparticles, like gold nanoparticles, to deliver genes to target cells. Lastly, cell-based vectors employ stem cells as carriers of suicide genes. In the last few years, cell-mediated gene therapy for cancer using mesenchymal stem cells (MSCs) has been patented.
Bystander effect
The bystander effect (BE) is a phenomenon by which it is possible to kill untransfected tumor cells located adjacent to transduced cells in suicide gene therapy. As one hundred percent transduction of all tumor cells is very difficult to achieve, the BE is a critical feature of suicide gene therapy.
Limitations
The drug is supposed to show high specificity towards cancer in order to be effective, but studies have shown this to be rarely achieved. Moreover, expression of the suicide gene has been placed under the control of tumor-specific promoters like human telomerase (hTERT), osteocalcin, and carcinoembryonic antigen; however, only the hTERT promoter was found to enter clinical trials. This is mainly because of the low transcriptional strength of these tumor-specific promoters for suicide gene expression. Additionally, poor accessibility to target cells is an important limitation of suicide gene therapy, as is only partial vector specificity for the targeted cells. Finally, there is a lack of specific animal models to predict the clinical outcome and other effects of suicide gene therapy.
Biotechnology
Suicide genes are often utilized in biotechnology to assist in molecular cloning. Vectors incorporate a suicide gene lethal to the host organism (such as E. coli). The cloning project focuses on replacing the suicide gene with the desired fragment. Selection of vectors carrying the desired fragment is thus improved, since vectors retaining the suicide gene result in cell death.
References
Genes
Programmed cell death
Cellular senescence | Suicide gene | [
"Chemistry",
"Biology"
] | 2,141 | [
"Signal transduction",
"Senescence",
"Cellular senescence",
"Cellular processes",
"Programmed cell death"
] |
5,702,698 | https://en.wikipedia.org/wiki/Protein%E2%80%93ligand%20docking | Protein–ligand docking is a molecular modelling technique. The goal of protein–ligand docking is to predict the position and orientation of a ligand (a small molecule) when it is bound to a protein receptor or enzyme. Pharmaceutical research employs docking techniques for a variety of purposes, most notably in the virtual screening of large databases of available chemicals in order to select likely drug candidates. There has been rapid development in the computational ability to determine protein structure with programs such as AlphaFold, and the demand for the corresponding protein–ligand docking predictions is driving the implementation of software that can find accurate models. Once protein structures can be predicted accurately, along with how ligands of various structures will bind to them, drug development can progress at a much faster rate.
History
Computer-aided drug design (CADD) was introduced in the 1980s in order to screen for novel drugs. The underlying premise is that, by parsing an extremely large set of chemical compounds that might be viable for a given pharmaceutical, researchers can minimize the number of novel compounds that must be tested experimentally. The ability to accurately predict target binding sites is a newer development, which expands on simply parsing a data set of chemical compounds; due to increasing computational capability, it is now possible to inspect the actual geometries of the protein–ligand binding site in silico. Hardware advancements in computation have made these structure-oriented methods of drug discovery the next frontier of 21st-century biopharma. To train the new algorithms to capture accurate protein–ligand binding geometries, experimentally determined structures gathered with techniques such as X-ray crystallography or NMR spectroscopy can be used.
Available software
Several protein–ligand docking software applications that calculate the site, geometry and energy of small molecules or peptides interacting with proteins are available, such as AutoDock and AutoDock Vina, rDock, FlexAID, Molecular Operating Environment, and Glide. Peptides are a highly flexible type of ligand that has proven difficult to predict in protein docking programs. DockThor implements up to 40 rotatable bonds to help model these complex physicochemical interactions at the target site. Root-mean-square deviation (RMSD) is the standard metric for evaluating software performance on reproducing the binding mode of a protein–ligand structure: it is the root-mean-square deviation between the software-predicted docking pose of the ligand and the experimental binding mode. The RMSD is computed for all of the computer-generated poses of the possible bindings between the protein and ligand. The program does not always rank the actual physical pose best among its candidates, so to evaluate the strength of a docking algorithm, the RMSD ranking among computer-generated candidates must be examined to determine whether a pose matching the experimental one was generated but not selected.
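The following is a minimal sketch of the RMSD evaluation described above, assuming the predicted and experimental poses are supplied as matched N x 3 coordinate arrays (same atoms, same order, same reference frame); real workflows must also handle atom mapping and molecular symmetry, and the 2 Å success cut-off used in the example is only a common convention, not part of any specific program.

```python
import numpy as np

def pose_rmsd(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """Root-mean-square deviation between two ligand poses given in the same frame."""
    diff = predicted - experimental
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def rank_poses(poses, experimental):
    """Rank computer-generated poses by RMSD to the experimental binding mode."""
    scored = [(i, pose_rmsd(p, experimental)) for i, p in enumerate(poses)]
    return sorted(scored, key=lambda item: item[1])

# Example with made-up coordinates: three 4-atom candidate poses of increasing error.
rng = np.random.default_rng(0)
crystal = rng.normal(size=(4, 3))
candidates = [crystal + rng.normal(scale=s, size=(4, 3)) for s in (0.3, 1.0, 3.0)]
for index, rmsd in rank_poses(candidates, crystal):
    print(index, round(rmsd, 2), "success" if rmsd <= 2.0 else "failure")
```

Checking whether any generated pose, not only the top-scored one, falls under the cut-off is what distinguishes a sampling failure from a scoring (ranking) failure.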
Protein flexibility
Computational capacity has increased dramatically over the last two decades, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. However, in most cases, neglecting receptor flexibility leads to poor docking results in terms of binding pose prediction in real-world settings. Using coarse-grained protein models to overcome this problem seems to be a promising approach. Coarse-grained models are often implemented in the case of protein–peptide docking, as such cases frequently involve large-scale conformational transitions of the protein receptor.
AutoDock is one of the computational tools frequently used to model the interactions between proteins and ligands during the drug discovery process. Although the algorithms classically used to search for effective poses often assume the receptor protein to be rigid while the ligand is moderately flexible, newer approaches implement models with limited receptor flexibility as well. AutoDockFR is a newer model that simulates this partial flexibility within the receptor protein by letting side-chains of the protein take various poses within their conformational space. This allows the algorithm to explore a vastly larger space of energetically relevant poses for each ligand tested.
In order to simplify the complexity of the search space for prediction algorithms, various hypotheses have been tested. One such hypothesis is that side-chain conformational changes involving more atoms and rotations of greater magnitude are less likely to occur than smaller rotations because of the energy barriers that arise. The steric hindrance and rotational energy cost introduced by these larger changes make them less likely to appear in the actual protein–ligand pose. Findings such as these can help scientists develop heuristics that lower the complexity of the search space and improve the algorithms.
Implementations
The original method of testing molecular models of various binding sites was introduced in the 1980s: the receptor was roughly approximated by spheres occupying its surface clefts, and the ligand was approximated by further spheres occupying the relevant volume. A search was then executed to maximize the steric overlap between the ligand spheres and the receptor spheres.
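As an illustration only, that sphere-matching idea can be caricatured as follows; the coordinates, radii and the simple interpenetration score are invented for this sketch and do not reproduce any historical program.

```python
import math

def pair_overlap(c1, r1, c2, r2) -> float:
    """Simple overlap score for two spheres: how deeply they interpenetrate (0 if apart)."""
    d = math.dist(c1, c2)
    return max(0.0, (r1 + r2) - d)

def placement_score(ligand_spheres, receptor_spheres) -> float:
    """Sum of pairwise overlaps; a docking search would try to maximise this score."""
    return sum(
        pair_overlap(lc, lr, rc, rr)
        for lc, lr in ligand_spheres
        for rc, rr in receptor_spheres
    )

receptor = [((0.0, 0.0, 0.0), 1.5), ((2.0, 0.0, 0.0), 1.5)]      # spheres filling a surface cleft
ligand_pose_a = [((0.5, 0.0, 0.0), 1.0), ((1.8, 0.2, 0.0), 1.0)]  # pose sitting in the cleft
ligand_pose_b = [((6.0, 0.0, 0.0), 1.0), ((7.5, 0.0, 0.0), 1.0)]  # pose missing the site

print(placement_score(ligand_pose_a, receptor))   # larger score
print(placement_score(ligand_pose_b, receptor))   # 0.0
```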
However, newer scoring functions for evaluating molecular dynamics and protein–ligand docking potential implement a supervised molecular dynamics approach. Essentially, the simulation is run as a sequence of small time windows during which the distance between the centers of mass of the ligand and the protein is computed. The distance values are sampled at regular intervals and then fitted with a linear regression. When the slope is negative, the ligand is getting nearer to the binding site, and vice versa. When the ligand is departing from the binding site, that branch of the trajectory is pruned at that moment so as to avoid unnecessary computation. The advantage of this method is speed, achieved without introducing any energetic bias that could skew the model away from accurate agreement with experiment.
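A minimal sketch of the supervision step just described, assuming the ligand–protein centre-of-mass distances from one short time window are already available; the window length, sampling interval and example values are illustrative choices, not values from any particular implementation.

```python
import numpy as np

def window_slope(distances: np.ndarray, dt: float = 1.0) -> float:
    """Slope of a least-squares line through the distance-vs-time samples."""
    times = np.arange(len(distances)) * dt
    slope, _intercept = np.polyfit(times, distances, 1)
    return float(slope)

def keep_window(distances: np.ndarray) -> bool:
    """Continue from this window only if the ligand is, on average, getting nearer."""
    return window_slope(distances) < 0.0

approaching = np.array([25.0, 24.1, 23.5, 22.8, 22.0])   # distances in Angstrom
departing = np.array([22.0, 22.6, 23.4, 24.5, 25.3])

print(keep_window(approaching))  # True  -> extend the simulation
print(keep_window(departing))    # False -> prune and restart from the last good window
```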
See also
Docking (molecular)
Protein–protein docking
Virtual screening
List of protein-ligand docking software
References
External links
BioLiP, a comprehensive ligand-protein interaction database
DockThor
Molecular modelling
Computational chemistry
Cheminformatics | Protein–ligand docking | [
"Chemistry"
] | 1,248 | [
"Molecular physics",
"Theoretical chemistry",
"Molecular modelling",
"Computational chemistry",
"nan",
"Cheminformatics"
] |
5,703,338 | https://en.wikipedia.org/wiki/Delta%20robot | A delta robot is a type of parallel robot that consists of three arms connected to universal joints at the base. The key design feature is the use of parallelograms in the arms, which maintains the orientation of the end effector. In contrast, a Stewart platform can change the orientation of its end effector.
Delta robots are popular for picking and packaging in factories because they can be quite fast, some executing up to 300 picks per minute.
History
The delta robot (a parallel arm robot) was invented in the early 1980s by a research team led by professor Reymond Clavel at the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). After a visit to a chocolate maker, a team member wanted to develop a robot to place pralines in their packages. The purpose of this new type of robot was to manipulate light and small objects at a very high speed, an industrial need at that time.
In 1987, the Swiss company Demaurex purchased a license for the delta robot and started the production of delta robots for the packaging industry. In 1991, Reymond Clavel presented his doctoral thesis 'Conception d'un robot parallèle rapide à 4 degrés de liberté', and received the golden robot award in 1999 for his work and development of the delta robot. Also in 1999, ABB Flexible Automation started selling its delta robot, the FlexPicker. By the end of 1999, delta robots were also sold by Sigpack Systems.
In 2017, researchers from Harvard's Microrobotics Lab miniaturized it with piezoelectric actuators to 0.43 grams for 15 mm x 15 mm x 20 mm, capable of moving a 1.3 g payload around a 7 cubic millimeter workspace with a 5 micrometers precision, reaching 0.45 m/s speeds with 215 m/s² accelerations and repeating patterns at 75 Hz.
Design
The delta robot is a parallel robot, i.e. it consists of multiple kinematic chains connecting the base with the end-effector. The robot can also be seen as a spatial generalisation of a four-bar linkage.
The key concept of the delta robot is the use of parallelograms which restrict the movement of the end platform to pure translation, i.e. only movement in the X, Y or Z direction with no rotation.
The robot's base is mounted above the workspace and all the actuators are located on it. From the base, three middle jointed arms extend. The ends of these arms are connected to a small triangular platform. Actuation of the input links will move the triangular platform along the X, Y or Z direction. Actuation can be done with linear or rotational actuators, with or without reductions (direct drive).
Since the actuators are all located in the base, the arms can be made of a light composite material. As a result of this, the moving parts of the delta robot have a small inertia. This allows for very high speed and high accelerations. Having all the arms connected together to the end-effector increases the robot stiffness, but reduces its working volume.
The version developed by Reymond Clavel has four degrees of freedom: three translations and one rotation. In this case a fourth leg extends from the base to the middle of the triangular platform giving to the end effector a fourth, rotational degree of freedom around the vertical axis.
Currently other versions of the delta robot have been developed:
Delta with 6 degrees of freedom: developed by the Fanuc company, in this robot a serial kinematic chain with 3 rotational degrees of freedom is placed on the end effector
Delta with 4 degrees of freedom: developed by the Adept company, this robot has four parallelograms directly connected to the end platform instead of a fourth leg reaching the middle of the end effector
Pocket Delta: developed by the Swiss company Asyril SA, a 3-axis version of the delta robot adapted for flexible part feeding systems and other high-speed, high-precision applications.
Delta direct drive: a 3 degrees of freedom delta robot having the motor directly connected to the arms. Accelerations can be very high, from 30 up to 100 g.
Delta cube: developed by the EPFL university laboratory LSRO, a delta robot built in a monolithic design, having flexure-hinges joints. This robot is adapted for ultra-high-precision applications.
Several "linear delta" arrangements have been developed where the motors drive linear actuators rather than rotating an arm. Such linear delta arrangements can have much larger working volumes than rotational delta arrangements.
The majority of delta robots use rotary actuators. Vertical linear actuators have recently been used (using a linear delta design) to produce a novel design of 3D printer. These offer advantages over conventional leadscrew-based 3D printers of quicker access to a larger build volume for a comparable investment in hardware.
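For the "linear delta" arrangement, the inverse kinematics reduce to simple geometry: each carriage height follows from the rod length and the horizontal offset between its tower and the effector. The sketch below assumes an idealised machine (point joints, towers at 90°, 210° and 330°, made-up rod length and tower radius) purely for illustration.

```python
import math

ROD_LENGTH = 250.0     # mm, assumed arm (rod) length
TOWER_RADIUS = 150.0   # mm, assumed horizontal distance of towers from the centre
TOWERS = [
    (TOWER_RADIUS * math.cos(a), TOWER_RADIUS * math.sin(a))
    for a in (math.radians(90), math.radians(210), math.radians(330))
]

def carriage_heights(x: float, y: float, z: float) -> list[float]:
    """Carriage height on each tower needed to place the effector at (x, y, z)."""
    heights = []
    for tx, ty in TOWERS:
        horizontal_sq = (x - tx) ** 2 + (y - ty) ** 2
        if horizontal_sq > ROD_LENGTH ** 2:
            raise ValueError("target outside the reachable workspace")
        # The rod is the hypotenuse of a right triangle whose legs are the
        # horizontal offset and the carriage-to-effector height difference.
        heights.append(z + math.sqrt(ROD_LENGTH ** 2 - horizontal_sq))
    return heights

print(carriage_heights(0.0, 0.0, 10.0))     # all three carriages at equal height
print(carriage_heights(40.0, -25.0, 10.0))  # off-centre target: unequal heights
```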
Applications
Industries that take advantage of the high speed of delta robots are the food, pharmaceutical and electronics industry. For its stiffness it is also used for surgery, in particular, the Surgiscope is a delta robot used as a microscopic holder system.
The structure of a delta robot can also be used to create haptic controllers. More recently, the technology has been adapted to 3D printers.
References
Robot kinematics
Industrial robots
Parallel robots | Delta robot | [
"Engineering"
] | 1,105 | [
"Industrial robots",
"Robotics engineering",
"Robot kinematics"
] |
5,703,563 | https://en.wikipedia.org/wiki/Booster%20dose | A booster dose is an extra administration of a vaccine after an earlier (primer) dose. After initial immunization, a booster provides a re-exposure to the immunizing antigen. It is intended to increase immunity against that antigen back to protective levels after memory against that antigen has declined through time. For example, tetanus shot boosters are often recommended every 10 years, by which point memory cells specific against tetanus lose their function or undergo apoptosis.
The need for a booster dose following a primary vaccination is evaluated in several ways. One way is to measure the level of antibodies specific against a disease a few years after the primary dose is given. Anamnestic response, the rapid production of antibodies after a stimulus of an antigen, is a typical way to measure the need for a booster dose of a certain vaccine. If the anamnestic response is high after receiving a primary vaccine many years ago, there is most likely little to no need for a booster dose. People can also measure the active B and T cell activity against that antigen after a certain amount of time that the primary vaccine was administered or determine the prevalence of the disease in vaccinated populations.
If a patient receives a booster dose but already has a high level of antibody, then a reaction called an Arthus reaction could develop, a localized form of Type III hypersensitivity induced by high levels of IgG antibodies causing inflammation. The inflammation is often self-resolved over the course of a few days but could be avoided altogether by increasing the length of time between the primary vaccine and the booster dose.
It is not yet fully clear why some vaccines such as hepatitis A and B are effective for life, and some such as tetanus need boosters. The prevailing theory is that if the immune system responds to a primary vaccine rapidly, the body does not have time to sufficiently develop immunological memory against the disease, and memory cells will not persist in high numbers for the lifetime of the human. After a primary response of the immune system against a vaccination, memory T helper cells and B cells persist at a fairly constant level in germinal centers, undergoing cell division at a slow to nonexistent rate. While these cells are long-lived, they do not typically undergo mitosis, and eventually, the rate of loss of these cells will be greater than the rate of gain. In these cases, a booster dose is required to "boost" the memory B and T cell count back up again.
Polio booster doses
In the case of the polio vaccine, the memory B and T cells produced in response to the vaccine persist only six months after consumption of the oral polio vaccine (OPV). Booster doses of the OPV were found ineffective, as they, too, resulted in decreased immune response every six months after consumption. However, when the inactive polio vaccine (IPV) was used as a booster dose, it was found to increase the test subjects' antibody count by 39–75%. Often in developing countries, OPV is used over IPV, because IPV is expensive and hard to transport. Also, IPVs in tropical countries are hard to store due to the climate. However, in places where polio is still present, following up an OPV primary dose with an IPV booster may help eradicate the disease.
In the United States, only the IPV is used. In rare cases (about 1 in 2.7 million), the OPV has reverted to a strengthened form of the illness, and caused paralysis in the recipients of the vaccine. For this reason, the US only administers IPV, which is given in four increments (3 within their first year and a half after birth, then one booster dose between the ages 4–6).
Hepatitis B booster doses
The need for a booster dose for hepatitis B has long been debated. Studies in the early 2000s that measured memory cell count of vaccinated individuals showed that fully vaccinated adults (those that received all three rounds of vaccination at the suggested time sequence during infancy) do not require a booster dose later in life. Both the United States Centers for Disease Control (CDC) and the Canadian National Advisory Committee on Immunization (NACI) supported these recommendations by publicly advising against the need for a hepatitis B booster dose. However, immuno-repressed individuals are advised to seek further screening to evaluate their immune response to hepatitis B, and potentially receive a booster dose if their B and T cell count against hepatitis B decrease below a certain level.
Tetanus booster dose
The tetanus disease requires a booster dose every 10 years, or in some circumstances immediately following infection of tetanus. Td is the name of the booster for adults, and differs from the primary dose in that it does not include immunization against pertussis (whooping cough). While the US recommends a booster for tetanus every 10 years, other countries, such as the UK, suggest just two booster shots within the first 20 years of life, but no booster after a third decade. Neonatal tetanus is a concern during pregnancy for some women, and mothers are recommended a booster against tetanus during their pregnancy in order to protect their child against the disease.
Whooping cough booster dose
Whooping cough, also called pertussis, is a contagious disease that affects the respiratory tract. The infection is caused by a bacterium that sticks to the cilia of the upper respiratory tract and can be very contagious. Pertussis can be especially dangerous for babies, whose immune systems are not yet fully developed, and can develop into pneumonia or result in the baby having trouble breathing. DTaP is the primary vaccine given against pertussis, and children typically receive five doses before the age of seven. Tdap is the booster for pertussis, and is advised in the US to be administered every ten years, and during every pregnancy for mothers. Tdap can also be used as a booster against tetanus.
Upon its invention in the 1950s, the pertussis vaccine was whole-cell (contained the entire inactivated bacterium), and could cause fever and local reactions in people who received the vaccine. In the 1990s, people in the US started using acellular vaccines (contained small portions of the bacterium), that had lower side effects but were also less effective at triggering an immunological memory response, due to the antigen presented to the immune system being less complete. This less effective, but safer vaccine, led to the development of the booster Tdap.
COVID-19 booster dose
Protection against severe disease remained high at 6 months after vaccination, despite lower efficacy in protection from COVID-19 infection. An international panel of scientists affiliated with the FDA, WHO, and several universities and healthcare institutions concluded that there was insufficient data to determine the long-term protective benefits of a booster dose (only short-term protective effects were observed), and recommended instead that existing vaccine stock would save most lives if made available to people who had not received any vaccine.
Israel first rolled out booster doses of the Pfizer–BioNTech COVID-19 vaccine for at-risk populations in July 2021. In August this was expanded for the rest of the Israeli population. Effectiveness against severe disease in Israel was lower among people vaccinated either in January or April than in those vaccinated in February or March. During the first 3 weeks of August 2021, just after booster doses were approved and began to be deployed widely, a short-term protective effect of a third dose (relative to two doses) was suggested.
In the United States, the CDC rolled out booster shots to immunocompromised individuals during the summer of 2021 and originally planned to allow adults to receive a third dose of the COVID-19 vaccine starting in September 2021, with individuals becoming eligible starting 8 months after their second dose (for those who received a two-dose vaccine). After further data about long-term vaccine efficacy and the delta variant came to light, the CDC ultimately made recipients eligible for boosters 6 months after the second shot, in late October. Subsequently, vaccinations in the country surged.
In September 2021, the UK's Joint Committee on Vaccination and Immunisation recommended a booster shot for the over-50s and at-risk groups, preferably the Pfizer–BioNTech vaccine, meaning about 30 million adults should receive a third dose. The UK's booster rollout was extended to over-40s in November 2021.
Russia's Sputnik V COVID-19 vaccine, using similar technology to AstraZeneca's COVID-19 vaccine, in November 2021 introduced a COVID-19 booster called Sputnik Light, which according to a study by the Gamaleya Research Institute of Epidemiology and Microbiology has an effectiveness of 70% against the delta variant. It can be combined with all other vaccines and may be more effective with mRNA vaccines than mRNA boosters.
Booster shots can also be used after infections. In this regard, the UK's National Health Service recommends people to wait 28 days after testing positive for COVID-19 before getting their booster shots. Evidence shows that getting a vaccine after recovery from a COVID-19 infection provides added protection to the immune system.
References
Vaccination | Booster dose | [
"Biology"
] | 1,917 | [
"Vaccination"
] |
5,703,638 | https://en.wikipedia.org/wiki/BBGKY%20hierarchy | In statistical physics, the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy (sometimes called the Bogoliubov hierarchy) is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an s-particle distribution function (probability density function) in the BBGKY hierarchy includes the (s + 1)-particle distribution function, thus forming a coupled chain of equations. This formal theoretic result is named after Nikolay Bogolyubov, Max Born, Herbert S. Green, John Gamble Kirkwood, and Jacques Yvon.
Formulation
The evolution of an N-particle system in the absence of quantum fluctuations is given by the Liouville equation for the probability density function $f_N = f_N(\mathbf{q}_1,\dots,\mathbf{q}_N,\mathbf{p}_1,\dots,\mathbf{p}_N,t)$ in 6N-dimensional phase space (3 space and 3 momentum coordinates per particle)

$$\frac{\partial f_N}{\partial t} + \sum_{i=1}^{N}\frac{\mathbf{p}_i}{m_i}\cdot\frac{\partial f_N}{\partial \mathbf{q}_i} + \sum_{i=1}^{N}\mathbf{F}_i\cdot\frac{\partial f_N}{\partial \mathbf{p}_i} = 0,$$

where $\mathbf{q}_i$ and $\mathbf{p}_i$ are the position and momentum of the $i$-th particle with mass $m_i$, and the net force acting on the $i$-th particle is

$$\mathbf{F}_i = -\sum_{j\neq i}\frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i} - \frac{\partial \Phi_i^{\text{ext}}}{\partial \mathbf{q}_i},$$

where $\Phi_{ij}$ is the pair potential for interaction between particles, and $\Phi_i^{\text{ext}}$ is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations in which the first equation connects the evolution of the one-particle probability density function to the two-particle probability density function, the second equation connects the two-particle probability density function to the three-particle probability density function, and generally the s-th equation connects the s-particle probability density function

$$f_s(\mathbf{q}_1,\dots,\mathbf{q}_s,\mathbf{p}_1,\dots,\mathbf{p}_s,t) = \int f_N(\mathbf{q}_1,\dots,\mathbf{q}_N,\mathbf{p}_1,\dots,\mathbf{p}_N,t)\,\prod_{i=s+1}^{N}\mathrm{d}\mathbf{q}_i\,\mathrm{d}\mathbf{p}_i$$

with the (s + 1)-particle probability density function:

$$\frac{\partial f_s}{\partial t} + \sum_{i=1}^{s}\frac{\mathbf{p}_i}{m_i}\cdot\frac{\partial f_s}{\partial \mathbf{q}_i} + \sum_{i=1}^{s}\left(-\frac{\partial \Phi_i^{\text{ext}}}{\partial \mathbf{q}_i} - \sum_{\substack{j=1\\ j\neq i}}^{s}\frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i}\right)\cdot\frac{\partial f_s}{\partial \mathbf{p}_i} = (N-s)\sum_{i=1}^{s}\int\frac{\partial \Phi_{i,s+1}}{\partial \mathbf{q}_i}\cdot\frac{\partial f_{s+1}}{\partial \mathbf{p}_i}\,\mathrm{d}\mathbf{q}_{s+1}\,\mathrm{d}\mathbf{p}_{s+1}.$$
The equation above for the s-particle distribution function is obtained by integrating the Liouville equation over the variables $\mathbf{q}_{s+1},\dots,\mathbf{q}_N,\ \mathbf{p}_{s+1},\dots,\mathbf{p}_N$. The problem with the above equation is that it is not closed: to solve for $f_s$, one has to know $f_{s+1}$, which in turn demands solving for $f_{s+2}$, and so on all the way back to the full Liouville equation. However, one can solve for $f_s$ if $f_{s+1}$ can be modeled. One such case is the Boltzmann equation for $f_1(\mathbf{q}_1,\mathbf{p}_1,t)$, where $f_2(\mathbf{q}_1,\mathbf{p}_1,\mathbf{q}_2,\mathbf{p}_2,t)$ is modeled on the basis of the molecular chaos hypothesis ($f_2 \approx f_1 f_1$). In fact, the interaction term in the Boltzmann equation is the collision integral. This limiting process of obtaining the Boltzmann equation from the Liouville equation is known as the Boltzmann–Grad limit.
Physical interpretation and applications
Schematically, the Liouville equation gives us the time evolution for the whole N-particle system in the form $\mathrm{D}f_N/\mathrm{D}t = 0$ (a vanishing convective derivative), which expresses an incompressible flow of the probability density in phase space. We then define the reduced distribution functions incrementally by integrating out another particle's degrees of freedom, $f_s \propto \int f_{s+1}\,\mathrm{d}\mathbf{q}_{s+1}\,\mathrm{d}\mathbf{p}_{s+1}$. An equation in the BBGKY hierarchy tells us that the time evolution for such an $f_s$ is consequently given by a Liouville-like equation, but with a correction term that represents the force-influence of the suppressed particles
The problem of solving the BBGKY hierarchy of equations is as hard as solving the original Liouville equation, but approximations for the BBGKY hierarchy (which allow truncation of the chain into a finite system of equations) can readily be made. The merit of these equations is that the higher distribution functions $f_{s+2}, f_{s+3}, \dots$ affect the time evolution of $f_s$ only implicitly via $f_{s+1}$. Truncation of the BBGKY chain is a common starting point for many applications of kinetic theory that can be used for derivation of classical or quantum kinetic equations. In particular, truncation at the first equation or the first two equations can be used to derive classical and quantum Boltzmann equations and the first order corrections to the Boltzmann equations. Other approximations, such as the assumption that the density probability function depends only on the relative distance between the particles or the assumption of the hydrodynamic regime, can also render the BBGKY chain accessible to solution.
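As a concrete sketch of the simplest truncation (an illustration using the notation from the Formulation section above, not a full derivation), the first equation of the hierarchy can be closed with the molecular chaos ansatz, which replaces the unknown two-particle function by a product of one-particle functions,

$$f_2(\mathbf{q}_1,\mathbf{p}_1,\mathbf{q}_2,\mathbf{p}_2,t) \approx f_1(\mathbf{q}_1,\mathbf{p}_1,t)\,f_1(\mathbf{q}_2,\mathbf{p}_2,t),$$

so that the s = 1 equation becomes a closed equation for $f_1$ alone,

$$\frac{\partial f_1}{\partial t} + \frac{\mathbf{p}_1}{m_1}\cdot\frac{\partial f_1}{\partial \mathbf{q}_1} - \frac{\partial \Phi_1^{\text{ext}}}{\partial \mathbf{q}_1}\cdot\frac{\partial f_1}{\partial \mathbf{p}_1} = (N-1)\int\frac{\partial \Phi_{12}}{\partial \mathbf{q}_1}\cdot\frac{\partial}{\partial \mathbf{p}_1}\bigl[f_1(\mathbf{q}_1,\mathbf{p}_1,t)\,f_1(\mathbf{q}_2,\mathbf{p}_2,t)\bigr]\,\mathrm{d}\mathbf{q}_2\,\mathrm{d}\mathbf{p}_2,$$

which, after the Boltzmann–Grad limit and the standard manipulation of the right-hand side into a collision integral, yields the classical Boltzmann equation.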
Bibliography
s-particle distribution functions were introduced in classical statistical mechanics by J. Yvon in 1935. The BBGKY hierarchy of equations for s-particle distribution functions was written out and applied to the derivation of kinetic equations by Bogoliubov in the article received in July 1945 and published in 1946 in Russian and in English. The kinetic transport theory was considered by Kirkwood in the article received in October 1945 and published in March 1946, and in the subsequent articles. The first article by Born and Green considered a general kinetic theory of liquids and was received in February 1946 and published on 31 December 1946.
See also
Fokker–Planck equation
Vlasov equation
Cluster-expansion approach
References
Statistical mechanics
Non-equilibrium thermodynamics
Max Born | BBGKY hierarchy | [
"Physics",
"Mathematics"
] | 888 | [
"Non-equilibrium thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
5,705,108 | https://en.wikipedia.org/wiki/Borohydride | Borohydride refers to the anion [BH4]−, which is also called tetrahydridoborate, and its salts. Borohydride or hydroborate is also the term used for compounds containing [BH4−nXn]−, where n is an integer from 0 to 3, for example cyanoborohydride or cyanotrihydroborate [BH3(CN)]− and triethylborohydride or triethylhydroborate [BH(C2H5)3]−. Borohydrides find wide use as reducing agents in organic synthesis. The most important borohydrides are lithium borohydride and sodium borohydride, but other salts are well known (see Table). Tetrahydroborates are also of academic and industrial interest in inorganic chemistry.
History
Alkali metal borohydrides were first described in 1940 by Hermann Irving Schlesinger and Herbert C. Brown. They synthesized lithium borohydride from diborane:

2 MH + B2H6 → 2 MBH4, where M = Li, Na, K, Rb, Cs, etc.
Current methods involve reduction of trimethyl borate with sodium hydride.
Structure
In the borohydride anion and most of its modifications, boron has a tetrahedral structure. The reactivity of the B−H bonds depends on the other ligands. Electron-releasing ethyl groups as in triethylborohydride render the B−H center highly nucleophilic. In contrast, cyanoborohydride is a weaker reductant owing to the electron-withdrawing cyano substituent. The countercation also influences the reducing power of the reagent.
Uses
Sodium borohydride is the borohydride that is produced on the largest scale industrially, estimated at 5000 tons/year in 2002. The main use is for the reduction of sulfur dioxide to give sodium dithionite:

NaBH4 + 8 NaOH + 8 SO2 → 4 Na2S2O4 + NaBO2 + 6 H2O
Dithionite is used to bleach wood pulp. Sodium borohydride is also used to reduce aldehydes and ketones in the production of pharmaceuticals including chloramphenicol, thiophenicol, vitamin A, atropine, and scopolamine, as well as many flavorings and aromas.
Potential applications
Because of their high hydrogen content, borohydride complexes and salts have been of interest in the context of hydrogen storage. Reminiscent of related work on ammonia borane, challenges are associated with slow kinetics and low yields of hydrogen as well as problems with regeneration of the parent borohydrides.
Coordination complexes
In its coordination complexes, the borohydride ion is bound to the metal by means of one to three bridging hydrogen atoms. In most such compounds, the ligand is bidentate. Some homoleptic borohydride complexes are volatile. One example is uranium borohydride.
Metal borohydride complexes can often be prepared by a simple salt elimination reaction:

MCln + n NaBH4 → M(BH4)n + n NaCl
Beryllium borohydride is dimeric.
Decomposition
Some metal tetrahydroborates transform on heating to give metal borides. When the borohydride complex is volatile, this decomposition pathway is the basis of chemical vapor deposition (CVD), a way of depositing thin films of metal borides. For example, zirconium diboride and hafnium diboride can be prepared through CVD of zirconium(IV) tetrahydroborate and hafnium(IV) tetrahydroborate:

M(BH4)4 → MB2 + B2H6 + 5 H2   (M = Zr, Hf)
Metal diborides find uses as coatings because of their hardness, high melting point, strength, resistance to wear and corrosion, and good electrical conductivity.
References
External links
Sodium Tetrahydroborate
Anions | Borohydride | [
"Physics",
"Chemistry"
] | 778 | [
"Ions",
"Matter",
"Anions"
] |
5,706,028 | https://en.wikipedia.org/wiki/Timing%20margin | Timing margin is an electronics term that defines the difference between the actual change in a signal and the latest time at which the signal can change in order for an electronic circuit to function correctly. It is used in the design of digital electronics.
Illustration
In this image, the lower signal is the clock and the upper signal is the data. Data is recognized by the circuit at the positive edge of the clock. There are two time intervals illustrated in this image. One is the setup time, and the other is the timing margin. The setup time is illustrated in red in this image; the timing margin is illustrated in green.
The edges of the signals can shift around in a real-world electronic system for various reasons. If the clock and the data signal are shifted relative to each other, this may increase or reduce the timing margin; as long as the data signal changes before the setup time is entered, the data will be interpreted correctly. If it is known from experience that the signals can shift relative to each other by as much as 2 microseconds, for instance, designing the system with at least 2 microseconds of timing margin will prevent incorrect interpretation of the data signal by the receiver.
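As a small numeric illustration of the arithmetic described above (the values are invented, not taken from any specific device), the timing margin is the clock-edge time minus the setup time minus the time at which the data actually changes:

```python
def timing_margin_us(clock_edge: float, setup_time: float, data_change: float) -> float:
    """Timing margin in microseconds; all times measured from a common reference."""
    latest_allowed_change = clock_edge - setup_time   # last instant the data may still change
    return latest_allowed_change - data_change

# Illustrative numbers: clock edge at t = 10 us, 1 us setup time, data changes at t = 6 us.
margin = timing_margin_us(clock_edge=10.0, setup_time=1.0, data_change=6.0)
print(margin)           # 3.0 us of margin
print(margin >= 2.0)    # True: tolerates the 2 us of relative edge shift mentioned above
```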
If the physical design of the circuit is changed, for example by lengthening the wire over which the data signal is transmitted, the edge of the data signal will move closer to the positive edge of the clock signal, reducing the timing margin. If the system has been designed with enough timing margin, the correct data will still be received.
See also
Static timing analysis
References
Electrical engineering | Timing margin | [
"Engineering"
] | 316 | [
"Electrical engineering"
] |
5,706,520 | https://en.wikipedia.org/wiki/Trinucleotide%20repeat%20expansion | A trinucleotide repeat expansion, also known as a triplet repeat expansion, is the DNA mutation responsible for causing any type of disorder categorized as a trinucleotide repeat disorder. These are labelled in dynamical genetics as dynamic mutations. Triplet expansion is caused by slippage during DNA replication, also known as "copy choice" DNA replication. Due to the repetitive nature of the DNA sequence in these regions, 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from the sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand, a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally, the larger the expansion the more likely they are to cause disease or increase the severity of disease. Other proposed mechanisms for expansion and reduction involve the interaction of RNA and DNA molecules.
In addition to occurring during DNA replication, trinucleotide repeat expansion can also occur during DNA repair. When a DNA trinucleotide repeat sequence is damaged, it may be repaired by processes such as homologous recombination, non-homologous end joining, mismatch repair or base excision repair. Each of these processes involves a DNA synthesis step in which strand slippage might occur leading to trinucleotide repeat expansion.
The number of trinucleotide repeats appears to predict the progression, severity, and age of onset of Huntington's disease and similar trinucleotide repeat disorders. Other human diseases in which triplet repeat expansion occurs are fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy and Friedreich's ataxia.
History
The first documentation of anticipation in genetic disorders was in the 1800s. However, in the eyes of geneticists this relationship was disregarded and attributed to ascertainment bias; because of this, it took almost 200 years for a link between onset of disease and trinucleotide repeats (TNRs) to be acknowledged.
The following findings served as support for the link between TNRs and disease onset; the detection of various repeat expansions within these diseases demonstrated the relationship.
In 1991, for fragile X syndrome, the fragile X mental retardation 1 (FMR-1) gene was found to contain a CGG expansion in its 5' untranslated region (UTR). In addition, a CAG expansion was located in X-linked spinal and bulbar muscular atrophy (SBMA) sequences. SBMA is the first "CAG/polyglutamine" disease, a subcategory of repeat disorders.
In 1992, for myotonic dystrophy type 1 (DM1), CTG expansion was found in the myotonic dystrophy protein kinase (DMPK) 3' UTR.
In 1993, for Huntington's disease (HD), a longer-than-usual CAG repeat was found in the exon 1 coding sequence.
Because of these discoveries, ideas involving anticipation in disease began to develop, and curiosity formed about how the causes could be related to TNRs. After the breakthroughs, the four mechanisms for TNRs were determined, and more types of repeats were identified as well. Repeat composition and location are used to determine the mechanism of a given expansion. Onwards from 1995, it was also possible to observe the formation of hairpins in triplet repeats, which consisted of repeating CG pairs and a mismatch.
During the decade after evidence that linked TNR to onset of disease was found, focus was placed on studying repeat length and dynamics on diseases, as well as investigating the mechanism behind parent-child disease inheritance. Research has shown that there is a clear inverse relationship between the length of the repeats in parents and the age of disease onset in children; therefore, the lengths of TNRs are used to predict age of disease onset as well as outcome in clinical diagnosis. In addition to this finding, another aspect of the diseases, the high variability of onset, was revealed. Although the onset of HD could be predicted by examining TNR length inheritance, the onset could vary up to fourfold depending on the patient, leading to the possibility of existence of age-modifying factors for disease onset; there were notable efforts in this search. Currently, CAG repeat length is considered the biggest onset age modifier for TNR diseases.
Detection of TNRs was made difficult by limited technology and methods early on, and years passed before the development of sufficient ways to measure the repeats. When PCR was first attempted in the detection of TNRs, multiple band artifacts were prevalent in the results, and this made recognition of TNRs troublesome; at the time, debate centered around whether disease was brought on by smaller amounts of short expansions or a small amount of long expansions. Since then, accurate methods have been established over the years. Together, the following clinically necessary protocols have 99% accuracy in measuring TNRs.
Small-pool polymerase chain reaction (SP-PCR) allows for recognition of repeat changes and originated from the growing need for a method providing more accurate measurement of TNRs. It has been useful for examining how TNRs vary between humans and mice in blood, sperm, and somatic cells.
Southern blots are used to measure CGG repeats because CG-rich regions limit polymerase movement in PCR.
Overall structure
These repetitive sequences lead to instability amongst the DNA strands after reaching a certain threshold number of repeats, which can result in DNA slippage during replication. The most common and well-known triplet repeats are CAG, GCG, CTG, CGG, and GAA. During DNA replication, the strand being synthesized can misalign with its template strand due to the dynamic nature and flexibility of these triplet repeats. This slippage allows for the strand to find a stable intermediate amongst itself through base pairing, forming a secondary structure other than a duplex.
Location
In terms of location, these triplet repeats can be found in both coding and non-coding regions. CAG and GCN repeats, which lead to polyglutamine and polyalanine tracts respectively, are normally found in the coding regions. At the 5' untranslated region, CGG and CAG repeats are found and responsible for fragile X syndrome and spinocerebellar ataxia 12. At the 3' untranslated region, CTG repeats are found, while GAA repeats are located in the intron region. Other disease-causing repeats, but not triplet repeats, have been located in the promoter region. Once the number of repeats exceeds normal levels, Triplet Repeat Expansions (TRE) become more likely and the number of triplet repeats can typically increase to around 100 in coding regions and up to thousands in non-coding regions. This difference is due to overexpression of glutamine and alanine, which is selected against due to cell toxicity.
Intermediates
Depending on the sequence of the repeat, at least three intermediates with different secondary structures are known to form. A CGG repeat will form a G-quadruplex due to Hoogsteen base pairing, while a GAA repeat forms a triplex due to negative supercoiling. CAG, CTG, and CGG repeats form a hairpin. After the hairpin forms, the primer realigns with the 3' end of the newly synthesized strand and continues the synthesis, leading to triplet repeat expansion. The structure of the hairpin is based on a stem and a loop that contains both Watson-Crick base pairs and mismatched pairs. In CTG and CAG repeats, the number of nucleotides present in the loop depends on if the number of triplet repeats is odd or even. An even number of repeats forms a tetraloop structure, while an odd number leads to the formation of a triloop.
Instability
Threshold
In trinucleotide repeat expansion there is a certain threshold, a maximum number of repeats that can occur before a sequence becomes unstable. Once this threshold is reached, the repeats begin to expand rapidly, producing longer and longer expansions in subsequent generations. Once a sequence reaches this minimal unstable allele size, normally around 30-40 repeats, disease and instability can arise; if the number of repeats in a sequence is below the threshold, it remains relatively stable. The molecular nature of the threshold is not yet understood, but researchers continue to investigate the possibility that it is related to the formation of secondary structures once enough repeats are present. Diseases associated with trinucleotide repeat expansions have been found to involve secondary structures such as hairpins, triplexes, and slipped-strand duplexes. These observations have led to the hypothesis that the threshold is determined by the number of repeats required to stabilize the formation of these unwanted secondary structures, because once these structures form, the number of mutations arising in the sequence increases, resulting in further trinucleotide expansion.
Parental influence
Research suggests that there is a direct, important correlation between the sex of the parent that transmits the mutation and the degree and phenotype of the disorder in the child. The degree of repeat expansion, and whether an expansion occurs at all, has been directly linked to the sex of the transmitting parent in both non-coding and coding trinucleotide repeat disorders. For example, research on the correlation between the Huntington's disease CAG repeat and parental transmission has found a strong relationship, with differences between maternal and paternal transmission. Maternal transmission has been observed to increase the repeat by only about 1 unit, while paternal transmission typically adds anywhere from 3 to 9 extra repeats. Paternal transmission is almost always responsible for large repeat transmission resulting in early onset of Huntington's disease, while maternal transmission results in affected individuals experiencing symptom onset mirroring that of their mother. While this transmission of a trinucleotide repeat expansion is regarded as a result of "meiotic instability", the degree to which meiosis plays a role in this process and its mechanism are not clear, and numerous other processes are predicted to play a role simultaneously.
Mechanisms
Unequal homologous exchange
One proposed but highly unlikely mechanism for trinucleotide expansion transmission involves meiotic or mitotic recombination. It has been suggested that during these processes homologous repeat misalignment, commonly known for causing alpha-globin locus deletions, could cause the meiotic instability of a trinucleotide repeat expansion. This process is unlikely to contribute substantially to the transmission and presence of trinucleotide repeat expansions because of differences in expansion mechanisms: trinucleotide repeat expansions typically favor expansion of the CAG region, whereas unequal homologous exchange would require expansion and contraction events to occur at the same time. In addition, numerous diseases that result from transmitted trinucleotide repeat expansions, such as fragile X syndrome, involve unstable trinucleotide repeats on the X chromosome that cannot be explained by meiotic recombination. Research has shown that although unequal homologous recombination is unlikely to be the sole cause of transmitted trinucleotide repeat expansions, it likely plays a minor role in the length of some expansions.
DNA replication
In many proposed models, DNA replication errors are predicted to be the main driver of trinucleotide repeat expansion (TRE) transmission. TREs have been shown to occur during DNA replication in both in vitro and in vivo studies, allowing long tracts of triplet repeats to assemble rapidly through different mechanisms that can result in either small-scale or large-scale expansions.
Small scale expansions
These expansions can occur through either strand slippage or flap ligation. Okazaki fragments are a key element of the proposed replication error. It is suggested that the small size of Okazaki fragments, typically between 150 and 200 nucleotides long, makes them more likely to fall off or "slip" off the lagging strand, which creates room for trinucleotide repeats to attach to the lagging-strand copy. In addition to the possibility of trinucleotide repeat changes arising from slippage of Okazaki fragments, the ability of CG-rich trinucleotide repeat sequences to form special hairpin, toroid, and triplex DNA structures contributes to this model of error during DNA replication. Hairpin structures can form because of the freedom of the lagging strand during DNA replication and are typically observed in extremely long trinucleotide repeat sequences. Research has found that hairpin formation depends on the orientation of the trinucleotide repeats within each CAG/CTG strand. Strands in which the duplex is formed by CTG repeats on the leading strand are observed to gain extra repeats, while those without CTG repeats on the leading strand undergo repeat deletions. These intermediates can pause the replication fork through their interaction with DNA polymerases during strand slippage. Contractions occur when the replication fork skips over the intermediate on the Okazaki fragment. Expansions occur when the fork reverses and restarts, forming a chicken-foot structure; this structure places the unstable intermediate on the nascent leading strand, leading to further expansion. Furthermore, this intermediate can evade mismatch repair because of its affinity for the MSH2-MSH3 complex, which stabilizes the hairpin instead of repairing it. In non-dividing cells, a process called flap ligation can be responsible for TRE: 8-oxoguanine DNA glycosylase removes a damaged guanine, creating a nick in the sequence; the coding strand then forms a flap due to displacement, which prevents removal by an endonuclease. When the repair process finishes by either mechanism, the length of the expansion is equivalent to the number of triplet repeats involved in forming the hairpin intermediate.
Large scale expansions
Two mechanisms have been proposed for large scale repeats: template switching and break-induced replication.
Template switching, a mechanism for large scale GAA repeats that can double the number of triplet repeats, has been proposed. GAA repeats expand when their repeat length is greater than the Okazaki fragment's length. These repeats are involved in the stalling of the replication fork as these repeats form a triplex when the 5' flap of TTC repeats fold back. Okazaki fragment synthesis continues when the template is switched to the nascent leading strand. The Okazaki fragment eventually ligates back to the 5' flap, which results in TRE.
A different mechanism, based on break-induced replication, has been proposed for large scale CAG repeats and can also occur in non-dividing cells. At first, this mechanism follows the same process as the small scale strand slippage mechanism until replication fork reversal. An endonuclease then cleaves the chicken-foot structure, which results in a one-ended double strand break. The CAG repeat of this broken daughter strand forms a hairpin and invades the CAG strand on the sister chromatid, which results in expansion of this repeat in a migrating D-loop DNA synthesis. This synthesis continues until it reaches the replication fork and is cleaved, which results in an expanded sister chromatid.
Disorders
Fragile X syndrome
Background
Fragile X syndrome is the second most common form of intellectual disability affecting 1 in 2,000-4,000 women and 1 in 4,000-8,000 men, women being twice as likely to inherit this disability due to their XX chromosomes. This disability arises from a mutation at the end of the X chromosome in the FMR1 gene (fragile X mental retardation gene) which produces a protein essential for brain development called FMRP. Individuals with fragile X syndrome experience a variety of symptoms at varying degrees that depend on gender and mutation degree such as attention deficit disorders, irritability, stimuli sensitivity, various anxiety disorders, depression, and/or aggressive behavior. Some treatments for these symptoms seen in individuals with Fragile X syndrome include SSRI's, antipsychotic medications, stimulants, folic acid, and mood stabilizers.
Genetic causation
Fragile X syndrome is caused by expansion of CGG repeats in the FMR1 gene, located near the end of the long arm of the X chromosome at band Xq27.3. Premutation carrier males have CGG repeat numbers ranging from roughly 53 to 200, while affected individuals have more than 200 repeats of this trinucleotide sequence. Carriers with repeats in the 53 to 200 range are said to have "premutation alleles"; as alleles within this range approach 200, the likelihood of expansion to a full mutation increases, and mRNA levels are elevated up to five-fold. Research has shown that individuals with premutation alleles in the range of 59-69 repeats have about a 30% risk of expansion to a full mutation, compared with a higher risk for those in the upper range of ≥ 90 repeats. Fragile X syndrome carriers (those within the premutation range) typically have unmethylated alleles, a normal phenotype, and normal levels of FMR1 mRNA and FMRP protein. Men with fragile X syndrome possess alleles in the full mutation range (>200 repeats), have FMRP protein levels much lower than normal, and show hypermethylation of the promoter region of the FMR1 gene. Some men with alleles in the full mutation range experience partial or no methylation, which results in only slightly abnormal phenotypes due to only slight down-regulation of FMR1 gene transcription. Unmethylated and partially methylated alleles in the full mutation range show increased and normal levels of FMR1 mRNA, respectively, compared with normal controls. In contrast, when unmethylated alleles reach a repeat number of approximately 300, transcription levels remain relatively unaffected; the transcription levels of repeats greater than 300 are currently unknown.
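The repeat ranges discussed above can be summarised as a simple classifier; this is only an illustrative sketch whose cut-offs are taken from the figures quoted in this section (and which vary somewhat between sources).

```python
def classify_cgg_repeats(n_repeats: int) -> str:
    """Rough classification of an FMR1 CGG repeat count using the ranges quoted above."""
    if n_repeats > 200:
        return "full mutation (fragile X syndrome range)"
    if 53 <= n_repeats <= 200:
        if n_repeats >= 90:
            return "premutation, higher risk of expanding to a full mutation"
        if 59 <= n_repeats <= 69:
            return "premutation, roughly 30% risk of expanding to a full mutation"
        return "premutation carrier"
    return "below the premutation range described in this section"

for repeats in (30, 62, 95, 250):
    print(repeats, "->", classify_cgg_repeats(repeats))
```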
Promoter silencing
The CGG trinucleotide repeat expansion is present within the FMR1 mRNA, and its interactions are responsible for promoter silencing. The CGG expansion resides within the 5' untranslated region of the mRNA, which hybridizes to the complementary CGG-repeat portion of the FMR1 gene itself. The binding of the mRNA to this genomic repeat results in silencing of the promoter. Beyond this point, the mechanism of promoter silencing is unknown and is still being investigated.
Huntington's disease
Background
Huntington's disease (HD) is a dominantly inherited neurological disorder, with large repeat expansions usually transmitted paternally, that affects 1 in 15,000-20,000 people in many Western populations. HD involves the basal ganglia and the cerebral cortex and manifests as symptoms such as cognitive, motor, and/or psychiatric impairment.
Causation
This autosomal dominant disorder results from the expansion of a CAG trinucleotide repeat in exon 1 of the IT15 gene. The majority of juvenile HD cases stem from transmission of a high CAG repeat number arising during paternal gametogenesis. While an individual without HD has a number of CAG repeats in the range of about 9 to 37, an individual with HD typically has between 37 and 102 repeats. Research has shown an inverse relationship between the number of trinucleotide repeats and age of onset; however, no relationship between repeat number and the rate of HD progression or the affected individual's body weight has been observed. Severity of functional decline has been found to be similar across a wide range of individuals with varying numbers of CAG repeats and differing ages of onset; therefore, it is suggested that the rate of disease progression is also linked to factors other than the CAG repeat, such as environmental and/or other genetic factors.
Myotonic dystrophy
Background
Myotonic dystrophy is a rare muscular disorder in which numerous bodily systems are affected. There are four forms of myotonic dystrophy: a mild, late-onset phenotype; onset in adolescence/young adulthood; onset in early childhood featuring only learning disabilities; and a congenital form. Individuals with myotonic dystrophy experience severe, debilitating physical symptoms such as muscle weakness, heart rhythm problems, and difficulty breathing; treatment aims to maximize patients' mobility and everyday activity and to relieve some of the burden on their caretakers. The muscles of individuals with myotonic dystrophy feature an increased proportion of type 1 fibers as well as increased deterioration of these type 1 fibers. In addition to these physical ailments, individuals with myotonic dystrophy have been found to experience varying internalized disorders such as anxiety and mood disorders as well as cognitive delays, attention deficit disorders, autism spectrum disorders, lower IQs, and visual-spatial difficulties. Research has shown a direct correlation between expansion repeat number, IQ, and an individual's degree of visual-spatial impairment.
Causation
Myotonic dystrophy results from a (CTG)n trinucleotide repeat expansion located in the 3' untranslated region of a serine/threonine kinase coding transcript. The length of this (CTG)n repeat, typically measured in leukocytes, and the age of the individual have been found to be directly related to disease progression and type 1 muscle fiber predominance. Because age and (CTG)n length show only small correlation coefficients with disease progression, research suggests that various other factors also play a role, such as changes in signal transduction pathways, somatic expansion, and cell heterogeneity in (CTG)n repeat length.
Friedreich's ataxia
Background
Friedreich's ataxia is a progressive neurological disorder. Individuals experience gait and speech disturbances due to degeneration of the spinal cord and peripheral nerves. Other symptoms may include cardiac complications and diabetes. Typical age at symptom onset is 5–15, with symptoms progressively getting worse over time.
Causation
Friedreich's ataxia is an autosomal recessive disorder caused by a GAA expansion in the intron of the FXN gene. This gene codes for the protein frataxin, a mitochondrial protein involved in iron homeostasis. The mutation impairs transcription of the protein, so affected cells produce only 5-10% of the frataxin of healthy cells.
This leads to iron accumulation in the mitochondria, and makes cells vulnerable to oxidative damage.
Research shows that GAA repeat length is correlated with disease severity.
Point of occurrence
Fragile X syndrome
The precise timing of TNR occurrence varies by disease. Although the exact timing for FXS is not certain, research has suggested that the earliest CGG expansions for this disorder are seen in primary oocytes. It has been proposed that the repeat expansion happens in the maternal oocyte during meiotic cell cycle arrest in prophase I, however the mechanism remains nebulous. Maternally inherited premutation alleles may expand into full mutation alleles (greater than 200 repeats), resulting in decreased production of the FMR-1 gene product FMRP and causing fragile X mental retardation syndrome. For females, the large repeat expansions are based upon repair, while for males, the shortening of long repeat expansions is due to replication; therefore, their sperm lack these repeats, and paternal inheritance of long repeat expansions does not occur. Between weeks 13 and 17 of human fetal development, the large CGG repeats are shortened.
Myotonic dystrophy type 1
Many similarities can be drawn between DM1 and FXS involving aspects of mutation. Full maternal inheritance is present within DM1, repeat expansion length is linked to maternal age and the earliest instance of expansions is seen in the two-cell stage of preimplantation embryos. There is a positive correlation between male inheritance and allele length. A study of mice found the exact timing of CTG repeat expansion to be during development of spermatogonia. In DM1 and FXS, it is hypothesized that expansion of TNRs occurs by means of multiple missteps by DNA polymerase in replication. An inability of DNA polymerase to properly move across the TNR may cause transactivation of translesion polymerases (TLPs), which will attempt to complete the replication process and overcome the block. It is understood that as the DNA polymerase fails in this way, the resulting single-stranded loops left behind in the template strand undergo deletion, affecting TNR length. This process leaves the potential for TNR expansions to occur.
Huntington's disease
In Huntington's disease (HD), the exact timing has not been determined; however there are a number of proposed points during germ cell development at which expansion is thought to occur.
In four HD samples examined, CAG repeat expansion lengths were more variable in mature sperm than in sperm still developing in the testes, leading to the conclusion that repeat expansions were more likely to occur later in sperm development.
Repeat expansions have been observed to occur before the completion of meiosis in humans, specifically the first division.
In germ cells undergoing differentiation, evidence suggests it is possible for expansions to generate after the completion of meiosis as well, as larger HD mutations have been found in postmeiotic cells.
Spinocerebellar ataxia type 1
Spinocerebellar ataxia type 1 (SCA1) CAG repeats are most often passed down through paternal inheritance and similarities can be seen with HD. The tract size for offspring of mothers with these repeats does not display any degree of change. Because TNR instability is not present in young female mice, and female SCA1 patient age and instability are directly related, expansions must occur in inactive oocytes. A trend seems to have emerged of larger expansions occurring in cells that are inactive in division and smaller expansions occurring in actively dividing cells.
Therapeutics
Trinucleotide repeat expansion is a DNA mutation responsible for the disorders classified as trinucleotide repeat disorders. These disorders are progressive and affect sequences throughout the human genome, frequently within the nervous system. So far the available therapeutics have at best modest results, and emphasis remains on research into and study of genomic manipulation. The most advanced available therapies aim to target mutated gene expression by using antisense oligonucleotides (ASOs) or RNA interference (RNAi) to target the messenger RNA (mRNA). While interventions for these diseases are a priority, RNAi and ASO approaches have only reached clinical trial stages.
RNA interference (RNAi)
RNA interference is a naturally occurring mechanism that can be used to silence the expression of genes. It can be leveraged with synthetic small interfering RNAs (siRNAs), which are used to change the action and duration of the natural RNAi process. Synthetic short hairpin RNAs (shRNAs) can also be used to modulate the action and predictability of the RNAi process.
RNAi begins with the RNase Dicer cleaving long double-stranded RNA substrates into small fragments 21–25 nucleotides in length. This process creates the siRNA duplexes that are loaded into the RNA-induced silencing complex (RISC). The RISC contains the antisense (guide) strand, which binds to complementary mRNA; once the mRNA is bound, it is cleaved between bases 10 and 11 relative to the 5' end by Argonaute 2 (Ago2), a protein within the RISC. Before the mRNA is cleaved, the passenger strand of the siRNA duplex is also cleaved by Ago2, leaving a single-stranded guide within the RISC that locates the target mRNA and gives the process its specificity. One problem that may occur is that the guide strand within the RISC can become unstable when cleaved and begin to unwind, resulting in binding to an unintended mRNA. Targets with perfect complementarity to the guide strand are readily recognized and cleaved within the RISC; partial complementarity between the guide strand and a target mRNA can instead cause translational repression or destabilization at the target sites.
Antisense oligonucleotides
Antisense oligonucleotides (ASOs) are short single-stranded oligodeoxynucleotides, approximately 15–20 nucleotides in length, that can alter the expression of a protein. The goal of using antisense oligonucleotides is to decrease the expression of a specific target protein, usually through RNase H-mediated degradation of the target mRNA, as well as through inhibition of 5' cap formation or alteration of the splicing process. In their native state ASOs are rapidly digested, which requires chemical modification of the backbone (for example, phosphorothioate linkages) so that the ASO can resist degradation and cross cell membranes.
Despite the obvious benefits that antisense therapeutics could bring with their ability to silence genes underlying neural disease, there are many issues with the development of this therapy. One problem is that ASOs are highly susceptible to degradation by nucleases within the body, which means extensive chemical modification is needed for these synthetic nucleic acids to withstand nuclease attack. Native ASOs have a very short half-life and are rapidly filtered out of the body, especially by the kidney, and their high negative charge makes crossing the vascular system or cell membranes very difficult when trying to reach the targeted DNA or mRNA strands. With all these barriers, the chemical modifications themselves can have harmful effects when introduced into the body, so that addressing each problem tends to create additional side effects.
The synthetic oligonucleotides are negatively charged molecules that are chemically modified in order to regulate gene expression within the cell. Issues that arise from this process include the toxicity and variability that chemical modification can introduce. The goal of the ASO is to modulate gene expression through proteins, which can be done in two ways: (a) RNase H-dependent oligonucleotides, which induce the degradation of mRNA, and (b) steric-blocker oligonucleotides, which physically prevent or inhibit the progression of splicing or the translational machinery. The majority of investigated ASOs utilize the first mechanism, in which the RNase H enzyme hydrolyzes the RNA strand; when this enzyme is recruited by the oligonucleotide, RNA expression is efficiently reduced by 80–95%, and expression can be inhibited by targeting any region of the mRNA.
References
Genetics | Trinucleotide repeat expansion | [
"Biology"
] | 6,466 | [
"Genetics"
] |
13,666,834 | https://en.wikipedia.org/wiki/Hybrid%20Insect%20Micro-Electro-Mechanical%20Systems | Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) is a project of DARPA, a unit of the United States Department of Defense. Created in 2006, the unit's goal is the creation of tightly coupled machine-insect interfaces by placing micro-mechanical systems inside the insects during the early stages of metamorphosis. After implantation, the "insect cyborgs" could be controlled by sending electrical impulses to their muscles. The primary application is surveillance. The project was created with the ultimate goal of delivering an insect within 5 meters of a target located 100 meters away from its starting point. In 2008, a team from the University of Michigan demonstrated a cyborg unicorn beetle at an academic conference in Tucson, Arizona. The beetle was able to take off and land, turn left or right, and demonstrate other flight behaviors. Researchers at Cornell University demonstrated the successful implantation of electronic probes into tobacco hornworms in the pupal stage.
References
Microtechnology
DARPA
Research projects
Surveillance
Cyborgs
Micro air vehicles | Hybrid Insect Micro-Electro-Mechanical Systems | [
"Materials_science",
"Engineering",
"Biology"
] | 212 | [
"Materials science",
"Microtechnology",
"Cyborgs"
] |
13,671,479 | https://en.wikipedia.org/wiki/Viaspan | Viaspan was the trademark under which the University of Wisconsin cold storage solution (also known as University of Wisconsin solution or UW solution) was sold. Currently, UW solution is sold under the Belzer UW trademark and others like Bel-Gen or StoreProtect. UW solution was the first solution designed for use in organ transplantation, and became the first intracellular-like preservation medium. Developed in the late 1980s by Folkert Belzer and James Southard for pancreas preservation, the solution soon displaced EuroCollins solution as the preferred medium for cold storage of livers and kidneys, as well as pancreas. The solution has also been used for hearts and other organs. University of Wisconsin cold storage solution remains what is often called the gold standard for organ preservation, despite the development of other solutions that are in some respects superior.
Development
The guiding principles for the development of UW Solution were:
Osmotic concentration is maintained by the use of metabolically inert substances like lactobionate and raffinose, rather than glucose
Hydroxyethyl starch (HES) is used to prevent edema
Substances are added to scavenge free radicals, along with steroids and insulin.
Composition
Potassium lactobionate: 100 mM
KH2PO4: 25 mM
MgSO4: 5 mM
Raffinose: 30 mM
Adenosine: 5 mM
Glutathione: 3 mM
Allopurinol: 1 mM
Hydroxyethyl starch: 50 g/L
See also
HTK Solution (Histidine-tryptophan-ketoglutarate)
Biostasis
Organ transplant
References
Cryobiology
Transplantation medicine | Viaspan | [
"Physics",
"Chemistry",
"Biology"
] | 346 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
13,674,069 | https://en.wikipedia.org/wiki/Hippocratic%20Oath%20for%20scientists | A Hippocratic Oath for scientists is an oath similar to the Hippocratic Oath for medical professionals, adapted for scientists. Multiple varieties of such an oath have been proposed. Joseph Rotblat has suggested that an oath would help make new scientists aware of their social and moral responsibilities; opponents, however, have pointed to the "very serious risks for the scientific community" posed by an oath, particularly the possibility that it might be used to shut down certain avenues of research, such as stem cells.
Development
The idea of an oath has been proposed by various prominent members of the scientific community, including Karl Popper, Joseph Rotblat and John Sulston. Research by the American Association for the Advancement of Science (AAAS) identified sixteen different oaths for scientists or engineers proposed during the 20th century, most after 1970.
Popper, Rotblat and Sulston were all primarily concerned with the ethical implications of scientific advances, in particular for Popper and Rotblat the development of the atomic bomb, and believed that scientists, like medics, should have an oath that compelled them to "first do no harm". Popper said: "Formerly the pure scientist or the pure scholar had only one responsibility beyond those which everybody has; that is, to search for the truth. … This happy situation belongs to the past." Rotblat similarly stated: "Scientists can no longer claim that their work has nothing to do with the welfare of the individual or with state policies." He also attacked the attitude that the only obligation of a scientist is to make their results known, the use made of these results being the public's business, saying: "This amoral attitude is in my opinion actually immoral, because it eschews personal responsibility for the likely consequences of one's actions." Sulston was more concerned with rising public distrust of scientists and conflicts of interest brought about by the exploitation of research for profit. The stated intention of his oath was "both to require qualified scientists to cause no harm and to be wholly truthful in their public pronouncements, and also to protect them from discrimination by employers who might prefer them to be economical with the truth."
The concept of an oath, rather than a more detailed code of conduct, has been opposed by Ray Spier, Professor of Science and Engineering Ethics at the University of Surrey, UK, who stated that "Oaths are not the way ahead". Other objections raised at a AAAS meeting on the topic in 2000 included that an oath would simply make scientists look good without changing behaviour, that an oath could be used to suppress research, that some scientists would refuse to swear any oath as a matter of principle, that an oath would be ineffective, that creation of knowledge is separate from how it is used, and that the scientific community could never agree on the content of an oath. The meeting concluded that: "There was a broadly shared consensus that a tolerant (but not patronizing) attitude should be taken towards those developing oaths, but that an oath posed very serious risks for the scientific community which could not be ignored." Nobel laureate Jean-Marie Lehn has said "The first aim of scientific research is to increase knowledge for understanding. Knowledge is then available to mankind for use, namely to progress as well as to help prevent disease and suffering. Any knowledge can be misused. I do not see the need for an oath".
Some of the propositions are outlined below.
Karl Popper
In 1968, the philosopher Karl Popper gave a talk on "The Moral Responsibility of the Scientist" at the International Congress on Philosophy in Vienna, in which he suggested "an undertaking analogous to the Hippocratic oath". In his analysis he noted that the original oath had three sections: the apprentice's obligation to their teacher; the obligation to carry on the high tradition of their art, preserve its high standards, and pass these standards on to their own students; and the obligation to help the suffering and preserve their confidentiality. He also noted that it was an apprentice's oath, as distinct from a graduation oath. Based on this, he proposed a three-section oath for students, rearranged from the Hippocratic oath to give professional responsibility to further the growth of knowledge; the student, who owes respect to others engaged in science and loyalty to teachers; and the overriding loyalty owed to humanity as a whole.
Joseph Rotblat
The idea of a Hippocratic Oath for scientists was raised again by Joseph Rotblat in his acceptance speech for the Nobel Peace Prize in 1995, who later expanded on the idea, endorsing the formulation of the Student Pugwash Group:
John Sulston
In 2001, in the scientific journal Biochemical Journal, Nobel laureate John Sulston proposed that "For individual scientists, it may be helpful to have a clear professional code of conduct – a Hippocratic oath as it were". This oath would enable scientists to declare their intention "to cause no harm and to be wholly truthful in their public pronouncements", and would also serve to protect them from unethical employers. The concept of an oath was opposed by Ray Spier of the University of Surrey, an expert on scientific ethics who was preparing a 20-point code of conduct at the time.
David King
In 2007, the UK government's chief scientific advisor, David King, presented a "Universal Ethical Code for Scientists" at the British Association's Festival of Science in York. Despite being a code rather than an oath, this was widely reported as a Hippocratic oath for scientists. In contrast to the earlier oaths, King's code was not only intended to meet the public demand that "scientific developments are ethical and serve the wider public good" but also to address public confidence in the integrity of science, which had been shaken by the disgrace of cloning pioneer Hwang Woo-suk and by other research-fraud scandals.
Work on the code started in 2005, following a meeting of G8 science ministers and advisors. It was supported by the Royal Society in its response to a public consultation on the draft code in 2006, where they said it would help whistleblowers and the promotion of science in schools.
The code has seven principles, divided into three sections:
See also
Code of conduct
Code of ethics
Universal code (ethics)
References
External links
Transcript of a Conversation with Sir David King, 2007;
Institute of Medical Science, Toronto, 2008;
Ethics of science and technology
Oaths | Hippocratic Oath for scientists | [
"Technology"
] | 1,316 | [
"Ethics of science and technology"
] |
13,677,392 | https://en.wikipedia.org/wiki/Solar%20Energy%20Materials%20and%20Solar%20Cells | Solar Energy Materials and Solar Cells is a scientific journal published by Elsevier covering research related to solar energy materials and solar cells. According to the Journal Citation Reports, Solar Energy Materials and Solar Cells has a 2020 impact factor of 7.267.
Controversies
A paper titled "Ageing effects of perovskite solar cells under different environmental factors and electrical load conditions", published in the journal in 2018, closely corresponded to a paper previously published in the journal Nature Energy as "Systematic investigation of the impact of operation conditions on the degradation behaviour of perovskite solar cells".
This led to an investigation of plagiarism.
See also
List of periodicals published by Elsevier
References
External links
Elsevier academic journals
Energy and fuel journals
English-language journals
Materials science journals
Monthly journals
Academic journals established in 1968
Solar energy | Solar Energy Materials and Solar Cells | [
"Materials_science",
"Engineering",
"Environmental_science"
] | 161 | [
"Environmental science journals",
"Energy and fuel journals",
"Materials science journals",
"Materials science"
] |
1,057,601 | https://en.wikipedia.org/wiki/Quantum%20field%20theory%20in%20curved%20spacetime | In theoretical physics, quantum field theory in curved spacetime (QFTCS) is an extension of quantum field theory from Minkowski spacetime to a general curved spacetime. This theory uses a semi-classical approach; it treats spacetime as a fixed, classical background, while giving a quantum-mechanical description of the matter and energy propagating through that spacetime. A general prediction of this theory is that particles can be created by time-dependent gravitational fields (multigraviton pair production), or by time-independent gravitational fields that contain horizons. The most famous example of the latter is the phenomenon of Hawking radiation emitted by black holes.
Overview
Ordinary quantum field theories, which form the basis of the Standard Model, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.
For non-zero cosmological constants, quantum fields on curved spacetimes lose their interpretation as asymptotic particles. Only in certain situations, such as in asymptotically flat spacetimes (zero cosmological curvature), can the notion of incoming and outgoing particle be recovered, thus enabling one to define an S-matrix. Even then, as in flat spacetime, the asymptotic particle interpretation depends on the observer (i.e., different observers may measure different numbers of asymptotic particles on a given spacetime).
Another observation is that unless the background metric tensor has a global timelike Killing vector, there is no way to define a vacuum or ground state canonically. The concept of a vacuum is not invariant under diffeomorphisms. This is because a mode decomposition of a field into positive and negative frequency modes is not invariant under diffeomorphisms. If t′(t) is a diffeomorphism, in general, the Fourier transform of exp[ik t′(t)] will contain negative frequencies even if k > 0. Creation operators correspond to positive frequencies, while annihilation operators correspond to negative frequencies. This is why a state which looks like a vacuum to one observer cannot look like a vacuum state to another observer; it could even appear as a heat bath under suitable hypotheses.
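A schematic way to make this explicit (an illustrative sketch, not a statement taken from any particular reference) is to expand a single mode, rewritten in terms of the reparametrized time, in plane waves of the original time coordinate:

\[
e^{ik\,t'(t)} \;=\; \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\,\tilde{g}(\omega)\,e^{-i\omega t},
\qquad
\tilde{g}(\omega) \;=\; \int_{-\infty}^{\infty} dt\; e^{i\omega t}\, e^{ik\,t'(t)} .
\]

For a generic (non-linear) t′(t), \tilde{g}(\omega) is non-zero for \omega < 0 even when k > 0, so annihilation operators defined with respect to one time coordinate are combinations (a Bogoliubov transformation) of the annihilation and creation operators defined with respect to the other, and the corresponding vacuum states differ.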
Since the end of the 1980s, the local quantum field theory approach due to Rudolf Haag and Daniel Kastler has been implemented in order to include an algebraic version of quantum field theory in curved spacetime. Indeed, the viewpoint of local quantum physics is suitable to generalize the renormalization procedure to the theory of quantum fields developed on curved backgrounds. Several rigorous results concerning QFT in the presence of a black hole have been obtained. In particular the algebraic approach allows one to deal with the problems mentioned above arising from the absence of a preferred reference vacuum state, the absence of a natural notion of particle and the appearance of unitarily inequivalent representations of the algebra of observables.
Applications
Using perturbation theory in quantum field theory in curved spacetime geometry is known as the semiclassical approach to quantum gravity. This approach studies the interaction of quantum fields in a fixed classical spacetime and among other things predicts the creation of particles by time-varying spacetimes and Hawking radiation. The latter can be understood as a manifestation of the Unruh effect where an accelerating observer observes black body radiation. Other predictions of quantum fields in curved spaces include, for example, the radiation emitted by a particle moving along a geodesic and the interaction of Hawking radiation with particles outside black holes.
This formalism is also used to predict the primordial density perturbation spectrum arising in different models of cosmic inflation. These predictions are calculated using the Bunch–Davies vacuum or modifications thereto.
Approximation to quantum gravity
The theory of quantum field theory in curved spacetime may be considered as an intermediate step towards quantum gravity. QFT in curved spacetime is expected to be a viable approximation to the theory of quantum gravity when spacetime curvature is not significant on the Planck scale. However, the fact that the true theory of quantum gravity remains unknown means that the precise criteria for when QFT on curved spacetime is a good approximation are also unknown.
Gravity is not renormalizable in QFT, so merely formulating QFT in curved spacetime is not a true theory of quantum gravity.
See also
General relativity
History of quantum field theory
Local quantum field theory
Statistical field theory
Topological quantum field theory
Quantum geometry
Quantum spacetime
References
Further reading
External links
Summary Chart of Intro Steps to Quantum Fields in Curved Spacetime A two-page chart outline of the basic principles governing the behavior of quantum fields in general relativity.
Quantum field theory
Quantum gravity | Quantum field theory in curved spacetime | [
"Physics"
] | 1,031 | [
"Quantum field theory",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Physics beyond the Standard Model"
] |
1,058,299 | https://en.wikipedia.org/wiki/Particle%20image%20velocimetry | Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field) of the flow being studied.
Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields, while the other techniques measure the velocity at a point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image, but not with certainty to track it between images. When the particle concentration is so low that it is possible to follow an individual particle it is called particle tracking velocimetry, while laser speckle velocimetry is used for cases where the particle concentration is so high that it is difficult to observe individual particles in an image.
Typical PIV apparatus consists of a camera (normally a digital camera with a charge-coupled device (CCD) chip in modern systems), a strobe or laser with an optical arrangement to limit the physical region illuminated (normally a cylindrical lens to convert a light beam to a line), a synchronizer to act as an external trigger for control of the camera and laser, the seeding particles and the fluid under investigation. A fiber-optic cable or liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images.
History
Particle image velocimetry (PIV) is a non-intrusive optical flow measurement technique used to study fluid flow patterns and velocities. PIV has found widespread applications in various fields of science and engineering, including aerodynamics, combustion, oceanography, and biofluids. The development of PIV can be traced back to the early 20th century when researchers started exploring different methods to visualize and measure fluid flow.
The early days of PIV can be credited to the pioneering work of Ludwig Prandtl, a German physicist and engineer, who is often regarded as the father of modern aerodynamics. In the 1920s, Prandtl and his colleagues used shadowgraph and schlieren techniques to visualize and measure flow patterns in wind tunnels. These methods relied on the refractive index differences between the fluid regions of interest and the surrounding medium to generate contrast in the images. However, these methods were limited to qualitative observations and did not provide quantitative velocity measurements.
The early PIV setups were relatively simple and used photographic film as the image recording medium. A laser was used to illuminate particles, such as oil droplets or smoke, added to the flow, and the resulting particle motion was captured on film. The films were then developed and analyzed to obtain flow velocity information. These early PIV systems had limited spatial resolution and were labor-intensive, but they provided valuable insights into fluid flow behavior.
The advent of lasers in the 1960s revolutionized the field of flow visualization and measurement. Lasers provided a coherent and monochromatic light source that could be easily focused and directed, making them ideal for optical flow diagnostics. In the late 1960s and early 1970s, researchers such as Arthur L. Lavoie, Hervé L. J. H. Scohier, and Adrian Fouriaux independently proposed the concept of particle image velocimetry (PIV). PIV was initially used for studying air flows and measuring wind velocities, but its applications soon extended to other areas of fluid dynamics.
In the 1980s, the development of charge-coupled devices (CCDs) and digital image processing techniques revolutionized PIV. CCD cameras replaced photographic film as the image recording medium, providing higher spatial resolution, faster data acquisition, and real-time processing capabilities. Digital image processing techniques allowed for accurate and automated analysis of the PIV images, greatly reducing the time and effort required for data analysis.
The advent of digital imaging and computer processing capabilities in the 1980s and 1990s revolutionized PIV, leading to the development of advanced PIV techniques, such as multi-frame PIV, stereo-PIV, and time-resolved PIV. These techniques allowed for higher accuracy, higher spatial and temporal resolution, and three-dimensional measurements, expanding the capabilities of PIV and enabling its application in more complex flow systems.
In the following decades, PIV continued to evolve and advance in several key areas. One significant advancement was the use of dual or multiple exposures in PIV, which allowed for the measurement of both instantaneous and time-averaged velocity fields. Dual-exposure PIV (often referred to as "stereo PIV" or "stereo-PIV") uses two cameras to capture two consecutive images with a known time delay, allowing for the measurement of three-component velocity vectors in a plane. This provided a more complete picture of the flow field and enabled the study of complex flows, such as turbulence and vortices.
In the 2000s and beyond, PIV continued to evolve with the development of high-power lasers, high-speed cameras, and advanced image analysis algorithms. These advancements have enabled PIV to be used in extreme conditions, such as high-speed flows, combustion systems, and microscale flows, opening up new frontiers for PIV research. PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, and has been used in emerging fields, such as microscale and nanoscale flows, granular flows, and additive manufacturing.
The advancement of PIV has been driven by the development of new laser sources, cameras, and image analysis techniques. Advances in laser technology have led to the use of high-power lasers, such as Nd:YAG lasers and diode lasers, which provide increased illumination intensity and allow for measurements in more challenging environments, such as high-speed flows and combustion systems. High-speed cameras with improved sensitivity and frame rates have also been developed, enabling the capture of transient flow phenomena with high temporal resolution. Furthermore, advanced image analysis techniques, such as correlation-based algorithms, phase-based methods, and machine learning algorithms, have been developed to enhance the accuracy and efficiency of PIV measurements.
Another major advancement in PIV was the development of digital correlation algorithms for image analysis. These algorithms allowed for more accurate and efficient processing of PIV images, enabling higher spatial resolution and faster data acquisition rates. Various correlation algorithms, such as cross-correlation, Fourier-transform-based correlation, and adaptive correlation, were developed and widely used in PIV research.
PIV has also benefited from the development of computational fluid dynamics (CFD) simulations, which have become powerful tools for predicting and analyzing fluid flow behavior. PIV data can be used to validate and calibrate CFD simulations, and in turn, CFD simulations can provide insights into the interpretation and analysis of PIV data. The combination of experimental PIV measurements and numerical simulations has enabled researchers to gain a deeper understanding of fluid flow phenomena and has led to new discoveries and advancements in various scientific and engineering fields.
In addition to the technical advancements, PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, to provide more comprehensive and multi-parameter flow measurements. For example, combining PIV with thermographic phosphors or laser-induced fluorescence allows for simultaneous measurement of velocity and temperature or concentration fields, providing valuable data for studying heat transfer, mixing, and chemical reactions in fluid flows.
Applications
The historical development of PIV has been driven by the need for accurate and non-intrusive flow measurements in various fields of science and engineering. The early years of PIV were marked by the development of basic PIV techniques, such as two-frame PIV, and the application of PIV in fundamental fluid dynamics research, primarily in academic settings. As PIV gained popularity, researchers started using it in more practical applications, such as aerodynamics, combustion, and oceanography.
As PIV continues to advance and evolve, it is expected to find further applications in a wide range of fields, from fundamental research in fluid dynamics to practical applications in engineering, environmental science, and medicine. The continued development of PIV techniques, including advancements in lasers, cameras, image analysis algorithms, and integration with other measurement techniques, will further enhance its capabilities and broaden its applications.
In aerodynamics, PIV has been used to study the flow over aircraft wings, rotor blades, and other aerodynamic surfaces, providing insights into the flow behavior and aerodynamic performance of these systems.
As PIV gained popularity, it found applications in a wide range of fields beyond aerodynamics, including combustion, oceanography, biofluids, and microscale flows. In combustion research, PIV has been used to study the details of combustion processes, such as flame propagation, ignition, and fuel spray dynamics, providing valuable insights into the complex interactions between fuel and air in combustion systems. In oceanography, PIV has been used to study the motion of water currents, waves, and turbulence, aiding in the understanding of ocean circulation patterns and coastal erosion. In biofluids research, PIV has been applied to study blood flow in arteries and veins, respiratory flow, and the motion of cilia and flagella in microorganisms, providing important information for understanding physiological processes and disease mechanisms.
PIV has also been used in new and emerging fields, such as microscale and nanoscale flows, granular flows, and multiphase flows. Micro-PIV and nano-PIV have been used to study flows in microchannels, nanopores, and biological systems at the microscale and nanoscale, providing insights into the unique behaviors of fluids at these length scales. PIV has been applied to study the motion of particles in granular flows, such as avalanches and landslides, and to investigate multiphase flows, such as bubbly flows and oil-water flows, which are important in environmental and industrial processes. In microscale flows, conventional measurement techniques are challenging to apply due to the small length scales involved. Micro-PIV has been used to study flows in microfluidic devices, such as lab-on-a-chip systems, and to investigate phenomena such as droplet formation, mixing, and cell motion, with applications in drug delivery, biomedical diagnostics, and microscale engineering.
PIV has also found applications in advanced manufacturing processes, such as additive manufacturing, where understanding and optimizing fluid flow behavior is critical for achieving high-quality and high-precision products. PIV has been used to study the flow dynamics of gases, liquids, and powders in additive manufacturing processes, providing insights into the process parameters that affect the quality and properties of the manufactured products.
PIV has also been used in environmental science to study the dispersion of pollutants in air and water, sediment transport in rivers and coastal areas, and the behavior of pollutants in natural and engineered systems. In energy research, PIV has been used to study the flow behavior in wind turbines, hydroelectric power plants, and combustion processes in engines and turbines, aiding in the development of more efficient and environmentally friendly energy systems.
Equipment and apparatus
Seeding particles
The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well. Otherwise they will not follow the flow satisfactorily enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used, and are spherical (these particles are called microspheres). While the actual particle choice is dependent on the nature of the fluid, generally for macro PIV investigations they are glass beads, polystyrene, polyethylene, aluminum flakes or oil droplets (if the fluid under investigation is a gas). Refractive index for the seeding particles should be different from the fluid which they are seeding, so that the laser sheet incident on the fluid flow will reflect off of the particles and be scattered towards the camera.
The particles are typically of a diameter in the order of 10 to 100 micrometers. As for sizing, the particles should be small enough so that response time of the particles to the motion of the fluid is reasonably short to accurately follow the flow, yet large enough to scatter a significant quantity of the incident laser light. For some experiments involving combustion, seeding particle size may be smaller, in the order of 1 micrometer, to avoid the quenching effect that the inert particles may have on flames. Due to the small size of the particles, the particles' motion is dominated by Stokes' drag and settling or rising effects. In a model where particles are modeled as spherical (microspheres) at a very low Reynolds number, the ability of the particles to follow the fluid's flow is inversely proportional to the difference in density between the particles and the fluid, and also inversely proportional to the square of their diameter. The scattered light from the particles is dominated by Mie scattering and so is also proportional to the square of the particles' diameters. Thus the particle size needs to be balanced to scatter enough light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow.
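To make the particle-selection criteria above concrete, the following minimal Python sketch evaluates the standard Stokes-drag estimates of the particle response time and settling velocity. All numerical values (particle and fluid properties, flow time scale) are illustrative assumptions, not values taken from this article.

def stokes_response_time(rho_p, d_p, mu):
    """Particle response time tau_p = rho_p * d_p**2 / (18 * mu), in seconds."""
    return rho_p * d_p**2 / (18.0 * mu)

def settling_velocity(rho_p, rho_f, d_p, mu, g=9.81):
    """Stokes settling (or rising) velocity of a small sphere, in m/s."""
    return (rho_p - rho_f) * g * d_p**2 / (18.0 * mu)

# Illustrative case: ~1 micrometre oil droplet seeded in air.
rho_p, rho_f = 900.0, 1.2        # particle and fluid density, kg/m^3 (assumed)
d_p, mu = 1e-6, 1.8e-5           # particle diameter (m) and air viscosity (Pa s)
tau_p = stokes_response_time(rho_p, d_p, mu)
tau_flow = 1e-3                  # assumed characteristic flow time scale, s
print(f"response time {tau_p:.2e} s, Stokes number {tau_p / tau_flow:.2e}")
print(f"settling velocity {settling_velocity(rho_p, rho_f, d_p, mu):.2e} m/s")

A Stokes number well below one, as in this illustrative case, indicates that the particle should follow the flow faithfully; the same quadratic dependence on diameter is why the text describes particle sizing as a compromise between flow fidelity and scattered light.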
The seeding mechanism needs to also be designed so as to seed the flow to a sufficient degree without overly disturbing the flow.
Camera
To perform PIV analysis on the flow, two exposures of laser light are required upon the camera from the flow. Originally, with the inability of cameras to capture multiple frames at high speeds, both exposures were captured on the same frame and this single frame was used to determine the flow. A process called autocorrelation was used for this analysis. However, as a result of autocorrelation the direction of the flow becomes unclear, as it is not clear which particle spots are from the first pulse and which are from the second pulse. Faster digital cameras using CCD or CMOS chips have since been developed that can capture two frames at high speed with a few hundred ns difference between the frames. This has allowed each exposure to be isolated on its own frame for more accurate cross-correlation analysis. The limitation of typical cameras is that this fast speed is limited to a pair of shots. This is because each pair of shots must be transferred to the computer before another pair of shots can be taken. Typical cameras can only take a pair of shots at a much slower speed. High speed CCD or CMOS cameras are available but are much more expensive.
Laser and optics
For macro PIV setups, lasers are predominant due to their ability to produce high-power light beams with short pulse durations. This yields short exposure times for each frame. Nd:YAG lasers, commonly used in PIV setups, emit primarily at 1064 nm wavelength and its harmonics (532, 266, etc.) For safety reasons, the laser emission is typically bandpass filtered to isolate the 532 nm harmonic (this is green light, the only harmonic visible to the naked eye). A fiber-optic cable or liquid light guide might be used to direct the laser light to the experimental setup.
The optics consist of a spherical lens and cylindrical lens combination. The cylindrical lens expands the laser into a plane while the spherical lens compresses the plane into a thin sheet. This is critical as the PIV technique cannot generally measure motion normal to the laser sheet and so ideally this is eliminated by maintaining an entirely 2-dimensional laser sheet. The spherical lens cannot compress the laser sheet into an actual 2-dimensional plane. The minimum thickness is on the order of the wavelength of the laser light and occurs at a finite distance from the optics setup (the focal point of the spherical lens). This is the ideal location to place the analysis area of the experiment.
The correct lens for the camera should also be selected to properly focus on and visualize the particles within the investigation area.
Synchronizer
The synchronizer acts as an external trigger for both the camera(s) and the laser. While analogue systems in the form of a photosensor, rotating aperture and a light source have been used in the past, most systems in use today are digital. Controlled by a computer, the synchronizer can dictate the timing of each frame of the CCD camera's sequence in conjunction with the firing of the laser to within 1 ns precision. Thus the time between each pulse of the laser and the placement of the laser shot in reference to the camera's timing can be accurately controlled. Knowledge of this timing is critical as it is needed to determine the velocity of the fluid in the PIV analysis. Stand-alone electronic synchronizers, called digital delay generators, offer variable resolution timing from as low as 250 ps to as high as several ms. With up to eight channels of synchronized timing, they offer the means to control several flash lamps and Q-switches as well as provide for multiple camera exposures.
Analysis
The frames are split into a large number of interrogation areas, or windows. It is then possible to calculate a displacement vector for each window with the help of signal processing and autocorrelation or cross-correlation techniques. This is converted to a velocity using the time between laser shots and the physical size of each pixel on the camera. The size of the interrogation window should be chosen to have at least 6 particles per window on average.
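As an illustration of the interrogation-window processing described above, the following minimal Python sketch (using NumPy and SciPy) cross-correlates two synthetic interrogation windows, refines the correlation peak with a three-point Gaussian sub-pixel fit, and converts the displacement to a velocity. The pixel size, pulse separation and window contents are illustrative assumptions; this is a sketch of the general idea, not the algorithm of any particular PIV package.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Estimate the (dy, dx) displacement of win_b relative to win_a, in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # 2-D cross-correlation, computed as a convolution with a flipped kernel.
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss_offset(cm, c0, cp):
        # Three-point Gaussian fit around the peak; this refinement is what
        # gives PIV its roughly 0.1-pixel displacement accuracy.
        cm, c0, cp = (np.log(max(c, 1e-12)) for c in (cm, c0, cp))
        return (cm - cp) / (2.0 * cm - 4.0 * c0 + 2.0 * cp)

    dy = iy - (win_a.shape[0] - 1) + gauss_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = ix - (win_a.shape[1] - 1) + gauss_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return dy, dx

pixel_size = 20e-6   # metres per pixel (assumed calibration)
dt = 100e-6          # seconds between laser pulses (assumed)

# Synthetic 32 x 32 interrogation windows: frame B is frame A shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
win_a = gaussian_filter(rng.random((32, 32)), sigma=1.5)
win_b = np.roll(win_a, shift=(3, 5), axis=(0, 1))

dy, dx = window_displacement(win_a, win_b)
u, v = dx * pixel_size / dt, dy * pixel_size / dt
print(f"displacement ({dx:.2f}, {dy:.2f}) px -> velocity ({u:.3f}, {v:.3f}) m/s")

Repeating this calculation over a grid of interrogation windows produces the full two-dimensional vector field described in the text.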
The synchronizer controls the timing between image exposures and also permits image pairs to be acquired at various times along the flow. For accurate PIV analysis, it is ideal that the region of the flow that is of interest should display an average particle displacement of about 8 pixels. This is a compromise between a longer time spacing which would allow the particles to travel further between frames, making it harder to identify which interrogation window traveled to which point, and a shorter time spacing, which could make it overly difficult to identify any displacement within the flow.
The scattered light from each particle should be in the region of 2 to 4 pixels across on the image. If too large an area is recorded, particle image size drops and peak locking might occur with loss of sub pixel precision. There are methods to overcome the peak locking effect, but they require some additional work.
With in-house PIV expertise and the time to develop a system, it is possible, though not trivial, to build a custom PIV system. Research grade PIV systems do, however, have high power lasers and high end camera specifications so that measurements can be taken across the broadest spectrum of experiments required in research.
PIV is closely related to digital image correlation, an optical displacement measurement technique that uses correlation techniques to study the deformation of solid materials.
Pros and cons
Advantages
The method is, to a large degree, nonintrusive. The added tracers (if they are properly chosen) generally cause negligible distortion of the fluid flow.
Optical measurement avoids the need for Pitot tubes, hot-wire anemometers or other intrusive flow measurement probes. The method is capable of measuring an entire two-dimensional cross section (geometry) of the flow field simultaneously.
High speed data processing allows the generation of large numbers of image pairs which, on a personal computer, may be analysed in real time or at a later time, and a high quantity of near-continuous information may be gained.
Sub pixel displacement values allow a high degree of accuracy, since each vector is the statistical average for many particles within a particular tile. Displacement can typically be accurate down to 10% of one pixel on the image plane.
Drawbacks
In some cases the particles will, due to their higher density, not perfectly follow the motion of the fluid (gas/liquid). If experiments are done in water, for instance, it is easily possible to find very cheap particles (e.g. plastic powder with a diameter of ~60 μm) with the same density as water. If the density still does not fit, the density of the fluid can be tuned by increasing/decreasing its temperature. This leads to slight changes in the Reynolds number, so the fluid velocity or the size of the experimental object has to be changed to account for this.
Particle image velocimetry methods will in general not be able to measure components along the z-axis (towards to/away from the camera). These components might not only be missed, they might also introduce an interference in the data for the x/y-components caused by parallax. These problems do not exist in Stereoscopic PIV, which uses two cameras to measure all three velocity components.
Since the resulting velocity vectors are based on cross-correlating the intensity distributions over small areas of the flow, the resulting velocity field is a spatially averaged representation of the actual velocity field. This obviously has consequences for the accuracy of spatial derivatives of the velocity field, vorticity, and spatial correlation functions that are often derived from PIV velocity fields.
PIV systems used in research often use class IV lasers and high-resolution, high-speed cameras, which bring cost and safety constraints.
More complex PIV setups
Stereoscopic PIV
Stereoscopic PIV utilises two cameras with separate viewing angles to extract the z-axis displacement. Both cameras must be focused on the same spot in the flow and must be properly calibrated to have the same point in focus.
In fundamental fluid mechanics, displacements within a unit time in the X, Y and Z directions are commonly defined by the variables U, V and W. As was previously described, basic PIV extracts the U and V displacements as functions of the in-plane X and Y directions. This enables calculations of the ∂U/∂x, ∂U/∂y, ∂V/∂x and ∂V/∂y velocity gradients. However, the other 5 terms of the velocity gradient tensor are unable to be found from this information. The stereoscopic PIV analysis also grants the Z-axis displacement component, W, within that plane. Not only does this grant the Z-axis velocity of the fluid at the plane of interest, but two more velocity gradient terms can be determined: ∂W/∂x and ∂W/∂y. The velocity gradient components ∂U/∂z, ∂V/∂z, and ∂W/∂z cannot be determined.
The velocity gradient components form the tensor:

\[
\nabla \mathbf{u} \;=\;
\begin{pmatrix}
\partial U/\partial x & \partial U/\partial y & \partial U/\partial z \\
\partial V/\partial x & \partial V/\partial y & \partial V/\partial z \\
\partial W/\partial x & \partial W/\partial y & \partial W/\partial z
\end{pmatrix}
\]
Dual plane stereoscopic PIV
This is an expansion of stereoscopic PIV by adding a second plane of investigation directly offset from the first one. Four cameras are required for this analysis. The two planes of laser light are created by splitting the laser emission with a beam splitter into two beams. Each beam is then polarized orthogonally with respect to one another. Next, they are transmitted through a set of optics and used to illuminate one of the two planes simultaneously.
The four cameras are paired into groups of two. Each pair focuses on one of the laser sheets in the same manner as single-plane stereoscopic PIV. Each of the four cameras has a polarizing filter designed to only let pass the polarized scattered light from the respective planes of interest. This essentially creates a system by which two separate stereoscopic PIV analysis setups are run simultaneously with only a minimal separation distance between the planes of interest.
This technique allows the determination of the three velocity gradient components single-plane stereoscopic PIV could not calculate: ∂U/∂z, ∂V/∂z, and ∂W/∂z. With this technique, the entire velocity gradient tensor of the fluid at the 2-dimensional plane of interest can be quantified. A difficulty arises in that the laser sheets should be maintained close enough together so as to approximate a two-dimensional plane, yet offset enough that meaningful velocity gradients can be found in the z-direction.
Multi-plane stereoscopic PIV
There are several extensions of the dual-plane stereoscopic PIV idea available. One option is to create several parallel laser sheets using a set of beamsplitters and quarter-wave plates, providing three or more planes from a single laser unit and stereoscopic PIV setup; this approach is called XPIV.
Micro PIV
With the use of an epifluorescent microscope, microscopic flows can be analyzed. MicroPIV makes use of fluorescing particles that excite at a specific wavelength and emit at another wavelength. Laser light is reflected through a dichroic mirror, travels through an objective lens that focuses on the point of interest, and illuminates a regional volume. The emission from the particles, along with reflected laser light, shines back through the objective, the dichroic mirror and through an emission filter that blocks the laser light. Where PIV draws its 2-dimensional analysis properties from the planar nature of the laser sheet, microPIV utilizes the ability of the objective lens to focus on only one plane at a time, thus creating a 2-dimensional plane of viewable particles.
MicroPIV particles are on the order of several hundred nm in diameter, meaning they are extremely susceptible to Brownian motion. Thus, a special ensemble averaging analysis technique must be utilized for this technique. The cross-correlations of a series of basic PIV analyses are averaged together to determine the actual velocity field. Thus, only steady flows can be investigated. Special preprocessing techniques must also be utilized since the images tend to have a zero-displacement bias from background noise and low signal-to-noise ratios. Usually, high numerical aperture objectives are also used to capture the maximum emission light possible. Optic choice is also critical for the same reasons.
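To make the ensemble (correlation-averaging) step concrete, the following minimal Python sketch sums the correlation planes from many synthetic image pairs of the same nominal displacement before locating the peak. The window size, pair count and noise level are illustrative assumptions chosen so that the averaging visibly helps; real micro-PIV data would come from repeated exposures of the steady flow.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def correlation_plane(win_a, win_b):
    a, b = win_a - win_a.mean(), win_b - win_b.mean()
    return fftconvolve(b, a[::-1, ::-1], mode="full")

rng = np.random.default_rng(1)
n, true_shift = 32, (2, 4)                  # window size and imposed displacement
corr_sum = np.zeros((2 * n - 1, 2 * n - 1))

for _ in range(200):                        # 200 image pairs of the same steady flow
    img_a = gaussian_filter(rng.random((n, n)), sigma=1.5)
    # Frame B: the same pattern displaced by the true shift plus strong camera noise;
    # at this noise level a single correlation plane may mislocate the peak.
    img_b = np.roll(img_a, true_shift, axis=(0, 1)) + 0.5 * rng.standard_normal((n, n))
    corr_sum += correlation_plane(img_a, img_b)

iy, ix = np.unravel_index(np.argmax(corr_sum), corr_sum.shape)
print("ensemble-averaged displacement:", (iy - (n - 1), ix - (n - 1)))  # should recover (2, 4)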
Holographic PIV
Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference of coherent light scattered by a particle and a reference beam to encode information of the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram, can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam via optical methods or digital approximations. The intensity field is interrogated using 3-D cross-correlation techniques to yield a velocity field.
Off-axis HPIV uses separate beams to provide the object and reference waves. This setup is used to avoid speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if they were both propagated through the medium. An off-axis experiment is a highly complex optical system comprising numerous optical elements, and the reader is referred to an example schematic in Sheng et al. for a more complete presentation.
In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. The in-line configuration also provides a relatively easy extension to apply CCD sensors, creating a separate class of experiments known as digital in-line holography. The complexity of such setups shifts from the optical setup to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this article and is treated in Arroyo and Hinsch
A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, due to Mie scattering theory, this wave is a complex shape which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise which lowers the overall signal-to-noise ratio of particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than traditional imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single particle identification and correlation are implemented, which set limits on particle number density. A more comprehensive outline of these error sources is given in Meng et al.
In light of these issues, it may seem that HPIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen used a hybrid HPIV system to study the physics of hairpin vortices. Tao et al. investigated the alignment of vorticity and strain rate tensors in high Reynolds number turbulence. As a final example, Sheng et al. used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in turbulent boundary layers.
Scanning PIV
By using a rotating mirror, a high-speed camera and correcting for geometric changes, PIV can be performed nearly instantly on a set of planes throughout the flow field. Fluid properties between the planes can then be interpolated. Thus, a quasi-volumetric analysis can be performed on a target volume. Scanning PIV can be performed in conjunction with the other 2-dimensional PIV methods described to approximate a 3-dimensional volumetric analysis.
Tomographic PIV
Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed to yield a discretized 3-D intensity field. A pair of intensity fields are analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed by Elsinga et al. in 2006.
The reconstruction procedure is a complex under-determined inverse problem. The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation for the field of tomography. In most Tomo-PIV experiments, the multiplicative algebraic reconstruction technique (MART) is used. The advantage of this pixel-by-pixel reconstruction technique is that it avoids the need to identify individual particles. Reconstructing the discretized 3-D intensity field is computationally intensive and, beyond MART, several developments have sought to significantly reduce this computational expense, for example the multiple line-of-sight simultaneous multiplicative algebraic reconstruction technique (MLOS-SMART), which takes advantage of the sparsity of the 3-D intensity field to reduce memory storage and calculation requirements.
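The multiplicative update at the heart of MART can be illustrated with a toy problem. In the Python sketch below, the line-of-sight weighting matrix, the sparse synthetic "particle" volume and the relaxation factor are all illustrative assumptions; a real Tomo-PIV code works with calibrated camera models and much larger, sparse data structures.

import numpy as np

rng = np.random.default_rng(2)
n_pix, n_vox = 200, 500
# Toy weighting matrix: W[i, j] > 0 where voxel j lies on the line of sight of pixel i.
W = (rng.random((n_pix, n_vox)) < 0.02).astype(float)
E_true = rng.random(n_vox) * (rng.random(n_vox) < 0.05)   # sparse synthetic volume
I = W @ E_true                                            # simulated recorded pixel intensities

E = np.ones(n_vox)   # uniform, non-negative initial guess
mu = 1.0             # relaxation factor
for sweep in range(20):
    for i in range(n_pix):
        proj_i = W[i] @ E          # re-projection of pixel i with the current guess
        if proj_i > 0:
            # Multiplicative correction, applied only to the voxels seen by pixel i.
            E *= (I[i] / proj_i) ** (mu * W[i])
    residual = np.linalg.norm(W @ E - I)

print(f"re-projection residual after MART sweeps: {residual:.3e}")

The multiplicative form keeps voxel intensities non-negative and leaves untouched any voxel that a given pixel does not see, which is why MART-type updates suit the sparse, non-negative intensity fields of particle volumes.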
As a rule of thumb, at least four cameras are needed for acceptable reconstruction accuracy, and best results are obtained when the cameras are placed at approximately 30 degrees normal to the measurement volume. Many additional factors are necessary to consider for a successful experiment.
Tomo-PIV has been applied to a broad range of flows. Examples include the structure of a turbulent boundary layer/shock wave interaction, the vorticity of a cylinder wake or pitching airfoil, rod-airfoil aeroacoustic experiments, and the measurement of small-scale micro flows. More recently, Tomo-PIV has been used together with 3-D particle tracking velocimetry to understand predator-prey interactions, and a portable version of Tomo-PIV has been used to study unique swimming organisms in Antarctica.
Thermographic PIV
Thermographic PIV is based on the use of thermographic phosphors as seeding particles. The use of these thermographic phosphors permits simultaneous measurement of velocity and temperature in a flow.
Thermographic phosphors consist of ceramic host materials doped with rare-earth or transition metal ions, which exhibit phosphorescence when they are illuminated with UV-light. The decay time and the spectra of this phosphorescence are temperature-sensitive and offer two different methods to measure temperature. The decay-time method consists of fitting the phosphorescence decay to an exponential function and is normally used in point measurements, although it has been demonstrated in surface measurements. The intensity ratio between two different spectral lines of the phosphorescence emission, tracked using spectral filters, is also temperature-dependent and can be employed for surface measurements.
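As an illustration of the decay-time method, a single-exponential decay can be fitted with a few lines of code. The synthetic signal, the lifetime value, and the linear lifetime-to-temperature calibration below are purely hypothetical; a real measurement requires a calibration curve for the specific phosphor.

```python
import numpy as np

# synthetic phosphorescence decay I(t) = I0 * exp(-t / tau), with a little noise
rng = np.random.default_rng(0)
tau_true = 25e-6                                   # hypothetical 25 us lifetime
t = np.linspace(0.0, 100e-6, 200)
intensity = 3.0 * np.exp(-t / tau_true) * (1 + 0.02 * rng.standard_normal(t.size))

# log-linear least-squares fit: ln I = ln I0 - t / tau
slope, ln_I0 = np.polyfit(t, np.log(intensity), 1)
tau_fit = -1.0 / slope
print(f"fitted lifetime: {tau_fit * 1e6:.1f} us")

def lifetime_to_temperature(tau):
    # hypothetical linear calibration; real phosphors need a measured curve
    return 300.0 + (25e-6 - tau) * 4e6

print(f"estimated temperature: {lifetime_to_temperature(tau_fit):.0f} K")
```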
The micrometre-sized phosphor particles used in thermographic PIV are seeded into the flow as a tracer and, after illumination with a thin laser light sheet, the temperature of the particles can be measured from the phosphorescence, normally using an intensity ratio technique. It is important that the particles are of small size so that they not only follow the flow satisfactorily but also rapidly assume its temperature. For a diameter of 2 μm, the thermal slip between particle and gas is as small as the velocity slip.
Illumination of the phosphor is achieved using UV light. Most thermographic phosphors absorb light in a broad band in the UV and therefore can be excited using a YAG:Nd laser. Theoretically, the same light can be used both for PIV and temperature measurements, but this would mean that UV-sensitive cameras are needed. In practice, two different beams originating from separate lasers are overlapped. While one of the beams is used for velocity measurements, the other is used to measure the temperature.
The use of thermographic phosphors offers some advantageous features, including the ability to survive in reactive and high-temperature environments, chemical stability, and insensitivity of the phosphorescence emission to pressure and gas composition. In addition, thermographic phosphors emit light at different wavelengths, allowing spectral discrimination against excitation light and background.
Thermographic PIV has been demonstrated for time-averaged and single-shot measurements. Recently, time-resolved high-speed (3 kHz) measurements have also been successfully performed.
Artificial Intelligence PIV
With the development of artificial intelligence, there have been scientific publications and commercial software proposing PIV calculations based on deep learning and convolutional neural networks. The methodology used stems mainly from optical flow neural networks popular in machine vision. A data set that includes particle images is generated to train the parameters of the networks. The result is a deep neural network for PIV which can provide dense motion estimation, down to a maximum of one vector per pixel if the recorded images allow. AI PIV promises a dense velocity field, not limited by the size of the interrogation window, which restricts traditional PIV to roughly one vector per 16 × 16 pixels.
Real time processing and applications of PIV
With the advance of digital technologies, real-time processing and applications of PIV have become possible. For instance, GPUs can be used to substantially speed up the direct or Fourier-transform-based correlations of single interrogation windows. Similarly, multi-processing, parallel, or multi-threaded processing on several CPUs or multi-core CPUs is beneficial for the distributed processing of multiple interrogation windows or multiple images. Some applications use real-time image processing methods, such as FPGA-based on-the-fly image compression or image processing. More recently, real-time PIV measurement and processing capabilities have been implemented for future use in active flow control with flow-based feedback.
Applications
PIV has been applied to a wide range of flow problems, varying from the flow over an aircraft wing in a wind tunnel to vortex formation in prosthetic heart valves. 3-dimensional techniques have been sought to analyze turbulent flow and jets.
Rudimentary PIV algorithms based on cross-correlation can be implemented in a matter of hours, while more sophisticated algorithms may require a significant investment of time. Several open-source implementations are available. Application of PIV in the US education system has been limited due to the high price and safety concerns of industrial research-grade PIV systems.
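The core step of such a rudimentary algorithm — estimating the shift of one interrogation window between two exposures by cross-correlation — can be sketched as follows. The window size, the synthetic images, and the integer-pixel peak search (no sub-pixel fit) are simplifications chosen for illustration, not a reference implementation.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel shift between two interrogation windows via
    FFT-based cross-correlation (the core step of basic PIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center        # (row shift, column shift) in pixels

# synthetic 32 x 32 "particle image" shifted by (3, -2) pixels between exposures
rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_b, frame_a))   # expected [ 3 -2 ]
```

A full PIV code repeats this over a grid of interrogation windows and adds sub-pixel peak fitting, vector validation, and outlier replacement.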
Granular PIV: velocity measurement in granular flows and avalanches
PIV can also be used to measure the velocity field of the free surface and basal boundary in granular flows such as those in shaken containers, tumblers and avalanches.
This analysis is particularly well-suited for nontransparent media such as sand, gravel, quartz, or other granular materials that are common in geophysics. This PIV approach is called "granular PIV". The set-up for granular PIV differs from the usual PIV setup in that the optical surface structure which is produced by illumination of the surface of the granular flow is already sufficient to detect the motion. This means one does not need to add tracer particles in the bulk material.
See also
Digital image correlation
Hot-wire anemometry
Laser Doppler velocimetry
Molecular tagging velocimetry
Particle tracking velocimetry
Notes
References
Katz, J.; Sheng, J. (2010). "Applications of Holography in Fluid Mechanics and Particle Dynamics". Annual Review of Fluid Mechanics. 42: 531–555. doi:10.1146/annurev-fluid-121108-145508.
Bibliography
External links
PIV research at the Laboratory for Experimental Fluid Dynamics (J. Katz lab)
Measurement
Fluid dynamics | Particle image velocimetry | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 7,744 | [
"Physical quantities",
"Chemical engineering",
"Quantity",
"Measurement",
"Size",
"Piping",
"Fluid dynamics"
] |
1,058,719 | https://en.wikipedia.org/wiki/Harmonic%20spectrum | A harmonic spectrum is a spectrum containing only frequency components whose frequencies are whole number multiples of the fundamental frequency; such frequencies are known as harmonics. "The individual partials are not heard separately but are blended together by the ear into a single tone."
In other words, if f is the fundamental frequency, then a harmonic spectrum has the form {f, 2f, 3f, ...}.
A standard result of Fourier analysis is that a function has a harmonic spectrum if and only if it is periodic.
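The "if and only if" statement can be made concrete with the Fourier series of a periodic function, written here in generic notation (f0 for the fundamental frequency) rather than the article's original symbols:

```latex
x(t) \;=\; \sum_{n=0}^{\infty} \Big[ a_n \cos\!\big(2\pi n f_0 t\big) + b_n \sin\!\big(2\pi n f_0 t\big) \Big],
\qquad f_0 = \frac{1}{T},
```

so every spectral component of a period-T function lies at an integer multiple n·f0 of the fundamental, and conversely a spectrum supported only on such multiples corresponds to a period-T function.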
See also
Fourier series
Harmonic series (music)
Periodic function
Scale of harmonics
Undertone series
References
Functional analysis
Acoustics
Sound | Harmonic spectrum | [
"Physics",
"Mathematics"
] | 117 | [
"Functions and mappings",
"Mathematical analysis",
"Functional analysis",
"Mathematical analysis stubs",
"Mathematical objects",
"Classical mechanics",
"Acoustics",
"Mathematical relations"
] |
1,060,554 | https://en.wikipedia.org/wiki/F%C4%83g%C4%83ra%C8%99 | Făgăraș (; , ) is a city in central Romania, located in Brașov County. It lies on the Olt River and has a population of 26,284 as of 2021. It is situated in the historical region of Transylvania, and is the main city of a subregion, Țara Făgărașului.
Geography
The city is located at the foothills of the Făgăraș Mountains, on their northern side. It is traversed by the DN1 road, west of Brașov and east of Sibiu. On the east side of the city, between an abandoned field and a gas station, lies the geographical center of Romania, at .
The Olt River flows east to west on the north side of the city; its left tributary, the Berivoi River, discharges into the Olt on the west side of the city, after receiving the waters of the Racovița River. The Berivoi and the Racovița were used to bring water to a since-closed major chemical plant located on the outskirts of the city.
The small part of the city that lies north of the Olt is known as Galați. A former village first recorded in 1396, it was incorporated into Făgăraș in 1952.
Name
One explanation is that the name was given by the Pechenegs, who called the nearby river Fagar šu (Fogaras/Făgăraș), which in the Pecheneg language means ash(tree) water.
According to linguist Iorgu Iordan, the name of the town is a Romanian diminutive of a hypothetical collective noun *făgar ("beech forest"), presumably derived from fag, "beech tree". Hungarian linguist István Kniezsa deemed this idea unlikely.
Another interpretation is that the name derives from the Hungarian word fogoly (partridge).
There has also been speculation that the name can be explained by folk etymology, as the rendering of the words fa ("wooden") and garas ("mite") in Hungarian. Legends state that money made out of wood had been used to pay the peasants who built the Făgăraș Citadel, an important fortress near the border of the Kingdom of Hungary, around 1310. This view is in harmony with an idea advanced by Iorgu Iordan, who suggested a diminutive derivation from *făgar, found elsewhere in Romania as well.
History
Făgăraș, together with Amlaș, constituted during the Middle Ages a traditional Romanian local-autonomy region in Transylvania. The first written Hungarian document mentioning Romanians in Transylvania referred to Vlach lands ("Terra Blacorum") in the Făgăraș Region in 1222. (In this document, Andrew II of Hungary gave Burzenland and the Cuman territories South of Burzenland up to the Danube to the Teutonic Knights.) After the Tatar invasion in 1241–1242, Saxons settled in the area. In 1369, Louis I of Hungary gave the Royal Estates of Făgăraș to his vassal, Vladislav I of Wallachia. As in other similar cases in medieval Europe (such as Foix, Pokuttya, or Dauphiné), the local feudal lord had to swear an oath of allegiance to the king for the specific territory, even when the former was himself an independent ruler of another state. Therefore, the region became the feudal property of the princes of Wallachia, but remained within the Kingdom of Hungary. The territory remained in the possession of Wallachian princes until 1464.
Except for this period of Wallachian rule, the town itself was centre of the surrounding royal estates. During the rule of Transylvanian Prince Gabriel Bethlen (1613–1629), the city became an economic role model city in the southern regions of the realm. Bethlen rebuilt the fortress entirely.
Ever since that time, Făgăraș was the residence of the wives of Transylvanian Princes, as an equivalent of Veszprém, the Hungarian "city of queens". Of these, Zsuzsanna Lorántffy, the widow of George I Rákóczy, established a Romanian school here in 1658. Probably the most prominent of the princesses residing in the town was the orphan Princess Kata Bethlen (1700–1759), buried in front of the Reformed church. The church holds several precious relics of her life. Her bridal gown, with the family coat of arms embroidered on it, and her bridal veil now cover the altar table. Both are made of yellow silk.
Făgăraș was the site of several Transylvanian Diets, mostly during the reign of Michael I Apafi. The church was built around 1715–1740. Not far from it is the Radu Negru National College, built in 1907-1909. Until 1919, it was a Hungarian-language gymnasium where Mihály Babits taught for a while.
A local legend says that Negru Vodă left the central fortress to travel south past the Transylvanian Alps to become the founder of the Principality of Wallachia, although Basarab I is traditionally known as the 14th century founder of the state. By the end of the 12th century the fortress itself was made of wood, but it was reinforced in the 14th century and became a stone fortification.
In 1850 the inhabitants of the town were 3,930, of which 1,236 were Germans, 1,129 Romanians, 944 Hungarians, 391 Roma, 183 Jews, and 47 of other ethnicities, meanwhile in 1910, the town had 6,579 inhabitants with the following proportion: 3,357 Hungarians, 2,174 Romanians, and 1,003 Germans. According to the 2011 census, the city of Făgăraș had 30,714 residents; of those for whom data was available, 91.7% were Romanians, 3.8% Roma, 3.7% Hungarians, and 0.7% German. At the 2021 census, the city had a population of 26,284, of which 73.29% were Romanians, 8.58% Roma, and 2.24% Hungarians.
Făgăraș's castle was used as a stronghold by the Communist regime. During the 1950s it was a prison for opponents and dissidents. After the fall of the regime in 1989, the castle was restored and is currently used as a museum and library.
The city's economy was badly shaken by the disappearance of most of its industries following the 1989 Revolution and the ensuing hardships and reforms. Some of the city's population left as guest workers to Italy, Spain, or Ireland.
Jewish history
A Jewish community was established in 1827, becoming among southern Transylvania’s largest by mid-century. Yehuda Silbermann, its first rabbi (1855–1863), kept a diary of communal events. This is still extant and serves as a source on the history of Transylvanian Jewry. In 1869, the local community joined the Neolog association, switching to an Orthodox stance in 1926. A Jewish school opened in the 1860s.
There were 286 Jews in 1856, rising to 388 by 1930, or just under 5% of the population. During World War II, local Germans as well as the Iron Guard attacked Jews and plundered their property. Sixty Jews were sent to forced labor. After the 1944 Romanian coup d'état rescinded anti-Semitic laws, many left for larger cities or emigrated to Palestine. The last Jew of Făgăraș died in 2013.
Climate
Făgăraș has a humid continental climate (Cfb in the Köppen climate classification).
Administration
The political composition of the town council after the 2020 Romanian local elections is the following one:
Personalities
Radu Negru (Negru-Vodă), legendary ruler of Wallachia (1290–1300).
Gabriel Bethlen (1580–1629), Prince of Transylvania between 1613–1629.
Inocențiu Micu-Klein, (1692–1768), bishop of Alba Iulia and Făgăraș (1728–1751) and Primate of the Romanian Greek-Catholic Church, had his episcopal residence in Făgăraș between 1732–1737.
Ioan Pușcariu, captain of Făgăraș.
Aron Pumnul (1818–1866) scholar, linguist, philologist, literary historian, teacher of Mihai Eminescu, leader of the Revolution of 1848 in Transylvania.
Nicolae Densușianu (1846–1911), historian, Associate member of the Romanian Academy.
Aron Densușianu (1837–1900), poet and literary critic, Associate Member of the Romanian Academy.
Badea Cârțan (Gheorghe Cârțan) (1848–1911), fighting for the independence of the Romanians in Transylvania.
Ovid Densusianu (1873–1938), Aron Densușianu's son, philologist, linguist, folklorist, poet and academician, professor at the University of Bucharest.
Johanna Korner who founded the Madame Korner cosmetic business in Australia was born here in 1891.
Ștefan Câlția, painter (born in Brașov in 1942).
Ion Gavrilă Ogoranu (1923–2006) member of the fascist paramilitary organization the Iron Guard, in the group of the Făgăraș Mountains, former student of the present Radu Negru National College, class of 1945.
Octavian Paler (1926–2007), writer and publicist, former student of the present Radu Negru National College, class of 1945.
Laurențiu (Liviu) Streza (born in 1947), Orthodox archbishop and metropolitan of Transylvania, former student of the present Radu Negru National College, class of 1965.
Horia Sima (1906–1993), Co-Conducător of Romania in 1940–1941, and second leader of the Iron Guard. Former student of the present Radu Negru National College, class of 1926.
Mircea Frățică (born in 1957) Judoka who won the European title in 1982, and bronze medals at the 1980 European Championships, 1983 World Championships and 1984 Olympics (Romania's first Olympic judo medalist).
Nicușor Dan (born in 1969), mathematician, activist, and politician.
Mihail Neamțu (born 1978), writer and politician.
Mircea Dincă (born 1980), chemist.
See also
Făgăraș Mountains
List of castles in Romania
Tourism in Romania
Villages with fortified churches in Transylvania
References
External links
Populated places in Brașov County
Cities in Romania
Localities in Transylvania
Monotowns in Romania
Capitals of former Romanian counties
Geographical centres | Făgăraș | [
"Physics",
"Mathematics"
] | 2,176 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
1,060,624 | https://en.wikipedia.org/wiki/Newton%27s%20rings | Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two surfaces, typically a spherical surface and an adjacent touching flat surface. It is named after Isaac Newton, who investigated the effect in 1666. When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric ring pattern of rainbow colors because the different wavelengths of light interfere at different thicknesses of the air layer between the surfaces.
History
The phenomenon was first described by Robert Hooke in his 1665 book Micrographia. Its name derives from the mathematician and physicist Sir Isaac Newton, who studied the phenomenon in 1666 while sequestered at home in Lincolnshire in the time of the Great Plague that had shut down Trinity College, Cambridge. He recorded his observations in an essay entitled "Of Colours". The phenomenon became a source of dispute between Newton, who favored a corpuscular nature of light, and Hooke, who favored a wave-like nature of light. Newton did not publish his analysis until after Hooke's death, as part of his treatise "Opticks" published in 1704.
Theory
The pattern is created by placing a very slightly convex curved glass on an optical flat glass. The two pieces of glass make contact only at the center. At other points there is a slight air gap between the two surfaces, increasing with radial distance from the center.
Consider monochromatic (single color) light incident from the top that reflects from both the bottom surface of the top lens and the top surface of the optical flat below it. The light passes through the glass lens until it comes to the glass-to-air boundary, where the transmitted light goes from a higher refractive index (n) value to a lower n value. The transmitted light passes through this boundary with no phase change. The reflected light undergoing internal reflection (about 4% of the total) also has no phase change. The light that is transmitted into the air travels a distance, t, before it is reflected at the flat surface below. Reflection at this air-to-glass boundary causes a half-cycle (180°) phase shift because the air has a lower refractive index than the glass. The reflected light at the lower surface returns a distance of (again) t and passes back into the lens. The additional path length is equal to twice the gap between the surfaces. The two reflected rays will interfere according to the total phase change caused by the extra path length 2t and by the half-cycle phase change induced in reflection at the flat surface. When the distance 2t is zero (lens touching optical flat) the waves interfere destructively, hence the central region of the pattern is dark.
A similar analysis for illumination of the device from below instead of from above shows that in this case the central portion of the pattern is bright, not dark. When the light is not monochromatic, the radial position of the fringe pattern has a "rainbow" appearance.
Interference
In areas where the path length difference between the two rays is equal to an odd multiple of half a wavelength (λ/2) of the light waves, the reflected waves will be in phase, so the "troughs" and "peaks" of the waves coincide. Therefore, the waves will reinforce (add) through constructive interference and the resulting reflected light intensity will be greater. As a result, a bright area will be observed there.
At other locations, where the path length difference is equal to an even multiple of a half-wavelength, the reflected waves will be 180° out of phase, so a "trough" of one wave coincides with a "peak" of the other wave. This is destructive interference: the waves will cancel (subtract) and the resulting light intensity will be weaker or zero. As a result, a dark area will be observed there. Because of the 180° phase reversal due to reflection of the bottom ray, the center where the two pieces touch is dark.
This interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing differences in the thickness of the air gap. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength λ of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so using red light the difference in height between two fringes is half that, or 350 nm, far smaller than the diameter of a human hair. Since the gap between the glasses increases radially from the center, the interference fringes form concentric rings. For glass surfaces that are not axially symmetric, the fringes will not be rings but will have other shapes.
Quantitative Relationships
For illumination from above, with a dark center, the radius of the Nth bright ring is given by
where
N is the bright-ring number, R is the radius of curvature of the glass lens the light is passing through, and λ is the wavelength of the light. The above formula is also applicable for dark rings for the ring pattern obtained by transmitted light.
Given the radial distance of a bright ring, r, and a radius of curvature of the lens, R, the air gap between the glass surfaces, t, is given to a good approximation by
where the effect of viewing the pattern at an angle oblique to the incident rays is ignored.
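The two displayed formulas referred to above were lost in extraction; the standard textbook forms, for illumination from above with a dark centre (supplied here as the usual expressions rather than quoted from the original), are:

```latex
r_N \;\simeq\; \sqrt{\left(N - \tfrac{1}{2}\right)\lambda R},
\qquad\qquad
t \;\simeq\; \frac{r^{2}}{2R},
```

the first giving the radius of the N-th bright ring and the second the air gap t at radial distance r from the point of contact.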
Thin-film interference
The phenomenon of Newton's rings is explained on the same basis as thin-film interference, including effects such as "rainbows" seen in thin films of oil on water or in soap bubbles. The difference is that here the "thin film" is a thin layer of air.
References
Further reading
External links
Newton's Ring from Eric Weisstein's World of Physics
Explanation of and expression for Newton's rings
Newton-gyűrűk (Newton's rings) Video of a simple experiment with two lenses, and Newton's rings on mica observed. (On the website FizKapu.)
Interference
Optical phenomena | Newton's rings | [
"Physics"
] | 1,291 | [
"Optical phenomena",
"Physical phenomena"
] |
1,060,865 | https://en.wikipedia.org/wiki/Pre-preg | Pre-preg is a composite material made from "pre-impregnated" fibers and a partially cured polymer matrix, such as epoxy or phenolic resin, or even thermoplastic mixed with liquid rubbers or resins. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture. The thermoset matrix is only partially cured to allow easy handling; this B-Stage material requires cold storage to prevent complete curing. B-Stage pre-preg is always stored in cooled areas since heat accelerates complete polymerization. Hence, composite structures built of pre-pregs will mostly require an oven or autoclave to cure. The main idea behind a pre-preg material is the use of anisotropic mechanical properties along the fibers, while the polymer matrix provides filling properties, keeping the fibers in a single system.
Pre-preg allows one to impregnate the fibers on a flat workable surface, or rather in an industrial process, and then later form the impregnated fibers to a shape which could prove to be problematic for the hot injection process. Pre-preg also allows one to impregnate a bulk amount of fiber and then store it in a cooled area (below 20 °C) for an extended period of time to cure later. The process can also be time consuming in comparison to the hot injection process and the added value for pre-preg preparation is at the stage of the material supplier.
Areas of application
This technique can be utilized in the aviation industry, since, in principle, prepreg can be processed in large batch sizes. Although fiber glass is widely used in aircraft, specifically in small aircraft motors, carbon fiber is employed in this industry at a higher rate, and the demand for it is increasing. For example, the Airbus A380 is characterized by a carbon fiber prepreg mass fraction of about 20%, and the Airbus A350XWB by a mass fraction of about 50%. Carbon fiber prepregs have been used in the airfoils of the Airbus fleet for more than 20 years.
In the automotive industry, prepreg is used in relatively limited quantities in comparison with other techniques like automated tape lay-up and automated fiber placement. The main reason behind this is the relatively high cost of prepreg fibers as well as of the compounds used in molds. Examples of such materials are bulk moulding compound (BMC) and sheet moulding compound (SMC).
This material is used to make the cockpit doors on the Airbus A320, where it provides bullet resistance.
Uses of prepregs
There are many products that utilize the concept of prepreg among which is the following.
Motorsport
Space travel
Sports equipment
Sailing
Orthopedic technology in orthotics as well as in prosthetics
In electrical engineering as an "intermediate layer" in multilayer circuit boards and as insulating material for electrical machines and transformers
Rotor blades in wind turbines
Applicable fiber types
There are many fiber types that can be excellent candidates for the preparation of preimpregnated fibers. The most common fibers among these candidates are the following fibers.
Glass fibers
Glass cloth
Basalt fibers
Carbon fibers
Aramid fibers
Matrix
One distinguishes the matrix systems according to their hardening temperature and the type of resin. The curing temperature greatly influences the glass transition temperature and thus the operating temperature. Military aircraft mainly use 180 °C systems.
Composition
The prepreg matrix consists of a mixture of resin and hardener, in some cases an accelerator. Freezing at -20 °C prevents the resin from reacting with the hardener. If the cold chain is interrupted, the reaction starts and the prepreg becomes unusable. There are also high-temperature prepregs which can be stored for a certain time at room temperature. These prepregs can then be cured only in an autoclave at elevated temperature.
Resin types
Mainly epoxy-based resins are used. Vinyl ester-based prepregs are also available. Since vinyl ester resins must be pre-accelerated with an amine accelerator or cobalt, their processing time at room temperature is shorter than with epoxy-based prepregs. Catalysts (also called hardeners) include peroxides such as methyl ethyl ketone peroxide (MEKP), acetyl acetone peroxide (AAP) or cyclohexanone peroxide (CHP). Vinyl ester resin is used under high impact stress.
Resin properties
The properties of the resin and fiber constituents influence the evolution of VBO (vacuum-bag-only) prepreg microstructures during cure. Generally, however, fiber properties and fiber bed architectures are standardized, whereas matrix properties drive both prepreg and process development. The dependence of microstructural evolution on resin properties, therefore, is critical to understand, and has been investigated by numerous authors. The presence of dry prepreg areas may suggest a need for low viscosity resins. However, Ridgard explains that VBO prepreg systems are designed to remain relatively viscous in the early stages of cure to impede infiltration and allow sufficient dry areas to persist for air evacuation to occur. Because the room temperature vacuum holds used to evacuate air from VBO systems are sometimes measured in hours or days, it is critical for the resin viscosity to inhibit cold flow, which could prematurely seal the air evacuation pathways. However, the overall viscosity profile must also permit sufficient flow at cure temperature to fully impregnate the prepreg, lest pervasive dry areas remain in the final part. Furthermore, Boyd and Maskell argue that to inhibit bubble formation and growth at low consolidation pressures, both the viscous and elastic characteristics of the prepreg must be tuned to the specific processing parameters encountered during cure, and ultimately ensure that a majority of the applied pressure is transferred to the resin. Altogether, the rheological evolution of VBO resins must balance the reduction of both voids caused by entrapped gases and voids caused by insufficient flow.
Processing
At room temperatures the resin reacts very slowly and if frozen will remain stable for years. Thus, prepregs can only be cured at high temperatures. They can be processed with the hot pressing technique or the autoclave technique. Through pressure the fiber volume fraction is increased in both techniques.
The best qualities can be produced with the autoclave technique. The combination of pressure and vacuum results in components with very low air inclusions.
The curing can be followed by a tempering process, which serves for complete crosslinking.
Material advances
Recent advances in out of autoclave (OOA) processes hold promise for improving performance and lowering costs for composite structures. Using vacuum-bag-only (VBO) for atmospheric pressures, the new OOA processes promise to deliver less than 1 percent void content required for aerospace primary structures. Led by material scientists at Air Force Research Lab, the technique would save the costs of constructing and installing large structure autoclaves ($100M saved at NASA) and making small production runs of 100 aircraft economically viable.
See also
Composite material
Carbon fiber reinforced polymer
Out of autoclave composite manufacturing
References
Composite materials
Fibre-reinforced polymers | Pre-preg | [
"Physics"
] | 1,505 | [
"Materials",
"Composite materials",
"Matter"
] |
1,060,909 | https://en.wikipedia.org/wiki/Residual%20strength | Residual strength is the load or force (usually mechanical) that a damaged object or material can still carry without failing. Material toughness, fracture size and geometry as well as its orientation all contribute to residual strength.
References
Materials science | Residual strength | [
"Physics",
"Materials_science",
"Engineering"
] | 47 | [
"Applied and interdisciplinary physics",
"Classical mechanics stubs",
"Classical mechanics",
"Materials science",
"nan"
] |
3,139,577 | https://en.wikipedia.org/wiki/Jet%20group | In mathematics, a jet group is a generalization of the general linear group which applies to Taylor polynomials instead of vectors at a point. A jet group is a group of jets that describes how a Taylor polynomial transforms under changes of coordinate systems (or, equivalently, diffeomorphisms).
Overview
The k-th order jet group Gnk consists of jets of smooth diffeomorphisms φ: Rn → Rn such that φ(0)=0.
The following is a more precise definition of the jet group.
Let k ≥ 2. The differential of a function f: Rk → R can be interpreted as a section of the cotangent bundle of Rk given by df: Rk → T*Rk. Similarly, derivatives of order up to m are sections of the jet bundle Jm(Rk) = Rk × W, where
Here R* is the dual vector space to R, and Si denotes the i-th symmetric power. A smooth function f: Rk → R has a prolongation jmf: Rk → Jm(Rk) defined at each point p ∈ Rk by placing the i-th partials of f at p in the Si((R*)k) component of W.
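The displayed expression for the fiber W did not survive extraction. Based on the surrounding text (the i-th partials sit in the i-th symmetric power of the dual space), it is presumably the direct sum

```latex
W \;=\; \bigoplus_{i=1}^{m} S^{i}\!\left(\left(\mathbb{R}^{k}\right)^{*}\right),
```

possibly with an additional factor for the value of the function itself, depending on the convention used.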
Consider a point . There is a unique polynomial fp in k variables and of order m such that p is in the image of jmfp. That is, . The differential data x′ may be transferred to lie over another point y ∈ Rn as jmfp(y) , the partials of fp over y.
Provide Jm(Rn) with a group structure by taking
With this group structure, Jm(Rn) is a Carnot group of class m + 1.
Because of the properties of jets under function composition, Gnk is a Lie group. The jet group is a semidirect product of the general linear group and a connected, simply connected nilpotent Lie group. It is also in fact an algebraic group, since the composition involves only polynomial operations.
Notes
References
Lie groups | Jet group | [
"Mathematics"
] | 429 | [
"Algebra stubs",
"Mathematical structures",
"Lie groups",
"Algebraic structures",
"Algebra"
] |
3,139,692 | https://en.wikipedia.org/wiki/Lamb%E2%80%93Oseen%20vortex | In fluid dynamics, the Lamb–Oseen vortex models a line vortex that decays due to viscosity. This vortex is named after Horace Lamb and Carl Wilhelm Oseen.
Mathematical description
Oseen looked for a solution for the Navier–Stokes equations in cylindrical coordinates with velocity components of the form
where Γ is the circulation of the vortex core. The Navier–Stokes equations lead to
which, subject to the conditions that it is regular at r = 0 and becomes unity as r → ∞, leads to
where ν is the kinematic viscosity of the fluid. At t = 0, we have a potential vortex with concentrated vorticity at the axis; and this vorticity diffuses away as time passes.
The only non-zero vorticity component is in the axial direction, given by
The pressure field simply ensures the vortex rotates in the circumferential direction, providing the centripetal force
where ρ is the constant density.
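The displayed equations of this section were lost in extraction. The standard closed forms of the Lamb–Oseen vortex, written here from the usual references (with Γ the circulation, ν the kinematic viscosity and ρ the density) rather than quoted from the original, are:

```latex
v_\theta(r,t) \;=\; \frac{\Gamma}{2\pi r}\left(1 - e^{-r^{2}/(4\nu t)}\right),
\qquad
\omega_z(r,t) \;=\; \frac{\Gamma}{4\pi \nu t}\, e^{-r^{2}/(4\nu t)},
\qquad
\frac{\partial p}{\partial r} \;=\; \rho\,\frac{v_\theta^{2}}{r}.
```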
Generalized Oseen vortex
The generalized Oseen vortex may be obtained by looking for solutions of the form
that leads to the equation
Self-similar solution exists for the coordinate , provided , where is a constant, in which case . The solution for may be written according to Rott (1958) as
where is an arbitrary constant. For , the classical Lamb–Oseen vortex is recovered. The case corresponds to the axisymmetric stagnation point flow, where is a constant. When , , a Burgers vortex is obtained. For arbitrary , the solution becomes , where is an arbitrary constant. As , the Burgers vortex is recovered.
See also
The Rankine vortex and Kaufmann (Scully) vortex are common simplified approximations for a viscous vortex.
References
Vortices
Equations of fluid dynamics | Lamb–Oseen vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 343 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Fluid dynamics",
"Dynamical systems"
] |
3,139,737 | https://en.wikipedia.org/wiki/Batchelor%20vortex | In fluid dynamics, Batchelor vortices, first described by George Batchelor in a 1964 article, have been found useful in analyses of airplane vortex wake hazard problems.
The model
The Batchelor vortex is an approximate solution to the Navier–Stokes equations obtained using a boundary layer approximation. The physical reasoning behind this approximation is the assumption that the axial gradient of the flow field of interest is of much smaller magnitude than the radial gradient.
The axial, radial and azimuthal velocity components of the vortex are denoted , and respectively and can be represented in cylindrical coordinates as follows:
The parameters in the above equations are
, the free-stream axial velocity,
, the velocity scale (used for nondimensionalization),
, the length scale (used for nondimensionalization),
, a measure of the core size, with initial core size and representing viscosity,
, the swirl strength, given as a ratio between the maximum tangential velocity and the core velocity.
Note that the radial component of the velocity is zero and that the axial and azimuthal components depend only on .
We now write the system above in dimensionless form by scaling time by a factor . Using the same symbols for the dimensionless variables, the Batchelor vortex can be expressed in terms of the dimensionless variables as
where denotes the free stream axial velocity and is the Reynolds number.
If one lets and considers an infinitely large swirl number then the Batchelor vortex simplifies to the Lamb–Oseen vortex for the azimuthal velocity:
where is the circulation.
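The displayed equation for this limit was lost in extraction; written with the circulation Γ and a core radius R_c (a standard Lamb–Oseen-type profile, supplied here as an assumption about the intended form), the azimuthal velocity reads:

```latex
V_\theta(r) \;=\; \frac{\Gamma}{2\pi r}\left(1 - e^{-r^{2}/R_c^{2}}\right).
```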
References
External links
Continuous spectra of the Batchelor vortex (Authored by Xueri Mao and Spencer Sherwin and published by Imperial College London)
Equations of fluid dynamics
Vortices
Fluid dynamics | Batchelor vortex | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 358 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Dynamical systems",
"Chemical engineering",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
3,139,954 | https://en.wikipedia.org/wiki/Kaufmann%20vortex | The Kaufmann vortex, also known as the Scully model, is a mathematical model for a vortex taking account of viscosity. It uses an algebraic velocity profile. This vortex is not a solution of the Navier–Stokes equations.
Kaufmann and Scully's model for the velocity in the Θ direction is:
The model was suggested by W. Kaufmann in 1962, and later by Scully and Sullivan in 1972 at the Massachusetts Institute of Technology.
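The displayed profile was lost in extraction; the Scully/Kaufmann algebraic model is usually written with the circulation Γ and a core radius r_c as:

```latex
V_\theta(r) \;=\; \frac{\Gamma}{2\pi}\,\frac{r}{r^{2} + r_c^{2}},
```

which behaves like a solid-body rotation for r ≪ r_c and like a potential vortex Γ/(2πr) for r ≫ r_c.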
See also
Rankine vortex – a simpler, but more crude, approximation for a vortex.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity.
References
Equations of fluid dynamics
Vortices | Kaufmann vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 139 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Dynamical systems",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
3,140,673 | https://en.wikipedia.org/wiki/Dielectric%20breakdown%20model | Dielectric breakdown model (DBM) is a macroscopic mathematical model combining the diffusion-limited aggregation model with electric field. It was developed by Niemeyer, Pietronero, and Weismann in 1984. It describes the patterns of dielectric breakdown of solids, liquids, and even gases, explaining the formation of the branching, self-similar Lichtenberg figures.
See also
Eden growth model
Lichtenberg figure
Diffusion-limited aggregation
References
External links
Dielectric Breakdown Model
Electricity
Mathematical modeling
Electrical breakdown | Dielectric breakdown model | [
"Physics",
"Mathematics"
] | 106 | [
"Physical phenomena",
"Mathematical modeling",
"Applied mathematics",
"Electrical phenomena",
"Electrical breakdown"
] |
3,140,914 | https://en.wikipedia.org/wiki/Symmetric%20product%20of%20an%20algebraic%20curve | In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product
C × C × ... × C
or Cn by the group action of the symmetric group Sn on n letters permuting the factors. It exists as a smooth algebraic variety denoted by ΣnC. If C is a compact Riemann surface, ΣnC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients.
For C the projective line (say the Riemann sphere ∪ {∞} ≈ S2), its nth symmetric product ΣnC can be identified with complex projective space of dimension n.
If C has genus g ≥ 1 then the ΣnC are closely related to the Jacobian variety J of C. More accurately, for n taking values up to g, they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors.
For g = n we have ΣgC actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now but this does mean that for any rational function F on C
F(x1) + ... + F(xg)
makes sense as a rational function on J, for the xi staying away from the poles of F.
For n > g the mapping from ΣnC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). It has been studied in detail, for example by Kempf and Mukai.
Betti numbers and the Euler characteristic of the symmetric product
Let C be a smooth projective curve of genus g over the complex numbers C. The Betti numbers bi(ΣnC) of the symmetric products ΣnC for all n = 0, 1, 2, ... are given by the generating function
and their Euler characteristics e(ΣnC) are given by the generating function
Here we have set u = -1 and y = -p in the previous formula.
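The two generating functions did not survive extraction; the standard statements (Macdonald's formula and its specialization to Euler characteristics) are recorded here in generic variables q and t — the variables u, y and p mentioned in the sentence above belong to the original article's notation and are not reproduced:

```latex
\sum_{n \ge 0} \Big( \sum_{i} b_i\!\left(\Sigma^n C\right) t^{i} \Big) q^{n}
  \;=\; \frac{(1+qt)^{2g}}{(1-q)\,(1-qt^{2})},
\qquad
\sum_{n \ge 0} e\!\left(\Sigma^n C\right) q^{n} \;=\; (1-q)^{2g-2}.
```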
Notes
References
Algebraic curves
Symmetric functions | Symmetric product of an algebraic curve | [
"Physics",
"Mathematics"
] | 571 | [
"Algebra",
"Symmetric functions",
"Symmetry"
] |
3,141,360 | https://en.wikipedia.org/wiki/Prime%20decomposition%20of%203-manifolds | In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) finite collection of prime 3-manifolds.
A manifold is prime if it is not homeomorphic to any connected sum of manifolds, except for the trivial connected sum of the manifold with a sphere of the same dimension. If a 3-manifold is prime then either it is S2 × S1, or it is the non-orientable S2 bundle over S1,
or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of S2 over S1.
The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly. Every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable S2 bundles over S1. This sum is unique as long as we specify that each summand is either irreducible or a non-orientable S2 bundle over S1.
The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor.
References
3-manifolds
Manifolds
Theorems in differential geometry | Prime decomposition of 3-manifolds | [
"Mathematics"
] | 292 | [
"Theorems in differential geometry",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Manifolds",
"Theorems in geometry"
] |
3,142,105 | https://en.wikipedia.org/wiki/Bombieri%E2%80%93Lang%20conjecture | In arithmetic geometry, the Bombieri–Lang conjecture is an unsolved problem conjectured by Enrico Bombieri and Serge Lang about the Zariski density of the set of rational points of an algebraic variety of general type.
Statement
The weak Bombieri–Lang conjecture for surfaces states that if is a smooth surface of general type defined over a number field , then the points of do not form a dense set in the Zariski topology on .
The general form of the Bombieri–Lang conjecture states that if is a positive-dimensional algebraic variety of general type defined over a number field , then the points of do not form a dense set in the Zariski topology.
The refined form of the Bombieri–Lang conjecture states that if is an algebraic variety of general type defined over a number field , then there is a dense open subset of such that for all number field extensions over , the set of points in is finite.
History
The Bombieri–Lang conjecture was independently posed by Enrico Bombieri and Serge Lang. In a 1980 lecture at the University of Chicago, Enrico Bombieri posed a problem about the degeneracy of rational points for surfaces of general type. Independently in a series of papers starting in 1971, Serge Lang conjectured a more general relation between the distribution of rational points and algebraic hyperbolicity, formulated in the "refined form" of the Bombieri–Lang conjecture.
Generalizations and implications
The Bombieri–Lang conjecture is an analogue for surfaces of Faltings's theorem, which states that algebraic curves of genus greater than one only have finitely many rational points.
If true, the Bombieri–Lang conjecture would resolve the Erdős–Ulam problem, as it would imply that there do not exist dense subsets of the Euclidean plane all of whose pairwise distances are rational.
In 1997, Lucia Caporaso, Barry Mazur, Joe Harris, and Patricia Pacelli showed that the Bombieri–Lang conjecture implies a uniform boundedness conjecture for rational points: there is a constant depending only on and such that the number of rational points of any genus curve over any degree number field is at most .
References
Diophantine geometry
Unsolved problems in geometry
Conjectures | Bombieri–Lang conjecture | [
"Mathematics"
] | 447 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Conjectures",
"Mathematical problems"
] |
3,142,188 | https://en.wikipedia.org/wiki/I%20band%20%28NATO%29 | The NATO I band is the obsolete designation given to the radio frequencies from 8,000 to 10,000 MHz (equivalent to wavelengths between 3.75 and 3 cm) during the Cold War period. Since 1992, frequency allocations, allotment and assignments are in line with the NATO Joint Civil/Military Frequency Agreement (NJFA).
However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or in military operations, this system is still in use.
References
Radio spectrum
Microwave bands | I band (NATO) | [
"Physics"
] | 114 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
3,143,054 | https://en.wikipedia.org/wiki/Poroelasticity | Poroelasticity is a field in materials science and mechanics that studies the interaction between fluid flow, pressure and bulk solid deformation within a linear porous medium and it is an extension of elasticity and porous medium flow (diffusion equation). The deformation of the medium influences the flow of the fluid and vice versa. The theory was proposed by Maurice Anthony Biot (1935, 1941) as a theoretical extension of soil consolidation models developed to calculate the settlement of structures placed on fluid-saturated porous soils.
The theory of poroelasticity has been widely applied in geomechanics, hydrology, biomechanics, tissue mechanics, cell mechanics, and micromechanics.
An intuitive sense of the response of a saturated elastic porous medium to mechanical loading can be developed by thinking about, or experimenting with, a fluid-saturated sponge. If a fluid-saturated sponge is compressed, fluid will flow from the sponge. If the sponge is in a fluid reservoir and compressive pressure is subsequently removed, the sponge will reimbibe the fluid and expand. The volume of the sponge will also increase if its exterior openings are sealed and the pore fluid pressure is increased. The basic ideas underlying the theory of poroelastic materials are that the pore fluid pressure contributes to the total stress in the porous matrix medium and that the pore fluid pressure alone can strain the porous matrix medium. There is fluid movement in a porous medium due to differences in pore fluid pressure created by different pore volume strains associated with mechanical loading of the porous medium. In unconventional reservoir and source rocks for natural gas like coal and shales, there can be strain due to sorption of gases like methane and carbon dioxide on the porous rock surfaces. Depending on the gas pressure the induced sorption-based strain can be poroelastic or poroinelastic in nature.
Types of Poroelasticity
The theories of poroelasticity can be divided into two categories: static (or quasi-static) and dynamic theories, just like mechanics can be divided into statics and dynamics. The static poroelasticity considers processes in which the fluid movement and solid skeleton deformation occur simultaneously and affect each other. The static poroelasticity is predominant in the literature for poroelasticity; as a result, this term is used interchangeably with poroelasticity in many publications. This static poroelasticity theory is a generalization of the one-dimensional consolidation theory in soil mechanics. This theory was developed from Biot's work in 1941. The dynamic poroelasticity is proposed for understanding the wave propagation in both the liquid and solid phases of saturated porous materials. The inertial and associated kinetic energy, which are not considered in static poroelasticity, are included. This is especially necessary when the speed of the movement of the phases in the porous material is considerable, e.g., when vibration or stress waves are present. Dynamic poroelasticity was developed from Biot's work on the propagation of elastic waves in fluid-saturated media.
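For orientation, the linear (Biot) constitutive relations underlying the static theory can be written in the standard small-strain form below; the symbol choices (G and λ for the drained shear modulus and Lamé parameter, α for the Biot coefficient, M for the Biot modulus, ζ for the variation of fluid content, p for the pore pressure) are the usual ones and are not quoted from any specific source above:

```latex
\sigma_{ij} \;=\; 2G\,\varepsilon_{ij} + \lambda\,\varepsilon_{kk}\,\delta_{ij} - \alpha\,p\,\delta_{ij},
\qquad
\zeta \;=\; \alpha\,\varepsilon_{kk} + \frac{p}{M},
```

coupled to Darcy's law for the pore-fluid flux; together these give the coupled deformation–diffusion equations of static poroelasticity.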
Literature
References for the theory of poroelasticity:
See also
Advanced Simulation Library
References
Elasticity (physics)
Porous media | Poroelasticity | [
"Physics",
"Materials_science",
"Engineering"
] | 656 | [
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Porous media",
"Materials science",
"Physical properties"
] |
3,143,285 | https://en.wikipedia.org/wiki/Chemosterilant | A chemosterilant is a chemical compound that causes reproductive sterility in an organism. Chemosterilants are particularly useful in controlling the population of species that are known to cause disease, such as insects, or species that are, in general, economically damaging. The sterility induced by chemosterilants can have temporary or permanent effects. Chemosterilants can be used to target one or both sexes, and it prevents the organism from advancing to be sexually functional. They may be used to control pest populations by sterilizing males. The need for chemosterilants is a direct consequence of the limitations of insecticides. Insecticides are most effective in regions in which there is high vector density in conjunction with endemic transmission, and this may not always be the case. Additionally, the insects themselves will develop a resistance to the insecticide either on the target protein level or through avoidance of the insecticide in what is called a behavioral resistance. If an insect that has been treated with a chemosterilant mates with a fertile insect, no offspring will be produced. The intention is to keep the percent of sterile insects within a population constant, such that with each generation, there will be fewer offspring.
Early research and concerns
Research on chemosterilants began in the 1960s–1970s, but the effort was abandoned due to concerns regarding toxicity. However, with great advancements made in genetics and analysis of vectors, the search for safer chemosterilants has resumed in the 21st century. Initially, there were many concerns with using chemosterilants on an operational scale due to difficulties in finding the ideal small molecule. The molecule used as a chemosterilant must satisfy a certain criteria. Firstly, the molecule must be available at a low cost. The molecule must result in permanent sterility upon exposure through topical application or immersion of larvae into water. Additionally, the survivability of the sterile males must not be affected, and the chemosterilant should not be toxic to humans or the environment. The two promising agents in the beginning were aziridines thiotepa and bisazir, but they were unable to satisfy the criteria of minimal toxicity to humans as well as the vector's predators. Pyriproxyfen was another compound of interest since it is not toxic to humans, but it would not be possible to induce sterility in larvae due to the fact that it exists as a larvicide. Exposure of larvae to pyriproxyfen will essentially kill the larvae.
Examples of chemosterilants
Use of chemosterilants for non-surgical castration (dogs and cats)
There are many regions in which there is a population of cats and dogs that freely roam on the streets. The most conventional approach to controlling reproductive rates in companion animals is through surgical means. However, surgical intervention poses ethical concerns. Through the formulation of a non-surgical castration technique, animals would not have to undergo anesthesia, and would not have to experience post-surgical bleeding or infection of the area that has been operated on. Some examples of chemosterilants include CaCl2 and zinc gluconate. These are specifically known as necrosis-inducing agents, which cause the degeneration of cells in the testes, resulting in infertility. These kinds of chemicals are generally injected into male reproductive organs, such as the testes, vas deferens, or epididymis. When injected, they induce azoospermia, the absence of viable sperm cells in the semen. If no sperm cells are present, reproduction can no longer occur. There is, however, one complication that results from the use of necrosis-inducing agents. Many animals generally exhibit an inflammatory response directly after the injection. To avoid the pain and discomfort associated with necrosis-inducing agents, another form of sterilization, known as apoptosis-inducing agents, has been studied. If cells are signaled to perform apoptosis rather than being eliminated by a foreign substance, this will result in no inflammation in the area. Experiments using mice in vitro and ex vivo have demonstrated this. Using an apoptosis-inducing agent known as doxorubicin encapsulated in a nanoemulsion, and injecting it into mice, testicular cell death was observed. Inflammation was not observed in this case; however, more research still needs to be conducted with these materials, as the long-term impacts are unknown.
Effect of chemosterilants on the behavior of wandering male dogs in Puerto Natales, Chile
Chemosterilants can be useful to developing countries due to the fact that they have fewer resources and funds that can be allocated towards castration of their free-roaming animals. Additionally, the local culture opposes the removal of testes. This study, performed in 2015, was unable to draw conclusions about the effects of chemical sterilization on dog aggression, as not enough is known about the aggression displayed by free-roaming dogs, and thus, researchers were unable to objectively make a decision on this front. Using GPS technology to track the movement of the free-roaming male dogs, it was found that chemical sterilization, in comparison to surgical sterilization, did not have a significant impact on the range of their roaming around the city. Much more detailed studies need to be performed in this area, since this study was the first of its kind, had relatively small sample sizes, and the examination of behavior did not span a long enough time period.
Use of CaCl2 and zinc gluconate in cattle
The method of administration of CaCl2 and zinc gluconate is through a transvaginal injection of the chemical into the ovaries, and visualization is achieved through the use of an ultrasound. One group of cattle was only treated with CaCl2, one group was only treated with zinc gluconate, and one group was treated with both CaCl2 and zinc gluconate. Treatment with CaCl2 seems to be most promising, as the ovarian mass of the female cattle upon slaughter was less than cattle treated with zinc gluconate or the combination. The goal of treatment with CaCl2 is to cause ovarian atrophy with a minimal amount of pain.
Ornitrol in controlling the sparrow population
Another chemosterilant found to be effective is known as ornitrol. This chemosterilant was provided to sparrows by impregnating canary seeds, and this was used as a food source for a group of sparrows. There was a control group that was fed canary seeds without the ornitrol, and these birds laid almost twice as many eggs as group that was given ornitrol. It was deemed an effective chemosterilant in the study; however, after the removal of the chemosterilant from the diet, the birds were able to lay viable eggs as soon as 1–2 weeks later.
Commonly used chemosterilants
Two types of chemosterilants are commonly used:
Antimetabolites resemble a substance that the cell or tissue needs that the organism's body mistakes for a true metabolite and tries to incorporate them in its normal building processes. The fit of the chemical is not exactly right and the metabolic process comes to a halt.
Alkylating agents are a group of chemicals that act on chromosomes. These chemicals are extremely reactive, capable of intense cell destruction, damage to chromosomes and production of mutations.
See also
Sterile insect technique
References
Pest control techniques
Chemical compounds | Chemosterilant | [
"Physics",
"Chemistry"
] | 1,566 | [
"Chemical compounds",
"Molecules",
"Matter"
] |
3,143,591 | https://en.wikipedia.org/wiki/Euclid%27s%20theorem | Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proven by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that there exists at least one additional prime number not included in this list. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would also divide P (since P is the product of every number in the list). If p divides P and q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists that is not in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, Euclid denoted the arbitrary finite set of prime numbers as A, B, Γ.
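As an illustration of the construction (a sketch for this article, not part of Euclid's text), the following Python fragment forms P + 1 from a finite list of primes and returns its smallest prime factor, which by the argument above cannot belong to the list:

```python
def smallest_prime_factor(q):
    """Return the smallest prime factor of q (q itself if q is prime)."""
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d
        d += 1
    return q

def prime_not_in(primes):
    """Given a finite list of primes, return a prime that is not in it."""
    P = 1
    for p in primes:
        P *= p
    return smallest_prime_factor(P + 1)  # divides P + 1, so it is not in the list

print(prime_not_in([2, 3, 5, 7]))          # 211 (here P + 1 is itself prime)
print(prime_not_in([2, 3, 5, 7, 11, 13]))  # 59  (30031 = 59 x 509)
```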
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on logic, states, "Euclid's proof that there are infinitely many primes is not an indirect proof [...] The argument is sometimes formulated as an indirect proof by replacing it with the assumption 'Suppose are all the primes'. However, since this assumption isn't even used in the proof, the reformulation is pointless."
Variations
Several variations on Euclid's proof exist, including the following:
The factorial n! of a positive integer n is divisible by every integer from 2 to n, as it is the product of all of them. Hence, n! + 1 is not divisible by any of the integers from 2 to n, inclusive (it gives a remainder of 1 when divided by each). Hence n! + 1 is either prime or divisible by a prime larger than n. In either case, for every positive integer n, there is at least one prime bigger than n. The conclusion is that the number of primes is infinite.
Euler's proof
Another proof, by the Swiss mathematician Leonhard Euler, relies on the fundamental theorem of arithmetic: that every integer has a unique prime factorization. What Euler wrote (not with this modern notation and, unlike modern standards, not restricting the arguments in sums and products to any finite sets of integers) is equivalent to the statement that we have
where denotes the set of the first prime numbers, and is the set of the positive integers whose prime factors are all in
To show this, one expands each factor in the product as a geometric series, and distributes the product over the sum (this is a special case of the Euler product formula for the Riemann zeta function).
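The expansion step can be checked directly in exact arithmetic for a small truncated example; the sketch below (illustrative only) multiplies truncated geometric series for the primes 2, 3 and 5 and compares the result with the corresponding sum of reciprocals:

```python
from fractions import Fraction
from itertools import product as cartesian

primes, m = (2, 3, 5), 4   # a few primes, exponents truncated at m

# left side: product of truncated geometric series 1 + 1/p + ... + 1/p**m
lhs = Fraction(1)
for p in primes:
    lhs *= sum(Fraction(1, p**i) for i in range(m + 1))

# right side: sum of 1/n over all n = 2**a * 3**b * 5**c with 0 <= a, b, c <= m
rhs = sum(Fraction(1, 2**a * 3**b * 5**c)
          for a, b, c in cartesian(range(m + 1), repeat=3))

assert lhs == rhs   # exact equality, term by term, by distributing the product
print(lhs)
```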
In the penultimate sum, every product of primes appears exactly once, so the last equality is true by the fundamental theorem of arithmetic. In his first corollary to this result Euler denotes by a symbol similar to the "absolute infinity" and writes that the infinite sum in the statement equals the "value" , to which the infinite product is thus also equal (in modern terminology this is equivalent to saying that the partial sum up to of the harmonic series diverges asymptotically like ). Then in his second corollary, Euler notes that the product
converges to the finite value 2, and there are consequently more primes than squares. This proves Euclid's Theorem.
In the same paper (Theorem 19) Euler in fact used the above equality to prove a much stronger theorem that was unknown before him, namely that the series
is divergent, where denotes the set of all prime numbers (Euler writes that the infinite sum equals , which in modern terminology is equivalent to saying that the partial sum up to of this series behaves asymptotically like ).
Erdős's proof
Paul Erdős gave a proof that also relies on the fundamental theorem of arithmetic. Every positive integer has a unique factorization into a square-free number and a square number . For example, .
Let be a positive integer, and let be the number of primes less than or equal to . Call those primes . Any positive integer which is less than or equal to can then be written in the form
where each is either or . There are ways of forming the square-free part of . And can be at most , so . Thus, at most numbers can be written in this form. In other words,
Or, rearranging, , the number of primes less than or equal to , is greater than or equal to . Since was arbitrary, can be as large as desired by choosing appropriately.
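The counting inequality behind the argument (every n ≤ N is a square-free number times a square, so N ≤ 2^π(N)·⌊√N⌋, hence π(N) ≥ log2(N)/2) can be checked numerically; the following sketch is only an illustration:

```python
from math import isqrt, log2

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for i in range(2, isqrt(N) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

for N in (10, 100, 10_000, 1_000_000):
    pi_N = len(primes_up_to(N))
    assert 2**pi_N * isqrt(N) >= N                 # the counting inequality
    print(N, pi_N, ">=", round(log2(N) / 2, 2))    # pi(N) versus its lower bound
```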
Furstenberg's proof
In the 1950s, Hillel Furstenberg introduced a proof by contradiction using point-set topology.
Define a topology on the integers , called the evenly spaced integer topology, by declaring a subset to be an open set if and only if it is either the empty set, , or it is a union of arithmetic sequences (for ), where
Then a contradiction follows from the property that a finite set of integers cannot be open and the property that the basis sets are both open and closed, since
cannot be closed because its complement is finite, but is closed since it is a finite union of closed sets.
Recent proofs
Proof using the inclusion-exclusion principle
Juan Pablo Pinasco has written the following proof.
Let p1, ..., pN be the smallest N primes. Then by the inclusion–exclusion principle, the number of positive integers less than or equal to x that are divisible by one of those primes is
Dividing by x and letting x → ∞ gives
This can be written as
If no other primes than p1, ..., pN exist, then the expression in (1) is equal to and the expression in (2) is equal to 1, but clearly the expression in (3) is not equal to 1. Therefore, there must be more primes than p1, ..., pN.
Proof using Legendre's formula
In 2010, Junho Peter Whang published the following proof by contradiction. Let k be any positive integer. Then according to Legendre's formula (sometimes attributed to de Polignac)
where
But if only finitely many primes exist, then
(the numerator of the fraction would grow singly exponentially while by Stirling's approximation the denominator grows more quickly than singly exponentially),
contradicting the fact that for each k the numerator is greater than or equal to the denominator.
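Independently of the contradiction argument itself, Legendre's formula — the exponent of a prime p in k! equals the sum of the integer parts of k/p^i — is easy to verify numerically; the sketch below is illustrative only:

```python
from math import factorial

def legendre_exponent(p, k):
    """Exponent of the prime p in k!, via the sum of floor(k / p**i)."""
    e, q = 0, p
    while q <= k:
        e += k // q
        q *= p
    return e

def exponent_in(n, p):
    """Exponent of p in n, by repeated division (for the cross-check)."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

for p in (2, 3, 5, 7):
    for k in (10, 25, 100):
        assert legendre_exponent(p, k) == exponent_in(factorial(k), p)
print(legendre_exponent(2, 100))   # 97, the power of 2 dividing 100!
```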
Proof by construction
Filip Saidak gave the following proof by construction, which does not use reductio ad absurdum or Euclid's lemma (that if a prime p divides ab then it must divide a or b).
Since each natural number greater than 1 has at least one prime factor, and two successive numbers n and (n + 1) have no factor in common, the product n(n + 1) has more different prime factors than the number n itself. So the chain of pronic numbers:1×2 = 2 {2}, 2×3 = 6 {2, 3}, 6×7 = 42 {2, 3, 7}, 42×43 = 1806 {2, 3, 7, 43}, 1806×1807 = 3263442 {2, 3, 7, 43, 13, 139}, · · ·provides a sequence of unlimited growing sets of primes.
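The construction is easy to reproduce computationally; the following sketch (illustrative only) iterates n → n(n + 1) and prints the growing sets of prime factors listed above:

```python
def prime_factors(n):
    """Set of distinct prime factors of n (trial division)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

n = 2
for _ in range(5):
    print(n, sorted(prime_factors(n)))
    n = n * (n + 1)   # next pronic number; gains at least one new prime factor
```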
Proof using the incompressibility method
Suppose there were only k primes (p1, ..., pk). By the fundamental theorem of arithmetic, any positive integer n could then be represented as
where the non-negative integer exponents ei together with the finite-sized list of primes are enough to reconstruct the number. Since for all i, it follows that for all i (where denotes the base-2 logarithm). This yields an encoding for n of the following size (using big O notation):
bits.
This is a much more efficient encoding than representing n directly in binary, which takes bits. An established result in lossless data compression states that one cannot generally compress N bits of information into fewer than N bits. The representation above violates this by far when n is large enough since . Therefore, the number of primes must not be finite.
Proof using an even-odd argument
Romeo Meštrović used an even-odd argument to show that if the number of primes is not infinite then 3 is the largest prime, a contradiction.
Suppose that are all the prime numbers. Consider and note that by assumption all positive integers relatively prime to it are in the set . In particular, is relatively prime to and so is . However, this means that is an odd number in the set , so , or . This means that must be the largest prime number which is a contradiction.
The above proof continues to work if is replaced by any prime with , the product becomes and even vs. odd argument is replaced with a divisible vs. not divisible by argument. The resulting contradiction is that must, simultaneously, equal and be greater than , which is impossible.
Stronger results
The theorems in this section simultaneously imply Euclid's theorem and other results.
Dirichlet's theorem on arithmetic progressions
Dirichlet's theorem states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d.
Prime number theorem
Let be the prime-counting function that gives the number of primes less than or equal to , for any real number . The prime number theorem then states that is a good approximation to , in the sense that the limit of the quotient of the two functions and as increases without bound is 1:
Using asymptotic notation this result can be restated as
This yields Euclid's theorem, since
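Numerically (an illustration only, not part of the proof), the growth of the prime-counting function can be compared with x/ln x:

```python
from math import isqrt, log

def prime_count(x):
    """pi(x) by a simple sieve (adequate for the modest x used here)."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, isqrt(x) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

for x in (10**3, 10**4, 10**5, 10**6):
    approx = x / log(x)
    print(x, prime_count(x), round(approx), round(prime_count(x) / approx, 3))
# the last column (the ratio) drifts slowly down toward 1 as x grows
```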
Bertrand–Chebyshev theorem
In number theory, Bertrand's postulate is a theorem stating that for any integer n > 1, there always exists at least one prime number p such that n < p < 2n.
Equivalently, writing π(x) for the prime-counting function (the number of primes less than or equal to x), the theorem asserts that π(2n) − π(n) ≥ 1 for all n ≥ 1.
This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all numbers in the interval
His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem.
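The postulate is easy to verify directly for small n; the following sketch (illustrative only) checks that a prime strictly between n and 2n exists for every n from 2 to 2000:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(2, 2001):
    assert any(is_prime(p) for p in range(n + 1, 2 * n)), n
print("for every n in 2..2000 there is a prime p with n < p < 2n")
```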
Notes
References
External links
Euclid's Elements, Book IX, Prop. 20 (Euclid's proof, on David Joyce's website at Clark University)
Articles containing proofs
Theorems about prime numbers | Euclid's theorem | [
"Mathematics"
] | 2,420 | [
"Mathematical objects",
"Infinity",
"Theorems about prime numbers",
"Theorems in number theory",
"Articles containing proofs"
] |
3,144,661 | https://en.wikipedia.org/wiki/Yakov%20Frenkel |
Yakov Il'ich Frenkel (10 February 1894 – 23 January 1952) was a Soviet physicist renowned for his works in the field of condensed-matter physics. He is also known as Jacov Frenkel, frequently using the name J. Frenkel in publications in English.
Early years
He was born to a Jewish family in Rostov on Don, in the Don Host Oblast of the Russian Empire on 10 February 1894. His father was involved in revolutionary activities and spent some time in internal exile to Siberia; after the danger of pogroms started looming in 1905, the family spent some time in Switzerland, where Yakov Frenkel began his education. In 1912, while studying in the Karl May Gymnasium in St. Petersburg, he completed his first physics work on the Earth's magnetic field and atmospheric electricity. This work attracted Abram Ioffe's attention and later led to collaboration with him. He considered moving to the USA (which he visited in the summer of 1913, supported by money hard-earned by tutoring) but was nevertheless admitted to St. Petersburg University in the winter semester of 1913, at which point any emigration plans ended. Frenkel graduated from the university in three years and remained there to prepare for a professorship (his oral exam for the master's degree was delayed due to the events of the October revolution). His first scientific paper came to light in 1917.
Early scientific career
In the last years of the Great War and until 1921 Frenkel was involved (along with Igor Tamm) in the foundation of the University in Crimea (his family moved to Crimea due to the deteriorating health of his mother). From 1921 till the end of his life, Frenkel worked at the Physico-Technical Institute. Beginning in 1922, Frenkel published a book virtually every year. In 1924, he published 16 papers (of which 5 were basically German translations of his other publications in Russian), three books, and edited multiple translations. He was the author of the first theoretical course in the Soviet Union. For his distinguished scientific service, he was elected a corresponding member of the USSR Academy of Sciences in 1929.
He married Sara Isakovna Gordin in 1920. They had two sons, Sergei and Viktor (Victor). He served as a visiting professor at the University of Minnesota in the United States for a short period of time around 1930.
Early works of Yakov Frenkel focused on electrodynamics, statistical mechanics and relativity, though he soon switched to the quantum theory. Paul Ehrenfest, whom he met at a conference in Leningrad, encouraged him to go abroad for collaborations, which he did in 1925–1926, mainly in Hamburg and Göttingen, and met with Albert Einstein in Berlin. It was during this period that Schrödinger published his groundbreaking papers on wave mechanics; Heisenberg's had appeared shortly before. Frenkel enthusiastically entered the field through discussions (he reportedly discovered what is now called the Klein–Gordon equation simultaneously with Oskar Klein), but his first scientific paper on the matter (considering electrodynamics in metals) was published in 1927.
In 1927–1930, he discovered the reason for the existence of domains in ferromagnetics; worked on the theory of resonance broadening and collision broadening of the spectral lines; developed a theory of electric resistance on the boundary of two metals and of a metal and a semiconductor.
Celebrated discoveries
In conducting research on the molecular theory of the condensed state (1926), he introduced the notion of the hole in a crystal, three years before Paul Dirac introduced his eponymous sea. The Frenkel defect became firmly established in the physics of solids and liquids. In the 1930s, his research was supplemented with works on the theory of plastic deformation. His theory, now known as the Frenkel–Kontorova model, is important in the study of dislocations. Tatyana Kontorova was then a PhD candidate working with Frenkel.
In 1930 to 1931, Frenkel showed that neutral excitation of a crystal by light is possible, with an electron remaining bound to a hole created at a lattice site identified as a quasiparticle, the exciton. Mention should be made of Frenkel's works on the theory of metals, nuclear physics (the liquid drop model of the nucleus, in 1936), and semiconductors.
In 1930, his son Viktor Frenkel was born. Viktor became a prominent historian of science, writing a number of biographies of prominent physicists including an enlarged version of Yakov Ilich Frenkel, published in 1996.
In 1934, Frenkel outlined the formalism for the multi-configuration self-consistent field method, later rediscovered and developed by Douglas Hartree.
He contributed to semiconductor and insulator physics by proposing a theory, which is now commonly known as the Poole–Frenkel effect, in 1938. "Poole" refers to H. H. Poole (Horace Hewitt Poole, 1886–1962), Ireland. Poole reported experimental results on the conduction in insulators and found an empirical relationship between conductivity and electrical field. Frenkel later developed a microscopic model, similar to the Schottky effect, to explain Poole's results more accurately. In this paper published in USA, Frenkel only very briefly mentioned an empirical relationship as Poole's law. Frenkel cited Poole's paper when he wrote a longer article in a Soviet journal.
During the 1930s, Frenkel and Ioffe opposed, with remarkable courage, dangerous tendencies in Soviet physics that tied science to materialist ideology. As a result of these actions, Soviet physics never descended to the depths that biology did. Still, Frenkel subsequently had to forgo publishing several papers, fearing that doing so might have unfortunate consequences.
Yakov Frenkel was involved in the studies of the liquid phase, too, since the mid-1930s (he undertook some research in colloids) and during the World War II, when the institute was evacuated to Kazan. The results of his more than twenty years of study of the theory of liquid state were generalized in the classic monograph "Kinetic theory of liquids".
Later years
During the wartime, he worked on contemporary practical problems to help his country in sustaining the harsh fight. After the war, Frenkel focussed on seismoelectrics, also proposing that sound waves in metals might affect electric phenomena. He subsequently worked mainly in the field of atmospheric effects, but did not abandon his other interests, publishing several papers in nuclear physics.
Frenkel died in Leningrad in 1952. His son, Victor Frenkel, wrote a biography of his father, Yakov Ilich Frenkel: His work, life and letters. This book, originally written in Russian, has also been translated and published in English.
See also
Chandrasekhar limit
Poromechanics
Solid state ionics
References
English translations of books by Frenkel
Kinetic Theory of Liquids, 2nd edition (Dover Publications, 1950)
Literature
Victor Yakovlevich Frenkel: Yakov Ilich Frenkel. His work, life and letters. (original: (ru) Яков Ильич Френкель, translated by Alexander S. Silbergleit), Birkhäuser, Basel / Boston / Berlin 2001 (English).
Online
External links
Biography of Jacov Il'ich Frenkel
1894 births
1952 deaths
Scientists from Rostov-on-Don
People from Don Host Oblast
Russian materials scientists
Jewish Russian physicists
Soviet physicists
Soviet nuclear physicists
Corresponding Members of the USSR Academy of Sciences
Condensed matter physicists
Russian scientists | Yakov Frenkel | [
"Physics",
"Materials_science"
] | 1,568 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
17,623,931 | https://en.wikipedia.org/wiki/Kinesin%208 | The Kinesin 8 Family are a subfamily of the molecular motor proteins known as kinesins. Most kinesins transport materials or cargo around the cell while traversing along microtubule polymer tracks with the help of ATP-hydrolysis-created energy. The Kinesin 8 family has been shown to play an important role in chromosome alignment during mitosis. Kinesin 8 family members KIF18A in humans and Kip3 in yeast have been shown to be in vivo plus-end directed microtubule depolymerizers. During prometaphase of mitosis, the microtubules attach to the kinetochores of sister chromatids. Kinesin 8 is thought to play some role in this process, as knockdown of this protein via siRNA produces a phenotype of sister chromatids that are unable to align properly.
References
External links
Video Illustrations of Kinesin 8 depletion
Motor proteins | Kinesin 8 | [
"Chemistry"
] | 196 | [
"Molecular machines",
"Motor proteins"
] |
17,626,196 | https://en.wikipedia.org/wiki/IPTC%207901 | IPTC 7901 is a news service text markup specification published by the International Press Telecommunications Council that was designed to standardize the content and structure of text news articles. It was formally approved in 1979, and is still the world's most common way of transmitting news articles to newspapers, web sites and broadcasters from news services.
Using fixed metadata fields and a series of control and other special characters, IPTC 7901 was designed to feed text stories to both teleprinters and computer-based news editing systems. Stories can be assigned to broad categories (such as sports or culture) and be given a higher or lower priority based upon importance.
Although superseded in the early 1990s by IPTC Information Interchange Model and later by the XML-based News Industry Text Format, 7901's huge existing user base has persisted.
IPTC 7901 is closely related to ANPA-1312 (also known as ANPA 84-2 and later 89-3) of the Newspaper Association of America.
C0 control codes
The standard replaces several of the ASCII control codes:
External links
IPTC Website
specification on iptc.org
References
Metadata | IPTC 7901 | [
"Technology"
] | 231 | [
"Computing stubs",
"Metadata",
"Data"
] |
1,625,082 | https://en.wikipedia.org/wiki/Weak%20isospin | In particle physics, weak isospin is a quantum number relating to the electrically charged part of the weak interaction: Particles with half-integer weak isospin can interact with the bosons; particles with zero weak isospin do not.
Weak isospin is a construct parallel to the idea of isospin under the strong interaction. Weak isospin is usually given the symbol T or I, with the third component written as T3 or I3. T3 is more important than T; typically "weak isospin" is used as a short form of the proper term "3rd component of weak isospin". It can be understood as the eigenvalue of a charge operator.
Notation
This article uses T and T3 for weak isospin and its projection.
Regarding ambiguous notation, T is also used to represent the 'normal' (strong force) isospin, and the same holds for its third component, a.k.a. T3 or I3. Aggravating the confusion, T is also used as the symbol for the topness quantum number.
Conservation law
The weak isospin conservation law relates to the conservation of T3; weak interactions conserve T3. It is also conserved by the electromagnetic and strong interactions. However, interaction with the Higgs field does not conserve T3, as directly seen in propagating fermions, which mix their chiralities by the mass terms that result from their Higgs couplings. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time, even in vacuum. Interaction with the Higgs field changes particles' weak isospin (and weak hypercharge). Only a specific combination of them, the electric charge, is conserved.
The electric charge, Q, is related to weak isospin, T3, and weak hypercharge, YW, by Q = T3 + YW/2.
In 1961 Sheldon Glashow proposed this relation by analogy to the Gell-Mann–Nishijima formula for charge to isospin.
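As an illustration of the relation, the sketch below applies Q = T3 + YW/2 to a few left-handed fermions; the hypercharge values are the conventional assignments used with this convention and are included here only for the example:

```python
from fractions import Fraction as F

# (T3, Y_W) for a few left-handed fermions; illustrative only
left_handed = {
    "u_L":  (F(1, 2),  F(1, 3)),
    "d_L":  (F(-1, 2), F(1, 3)),
    "nu_L": (F(1, 2),  F(-1)),
    "e_L":  (F(-1, 2), F(-1)),
}

for name, (t3, yw) in left_handed.items():
    print(name, "Q =", t3 + yw / 2)   # 2/3, -1/3, 0, -1 respectively
```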
Relation with chirality
Fermions with negative chirality (also called "left-handed" fermions) have T = 1/2 and can be grouped into doublets with T3 = ±1/2 that behave the same way under the weak interaction. By convention, electrically charged fermions are assigned T3 with the same sign as their electric charge.
For example, up-type quarks (u, c, t) have T3 = +1/2 and always transform into down-type quarks (d, s, b), which have T3 = −1/2, and vice versa. On the other hand, a quark never decays weakly into a quark of the same T3. Something similar happens with left-handed leptons, which exist as doublets containing a charged lepton (e−, μ−, τ−) with T3 = −1/2 and a neutrino (νe, νμ, ντ) with T3 = +1/2. In all cases, the corresponding anti-fermion has reversed chirality ("right-handed" antifermion) and reversed sign of T3.
Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have T = 0 and form singlets that do not undergo charged weak interactions.
Particles with T = 0 do not interact with W bosons; however, they do all interact with the Z0 boson.
Neutrinos
Lacking any distinguishing electric charge, neutrinos and antineutrinos are assigned the T3 opposite to that of their corresponding charged lepton; hence, all left-handed neutrinos are paired with negatively charged left-handed leptons with T3 = −1/2, so those neutrinos have T3 = +1/2. Since right-handed antineutrinos are paired with positively charged right-handed anti-leptons with T3 = +1/2, those antineutrinos are assigned T3 = −1/2. The same result follows from particle-antiparticle charge & parity reversal, between left-handed neutrinos (T3 = +1/2) and right-handed antineutrinos (T3 = −1/2).
Weak isospin and the W bosons
The symmetry associated with weak isospin is SU(2) and requires gauge bosons with T = 1 (W+, W−, and W0) to mediate transformations between fermions with half-integer weak isospin charges. T = 1 implies that the W bosons have three different values of T3:
The W+ boson (T3 = +1) is emitted in transitions (T3 = +1/2) → (T3 = −1/2).
The W0 boson (T3 = 0) would be emitted in weak interactions where T3 does not change, such as neutrino scattering.
The W− boson (T3 = −1) is emitted in transitions (T3 = −1/2) → (T3 = +1/2).
Under electroweak unification, the W0 boson mixes with the weak hypercharge gauge boson B; both have T3 = 0. This results in the observed Z0 boson and the photon of quantum electrodynamics; the resulting Z0 and photon likewise have zero weak isospin.
See also
Weak hypercharge
Weak charge
Mathematical formulation of the Standard Model
Footnotes
References
Standard Model
Flavour (particle physics)
Electroweak theory
| Weak isospin | [
"Physics"
] | 952 | [
"Standard Model",
"Physical phenomena",
"Electroweak theory",
"Fundamental interactions",
"Particle physics"
] |
1,625,876 | https://en.wikipedia.org/wiki/Parkes%20process | The Parkes process is a pyrometallurgical industrial process for removing silver from lead during the production of bullion. It is an example of liquid–liquid extraction.
The process takes advantage of two liquid-state properties of zinc. The first is that zinc is immiscible with lead, and the other is that silver is 3000 times more soluble in zinc than it is in lead. When zinc is added to liquid lead that contains silver as a contaminant, the silver preferentially migrates into the zinc. Because the zinc is immiscible in the lead it remains in a separate layer and is easily removed. The zinc-silver solution is then heated until the zinc vaporizes, leaving nearly pure silver. If gold is present in the liquid lead, it can also be removed and isolated by the same process.
The process was patented by Alexander Parkes in 1850. Parkes received two additional patents in 1852.
The Parkes process was initially not adopted in the United States, due to the low native production of lead. The problems were overcome during the 1880s, and by 1923 only the Parkes process was used.
See also
Lead smelter
Pattison's Process
Patio process
References
Lead
Silver
Metallurgical processes | Parkes process | [
"Chemistry",
"Materials_science"
] | 255 | [
"Metallurgical processes",
"Metallurgy"
] |
1,626,485 | https://en.wikipedia.org/wiki/Gromatici | Gromatici (from Latin groma or gruma, a surveyor's pole) or agrimensores was the name for land surveyors amongst the ancient Romans. The "gromatic writers" were technical writers who codified their techniques of surveying, most of whose preserved writings are found in the Corpus Agrimensorum Romanorum.
History
Roman Republic
At the foundation of a colony and the assignation of lands the auspices were taken, for which purpose the presence of the augur was necessary. But the business of the augur did not extend beyond the religious part of the ceremony: the division and measurement of the land were made by professional measurers. These were the finitores mentioned by the early writers, who in the later periods were called mensores and agrimensores. The business of a finitor could only be done by a free man, and the honourable nature of his office is indicated by the rule that there was no bargain for his services, but he received his pay in the form of a gift. These finitores appear also to have acted as judices, under the name of arbitri (singular: arbiter), in those disputes about boundaries which were purely of a technical, not a legal, character. The first professional surveyor mentioned is Lucius Decidius Saxa, who was employed by Mark Antony in the measurement of camps.
Roman Empire
Under the empire the observance of the auspices in the fixing of camps and the establishment of military colonies was less regarded, and the practice of the agrimensores was greatly increased. The distribution of land amongst the veterans, the increase in the number of military colonies, the settlement of Italian peasants in the provinces, the general survey of the empire under Augustus, the separation of private and state domains, led to the establishment of a recognized professional corporation of surveyors. The practice was also codified as a system by technical writers such as Julius Frontinus, Hyginus, Siculus Flaccus, and other Gromatic writers, as they are sometimes termed. The teachers of geometry in the large cities of the empire used to give practical instruction on the system of gromatics. This practical geometry was one of the liberalia studia; but the professors of geometry and the teachers of law were not exempted from the obligation of being tutores, and from other such burdens (Frag. Vat. § 150), a fact which shows the subordinate rank which the teachers of elementary science then held.
The agrimensor could mark out the limits of the centuriae, and restore the boundaries where they were confused, but he could not assign without a commission from the emperor. Military persons of various classes are also sometimes mentioned as practising surveying, and settling disputes about boundaries. The lower rank of the professional agrimensor, as contrasted with the finitor of earlier periods, is shown by the fact that in the imperial period there might be a contract with an agrimensor for paying him for his services.
Late empire
The agrimensor of the later period was merely employed in disputes as to the boundaries of properties. The foundation of colonies and the assignation of lands were now less common, though we read of colonies being established to a late period of the empire, and the boundaries of the lands must have been set out in due form. Those who marked out the ground in camps for the soldiers' tents are also called mensores, but they were military men. The functions of the agrimensor are shown by a passage of Hyginus: in all questions as to determining boundaries by means of the marks (signa), the area of surfaces, and explaining maps and plans, the services of the agrimensor were required; in all questions that concerned property, right of road, enjoyment of water, and other easements (servitutes) they were not required, for these were purely legal questions. Generally, therefore, they were either employed by the parties themselves to settle boundaries, or they received their instructions for that purpose from a judex. In this capacity they were advocati. But they also acted as judices, and could give a final decision in that class of smaller questions which concerned the quinque pedes of the Lex Mamilia (the boundary space which that law exempted from usucapio), as appears from Frontinus.
Under the Christian emperors the name mensores was changed into agrimensores to distinguish them from another class of mensores, who are mentioned in the codes of Theodosius I and Justinian I. By a rescript of Constantine I and Constans (344 AD) the teachers and learners of geometry received immunity from civil burdens. According to a constitution of Theodosius II and Valentinian III (440 AD), they received jurisdiction in questions of alluvio; but some writers doubt whether this crucial passage is genuine. According to another constitution of the same emperors, the agrimensor was to receive an aureus from each of any three bordering proprietors whose boundaries he settled, and if he set a limes right between proprietors, he received an aureus for each twelfth part of the property through which he restored the limes. Further, by another constitution of the same emperors, the young agrimensores were to be called "clarissimi" while they were students, and when they began to practise their profession, "spectabiles" (Jean-Baptiste Dureau de la Malle, Économie Politique des Romains, vol. i, p. 170).
Writers and works
The earliest of the gromatic writers was Frontinus, whose De agrorum qualitate, dealing with the legal aspect of the art, was the subject of a commentary by Aggenus Urbicus, a Christian schoolmaster. Under Trajan a certain Balbus, who had accompanied the emperor on his Dacian campaign, wrote a still extant manual of geometry for land surveyors (Expositio et ratio omnium formarum or mensurarum, probably after a Greek original by Hero), dedicated to a certain Celsus who had invented an improvement in a gromatic instrument (perhaps the dioptra, resembling the modern theodolite); for the treatises of Hyginus see that name.
Somewhat later than Trajan was Siculus Flaccus (De condicionibus agrorum, extant), while the most curious treatise on the subject, written in barbarous Latin and entitled Casae litterarum (long a school textbook) is the work of a certain Innocentius (4th-5th century). It is doubtful whether Boetius is the author of the treatises attributed to him. The Gromatici veteres also contains extracts from official registers (probably belonging to the 5th century) of colonial and other land surveys, lists and descriptions of boundary stones, and extracts from the Theodosian Codex.
According to Mommsen, the collection had its origin during the 5th century in the office of a vicarius (diocesan governor) of Rome, who had a number of surveyors under him. The surveyors were known by various names: decempedator (with reference to the instrument used); finitor, metator or mensor castrorum in republican times; togati Augustorum as imperial civil officials; professor, auctor as professional instructors.
The best edition of the Gromatici is by Karl Lachmann and others (1848) with supplementary volume, Die Schriften der römischen Feldmesser (1852). The 1913 edition of Carl Olof Thulin contains only a few works. The 2000 edition of Brian Campbell is much broader and also contains an English translation.
See also
Bematist
Triangulation (surveying)#History
References
Further reading
Campbell, Brian. 1996. "Shaping the Rural Environment: Surveyors in Ancient Rome." Journal of Roman Studies 86:74–99.
Campbell, J. B. 2000. The Writings of the Roman Land Surveyors: Introduction, Text, Translation and Commentary. London: Society for the Promotion of Roman Studies.
Classen, C. Joachim. 1994. "On the Training of the Agrimensores in Republican Rome and Related Problems: Some Preliminary Observations." Illinois Classical Studies 19:161-170.
Cuomo, Serafina. 2000. "Divide and Rule: Frontinus and Roman Land-Surveying." Studies in the History and Philosophy of Science 31A:189–202.
Dilke, Oswald Ashton Wentworth. 1967. "Illustrations from Roman Surveyors’ Manuals." Imago Mundi 21:9–29.
Dilke, Oswald Ashton Wentworth. 1971. The Roman Land Surveyors: An Introduction to the Agrimensores. Newton Abbot, UK: David and Charles.
Duncan-Jones, R. P. 1976. "Some Configurations of Landholding in the Roman Empire." In Studies in Roman Property. Edited by M. I. Finley, 7–24. Cambridge, UK, and New York: Cambridge Univ. Press.
Gargola, Daniel J. 1995. Lands, Laws and Gods: Magistrates and Ceremony in the Regulation of Public Lands in Republican Rome. Chapel Hill: Univ. of North Carolina Press.
Lewis, Michael Jonathan Taunton. 2001. Surveying Instruments of Greece and Rome. Cambridge, UK, and New York: Cambridge Univ. Press.
Nicolet, Claude. 1991. "Control of the Fiscal Sphere: The Cadastres." In Space, Geography, and Politics in the Early Roman Empire.'' By Claude Nicolet, 149–169. Ann Arbor: Univ. of Michigan Press.
Surveying
Ancient Roman technology
History of measurement | Gromatici | [
"Engineering"
] | 2,014 | [
"Surveying",
"Civil engineering"
] |
1,627,004 | https://en.wikipedia.org/wiki/Javelin%20argument | The javelin argument, credited to Lucretius, is an ancient logical argument that the universe, or cosmological space, must be infinite. The javelin argument was used to support the Epicurean thesis about the universe. It was also constructed to counter the Aristotelian view that the universe is finite.
Overview
Lucretius introduced the concept of the javelin argument in his discourse of space and how it can be bound. He explained:
For whatever bounds it, that thing must itself be bounded likewise; and to this bounding thing there must be a bound again, and so on for ever and ever throughout all immensity. Suppose, however, for a moment, all existing space to be bounded, and that a man runs forward to the uttermost borders, and stands upon the last verge of things, and then hurls forward a winged javelin,— suppose you that the dart, when hurled by the vivid force, shall take its way to the point the darter aimed at, or that something will take its stand in the path of its flight, and arrest it? For one or other of these things must happen. There is a dilemma here that you never can escape from.
The javelin argument has two implications. If the hurled javelin flew onwards unhindered, it meant that the man running was not at the edge of the universe because there is something beyond the edge where the weapon flew. On the other hand, if it did not, the man was still not at the edge because there must be an obstruction beyond that stopped the javelin. However, the argument assumes incorrectly that a finite universe must necessarily have a "limit" or edge. The argument fails in the case that the universe might be shaped like the surface of a hypersphere or torus. (Consider a similar fallacious argument that the Earth's surface must be infinite in area: because otherwise one could go to the Earth's edge and throw a javelin, proving that the Earth's surface continued wherever the javelin hit the ground.)
References
Arguments
Epicureanism
Atomism
Ancient Greek physics
Physical cosmology | Javelin argument | [
"Physics",
"Astronomy"
] | 426 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
1,627,114 | https://en.wikipedia.org/wiki/Baker%27s%20map | In dynamical systems theory, the baker's map is a chaotic map from the unit square into itself. It is named after a kneading operation that bakers apply to dough: the dough is cut in half, and the two halves are stacked on one another, and compressed.
The baker's map can be understood as the bilateral shift operator of a bi-infinite two-state lattice model. The baker's map is topologically conjugate to the horseshoe map. In physics, a chain of coupled baker's maps can be used to model deterministic diffusion.
As with many deterministic dynamical systems, the baker's map is studied by its action on the space of functions defined on the unit square. The baker's map defines an operator on the space of functions, known as the transfer operator of the map. The baker's map is an exactly solvable model of deterministic chaos, in that the eigenfunctions and eigenvalues of the transfer operator can be explicitly determined.
Formal definition
There are two alternative definitions of the baker's map which are in common use. One definition folds over or rotates one of the sliced halves before joining it (similar to the horseshoe map) and the other does not.
The folded baker's map acts on the unit square as S(x, y) = (2x, y/2) for 0 ≤ x < 1/2, and S(x, y) = (2 − 2x, 1 − y/2) for 1/2 ≤ x ≤ 1.
When the upper section is not folded over, the map may be written as S(x, y) = (2x, y/2) for 0 ≤ x < 1/2, and S(x, y) = (2x − 1, (y + 1)/2) for 1/2 ≤ x ≤ 1.
The folded baker's map is a two-dimensional analog of the tent map
while the unfolded map is analogous to the Bernoulli map. Both maps are topologically conjugate. The Bernoulli map can be understood as the map that progressively lops digits off the dyadic expansion of x. Unlike the tent map, the baker's map is invertible.
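A minimal computational sketch of both variants, using the standard piecewise definitions as reconstructed above (endpoint conventions at x = 1/2 vary between authors):

```python
def baker_folded(x, y):
    # lower half is stretched and flattened; upper half is also flipped over
    if x < 0.5:
        return 2 * x, y / 2
    return 2 - 2 * x, 1 - y / 2

def baker_unfolded(x, y):
    # both halves are stretched, flattened and restacked without flipping
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

pt = (0.3, 0.6)
for _ in range(4):
    pt = baker_unfolded(*pt)
    print(pt)
```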
Properties
The baker's map preserves the two-dimensional Lebesgue measure.
The map is strong mixing and it is topologically mixing.
The transfer operator maps functions on the unit square to other functions on the unit square; it is given by
The transfer operator is unitary on the Hilbert space of square-integrable functions on the unit square. The spectrum is continuous, and because the operator is unitary the eigenvalues lie on the unit circle. The transfer operator is not unitary on the space of functions polynomial in the first coordinate and square-integrable in the second. On this space, it has a discrete, non-unitary, decaying spectrum.
As a shift operator
The baker's map can be understood as the two-sided shift operator on the symbolic dynamics of a one-dimensional lattice. Consider, for example, the bi-infinite string
where each position in the string may take one of the two binary values . The action of the shift operator on this string is
that is, each lattice position is shifted over by one to the left. The bi-infinite string may be represented by two real numbers as
and
In this representation, the shift operator has the form
which is seen to be the unfolded baker's map given above.
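The correspondence can be illustrated numerically: encoding x and y by their binary digits, one application of the unfolded map moves the leading digit of x onto the front of y. The sketch below (illustrative only, with finite-precision digit strings) compares the two descriptions:

```python
def to_bits(v, n=20):
    """First n binary digits of v in [0, 1)."""
    bits = []
    for _ in range(n):
        v *= 2
        b = int(v)
        bits.append(b)
        v -= b
    return bits

def from_bits(bits):
    return sum(b / 2**(i + 1) for i, b in enumerate(bits))

def baker_unfolded(x, y):
    return (2 * x, y / 2) if x < 0.5 else (2 * x - 1, (y + 1) / 2)

x, y = 0.8125, 0.375            # both exact in binary: 0.1101 and 0.011
xb, yb = to_bits(x), to_bits(y)
# one shift step: the leading digit of x becomes the new leading digit of y
xb2, yb2 = xb[1:] + [0], [xb[0]] + yb[:-1]

print(baker_unfolded(x, y))            # (0.625, 0.6875)
print(from_bits(xb2), from_bits(yb2))  # the same pair, from the digit picture
```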
See also
Bernoulli process
References
Ronald J. Fox, "Construction of the Jordan basis for the Baker map", Chaos, 7 p 254 (1997)
Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, (1999) Kluwer Academic Publishers, Dordrecht Netherlands (exposition of the eigenfunctions of the Baker's map).
Chaotic maps
Exactly solvable models
Articles containing video clips | Baker's map | [
"Mathematics"
] | 705 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
1,627,862 | https://en.wikipedia.org/wiki/Eclipsed%20conformation | In chemistry an eclipsed conformation is a conformation in which two substituents X and Y on adjacent atoms A, B are in closest proximity, implying that the torsion angle X–A–B–Y is 0°. Such a conformation can exist in any open chain, single chemical bond connecting two sp3-hybridised atoms, and it is normally a conformational energy maximum. This maximum is often explained by steric hindrance, but its origins sometimes actually lie in hyperconjugation (as when the eclipsing interaction is of two hydrogen atoms).
In the example of ethane, two methyl groups are connected with a carbon-carbon sigma bond, just as one might connect two Lego pieces through a single "stud" and "tube". With this image in mind, if the methyl groups are rotated around the bond, they will remain connected; however, the shape will change. This leads to multiple possible three-dimensional arrangements, known as conformations, conformational isomers (conformers), or sometimes rotational isomers (rotamers).
Organic chemistry
Conformations can be described by dihedral angles, which are used to determine the placements of atoms and their distance from one another and can be visualized by Newman projections. A dihedral angle can indicate staggered and eclipsed orientation, but is specifically used to determine the angle between two specific atoms on opposing carbons. Different conformations have unequal energies, creating an energy barrier to bond rotation which is known as torsional strain. In particular, eclipsed conformations tend to have raised energies due to the repulsion of the electron clouds of the eclipsed substituents. The relative energies of different conformations can be visualized using graphs. In the example of ethane, such a graph shows that rotation around the carbon-carbon bond is not entirely free but that an energy barrier exists. The ethane molecule in the eclipsed conformation is said to suffer from torsional strain, and by rotation around the carbon–carbon bond to the staggered conformation, around 12.5 kJ/mol of torsional energy is released. In the case of butane and its four-carbon chain, three carbon-carbon bonds are available to rotate. The example below looks down the C2–C3 bond. Below are the sawhorse and Newman representations of butane in an eclipsed conformation with the two CH3 groups (C1 and C4) at a 0-degree angle from one another (left).
If the front is rotated 60° clockwise, the butane molecule is now in a staggered conformation (right). This conformation is more specifically referred to as the gauche conformation of butane. This is due to the fact that the methyl groups are staggered, but only 60° from one another. This conformation is more energetically favored than the eclipsed conformation, but it is not the most energetically favorable conformation. Another 60° rotation gives a second eclipsed conformation where both methyl groups are aligned with hydrogen atoms. One more 60° rotation produces another staggered conformation referred to as the anti conformation. This occurs when the methyl groups are positioned opposite (180°) one another. This is the most energetically favorable conformation.
The minima can be seen on the graph at 60, 180 and 300 degrees, while the maxima can be seen at 0, 120, 240, and 360 degrees. The maxima represent the eclipsed conformations, in which the dihedral angle between eclipsing substituents is zero degrees.
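The qualitative shape of such a torsional energy profile can be sketched with a truncated Fourier potential; the coefficients below are invented solely so that the toy curve reproduces the pattern described above (highest maximum at 0°, lower maxima at 120° and 240°, gauche minima at 60° and 300°, anti minimum at 180°) and are not real force-field parameters:

```python
from math import cos, radians

# made-up coefficients (kJ/mol) for a toy butane-like profile; V2 is kept in
# the expression only to show the general three-term form
V1, V2, V3 = 8.0, 0.0, 12.0

def V(phi_deg):
    phi = radians(phi_deg)
    return 0.5 * (V1 * (1 + cos(phi))
                  + V2 * (1 - cos(2 * phi))
                  + V3 * (1 + cos(3 * phi)))

for angle in (0, 60, 120, 180, 240, 300, 360):
    print(angle, round(V(angle), 1))
# 0 -> 20.0 (CH3/CH3 eclipse), 60 -> 6.0 (gauche), 120 -> 14.0 (eclipsed),
# 180 -> 0.0 (anti), then mirrored at 240, 300, 360
```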
Structural applications
As established by X-ray crystallography, octachlorodimolybdate(II) anion ([Mo2Cl8]4-) has an eclipsed conformation. This sterically unfavorable geometry is given as evidence for a quadruple bond between the Mo centers.
Experiments such as X-ray and electron diffraction analyses, nuclear magnetic resonance, microwave spectroscopies, and more have allowed researchers to determine which cycloalkane structures are the most stable based on the different possible conformations. Another method that was shown successful is molecular mechanics, a computational method that allows the total strain energies of different conformations to be found and analyzed. It was found that the most stable conformations had lower energies based on values of energy due to bond distances and bond angles.
In many cases, isomers of alkanes with branched chains have lower boiling points than those that are unbranched, which has been shown through experimentation with isomers of C8H18. This is because of a combination of intermolecular forces and size that results from the branched chains. The more branches that an alkane has, the more extended its shape is; meanwhile, if it is less branched then it will have more intermolecular attractive forces that will need to be broken which is the cause of the increased boiling point for unbranched alkanes. In another case, 2,2,3,3-tetramethylbutane is shaped more like an ellipsoid causing it to be able to form a crystal lattice which raises the melting point of the molecule because it will take more energy to transition from a solid to a liquid state.
See also
Gauche effect
References
Stereochemistry | Eclipsed conformation | [
"Physics",
"Chemistry"
] | 1,086 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
1,628,483 | https://en.wikipedia.org/wiki/Dodecahedrane | Dodecahedrane is a chemical compound, a hydrocarbon with formula , whose carbon atoms are arranged as the vertices (corners) of a regular dodecahedron. Each carbon is bound to three neighbouring carbon atoms and to a hydrogen atom. This compound is one of the three possible Platonic hydrocarbons, the other two being cubane and tetrahedrane.
Dodecahedrane does not occur in nature and has no significant uses. It was synthesized by Leo Paquette in 1982, primarily for the "aesthetically pleasing symmetry of the dodecahedral framework".
For many years, dodecahedrane was the simplest real carbon-based molecule with full icosahedral symmetry. Buckminsterfullerene (), discovered in 1985, also has the same symmetry, but has three times as many carbons and 50% more atoms overall. The synthesis of the C20 fullerene in 2000, from brominated dodecahedrane, may have demoted it to second place.
Structure
The angle between the C-C bonds in each carbon atom is 108°, which is the angle between adjacent sides of a regular pentagon. That value is quite close to the 109.5° central angle of a regular tetrahedron—the ideal angle between the bonds on an atom that has sp3 hybridisation. As a result, there is minimal angle strain. However, the molecule has significant levels of torsional strain as a result of the eclipsed conformation along each edge of the structure.
The molecule has perfect icosahedral (Ih) symmetry, as evidenced by its proton NMR spectrum in which all hydrogen atoms appear at a single chemical shift of 3.38 ppm. Unlike buckminsterfullerene, dodecahedrane has no delocalized electrons and hence has no aromaticity.
History
For over 30 years, several research groups actively pursued the total synthesis of dodecahedrane. A review article published in 1978 described the different strategies that existed up to then. The first attempt was initiated in 1964 by R.B. Woodward with the synthesis of the compound triquinacene which was thought to be able to simply dimerize to dodecahedrane. Other groups were also in the race, for example that of Philip Eaton and Paul von Ragué Schleyer.
Leo Paquette's group at Ohio State University was the first to succeed, by a complex 29-step route that mostly builds the dodecahedral skeleton one ring at a time, and finally closes the last hole.
In 1987, a more versatile alternative synthesis route was found by Horst Prinzbach's group. Their approach was based on the isomerization of pagodane, which was obtained from isodrin (an isomer of aldrin) as starting material, inter alia through a [6+6] photocycloaddition. Schleyer had followed a similar approach in his synthesis of adamantane.
Following that idea, joint efforts of the Prinzbach team and the Schleyer group succeeded but obtained only 8% yield for the conversion at best. In the following decade the group greatly optimized that route, so that dodecahedrane could be obtained in multi-gram quantities. The new route also made it easier to obtain derivatives with selected substitutions and unsaturated carbon-carbon bonds. Two significant developments were the discovery of σ-bishomoaromaticity and the formation of C20 fullerene from highly brominated dodecahedrane species.
Synthesis
Original route
Paquette's 1982 organic synthesis takes about 29 steps with raw materials cyclopentadiene (2 equivalents 10 carbon atoms), dimethyl acetylenedicarboxylate (4 carbon atoms) and allyltrimethylsilane (2 equivalents, 6 carbon atoms).
In the first leg of the procedure two molecules of cyclopentadiene 1 are coupled together by reaction with elemental sodium (forming the cyclopentadienyl complex) and iodine to dihydrofulvalene 2. Next up is a tandem Diels–Alder reaction with dimethyl acetylenedicarboxylate 3 with desired sequence pentadiene-acetylene-pentadiene as in symmetrical adduct 4. An equal amount of asymmetric pentadiene-pentadiene-acetylene compound (4b) is formed and discarded.
(Reaction schemes: Dodecahedrane synthesis, part I and part II.)
In the next step of the sequence iodine is temporarily introduced via an iodolactonization of the diacid of 4 to dilactone 5. The ester group is cleaved next by methanol to the halohydrin 6, the alcohol groups converted to ketone groups in 7 by Jones oxidation and the iodine groups reduced by a zinc-copper couple in 8.
(Reaction schemes: Dodecahedrane synthesis, part III and part IV.)
The final 6 carbon atoms are inserted by nucleophilic addition of the carbanion 10, generated from allyltrimethylsilane 9 and n-butyllithium, to the ketone groups. In the next step the vinyl silane 11 reacts with peracetic acid in acetic acid in a radical substitution to the dilactone 12, followed by an intramolecular Friedel-Crafts alkylation with phosphorus pentoxide to diketone 13. This molecule contains all 20 required carbon atoms and is also symmetrical, which facilitates the construction of the remaining 5 carbon-carbon bonds.
Reduction of the double bonds in 13 to 14 is accomplished with hydrogenation with palladium on carbon and that of the ketone groups to alcohol groups in 15 by sodium borohydride. Replacement of hydroxyl by chlorine in 17 via nucleophilic aliphatic substitution takes place through the dilactone 16 (tosyl chloride). The first C–C bond forming reaction is a kind of Birch alkylation (lithium, ammonia) with the immediate reaction product trapped with chloromethyl phenyl ether, the other chlorine atom in 17 is simply reduced. This temporary appendix will in a later stage prevent unwanted enolization. The newly formed ketone group then forms another C–C bond by photochemical Norrish reaction to 19 whose alcohol group is induced to eliminate with TsOH to alkene 20.
(Reaction schemes: Dodecahedrane synthesis, part V and part VI.)
The double bond is reduced with hydrazine and sequential diisobutylaluminum hydride reduction and pyridinium chlorochromate oxidation of 21 forms the aldehyde 22. A second Norrish reaction then adds another C–C bond to alcohol 23 and having served its purpose the phenoxy tail is removed in several steps: a Birch reduction to diol 24, oxidation with pyridinium chlorochromate to ketoaldehyde 25 and a reverse Claisen condensation to ketone 26. A third Norrish reaction produces alcohol 27 and a second dehydration 28 and another reduction 29 at which point the synthesis is left completely without functional groups. The missing C-C bond is put in place by hydrogen pressurized dehydrogenation with palladium on carbon at 250 °C to dodecahedrane 30.
Pagodane route
In Prinzbach's optimized route from pagodane to dodecahedrane, the original low-yielding isomerization of parent pagodane to dodecahedrane is replaced by a longer but higher yielding sequence - which nevertheless still relies heavily on pagodane derivatives. In the scheme below, the divergence from the original happens after compound 16.
Derivatives
A variety of dodecahedrane derivatives have been synthesized and reported in the literature.
Hydrogen substitution
Substitution of all 20 hydrogens by fluorine atoms yields the relatively unstable perfluorododecahedrane C20F20, which was obtained in milligram quantities. Trace amounts of the analogous perchlorododecahedrane C20Cl20 were obtained, among other partially chlorinated derivatives, by reacting dodecahedrane dissolved in liquid chlorine under pressure at about 140 °C and under intense light for five days. Complete replacement by heavier halogens seems increasingly difficult due to their larger size. Half or more of the hydrogen atoms can be substituted by hydroxyl groups to yield polyols, but the extreme compound C20(OH)20 remained elusive as of 2006. Amino-dodecahedranes comparable to amantadine have been prepared, but were more toxic and with weaker antiviral effects.
Annulated dodecahedrane structures have been proposed.
Encapsulation
Molecules whose framework forms a closed cage, like dodecahedrane and buckminsterfullerene, can encapsulate atoms and small molecules in the hollow space within. Those insertions are not chemically bonded to the caging compound, but merely mechanically trapped in it.
Cross, Saunders and Prinzbach succeeded in encapsulating helium atoms in dodecahedrane by shooting He+ ions at a film of the compound. They obtained microgram quantities of He@C20H20 (the "@" being the standard notation for encapsulation), which they described as a quite stable substance. The molecule has been described as "the world's smallest helium balloon".
References
External links
Paquette's dodecahedrane synthesis at SynArchive.com
2D and 3D models of dodecahedrane and cuneane assemblies
Full text of Paquette's paper
Polycyclic nonaromatic hydrocarbons
Total synthesis
Cyclopentanes
Substances discovered in the 1980s | Dodecahedrane | [
"Chemistry"
] | 2,087 | [
"Total synthesis",
"Chemical synthesis"
] |
1,629,041 | https://en.wikipedia.org/wiki/Filter%20funnel | A filter funnel is a laboratory funnel used for separating solids from liquids via the laboratory process of filtering.
In order to achieve this, a piece of filter paper is usually folded into a cone and placed within the funnel. The suspension of solid and liquid is then poured through the funnel. The solid particles are too large to pass through the filter paper and are left on the paper, while the much smaller liquid molecules pass through the paper to a vessel positioned below the funnel, producing a filtrate. The filter paper is used only once: if only the liquid is of interest, the paper is discarded; if the suspension is of interest, both fractions may be kept. Solid residue and non-polar liquids, such as oil, may clog the paper. Funnels made of polyethylene or galvanized steel and using a brass or plastic mesh filter are typically for automotive and workshop use, to filter debris from fuel, lubricating oil and coolant. The screen is reusable, and may be cleaned by inverting the funnel and tapping it on a hard surface, or popping it out and washing it separately. This helps to avoid spilling any liquids.
References
Laboratory equipment
Water filters | Filter funnel | [
"Chemistry"
] | 238 | [
"Water treatment",
"Water filters",
"Filters"
] |
1,629,320 | https://en.wikipedia.org/wiki/Attenuation%20length | In physics, the attenuation length or absorption length is the distance into a material when the probability has dropped to that a particle has not been absorbed. Alternatively, if there is a beam of particles incident on the material, the attenuation length is the distance where the intensity of the beam has dropped to , or about 63% of the particles have been stopped.
Mathematically, the probability of finding a particle at depth x into the material is given by the Beer–Lambert law:
P(x) = e^(−x/λ).
In general, the attenuation length λ is material- and energy-dependent.
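A minimal numerical illustration of this relationship (the attenuation length used here is an arbitrary placeholder, not the value for any particular material):

    import math

    def surviving_fraction(depth_cm, attenuation_length_cm):
        # Beer-Lambert law: fraction of particles not yet absorbed after a given depth
        return math.exp(-depth_cm / attenuation_length_cm)

    lam = 2.0  # hypothetical attenuation length in cm
    print(surviving_fraction(lam, lam))      # ~0.368, i.e. about 63% stopped after one attenuation length
    print(surviving_fraction(3 * lam, lam))  # ~0.050 after three attenuation lengths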
See also
Beer's Law
Mean free path
Attenuation coefficient
Attenuation (electromagnetic radiation)
Radiation length
References
https://web.archive.org/web/20050215215652/http://www.ct.infn.it/~rivel/Glossario/node2.html
External links
http://henke.lbl.gov/optical_constants/atten2.html
Particle physics
Experimental particle physics | Attenuation length | [
"Physics"
] | 207 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
1,629,461 | https://en.wikipedia.org/wiki/Radiation%20length | In particle physics, the radiation length is a characteristic of a material, related to the energy loss of high energy particles electromagnetically interacting with it. It is defined as the mean length (in cm) into the material at which the energy of an electron is reduced by the factor 1/e.
Definition
In materials of high atomic number (e.g. tungsten, uranium, plutonium) electrons with energies above roughly 10 MeV predominantly lose energy by bremsstrahlung, and high-energy photons by pair production. The characteristic amount of matter traversed for these related interactions is called the radiation length, usually measured in g·cm−2. It is both the mean distance over which a high-energy electron loses all but 1/e of its energy by bremsstrahlung, and 7/9 of the mean free path for pair production by a high-energy photon. It is also the appropriate length scale for describing high-energy electromagnetic cascades.
The radiation length for a given material consisting of a single type of nucleus can be approximated by the following expression:
X0 ≈ 716.4 g·cm−2 · A / [Z(Z + 1) ln(287/√Z)],
where Z is the atomic number and A is the mass number of the nucleus.
For Z > 4, a good approximation is
1/X0 ≈ 4 (ħ/(me·c))² Z(Z + 1) α³ n ln(183/Z^(1/3)),
where
n is the number density of the nuclei,
ħ denotes the reduced Planck constant,
me is the electron rest mass,
c is the speed of light,
α is the fine-structure constant.
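A minimal numerical sketch of the first approximation above, using the commonly quoted 716.4 g·cm−2 fit constant; the lead density in the comment is an approximate literature value included only for illustration:

    import math

    def radiation_length_g_per_cm2(Z, A):
        # Approximate radiation length of a single-element material, in g/cm^2
        return 716.4 * A / (Z * (Z + 1) * math.log(287 / math.sqrt(Z)))

    # Example: lead (Z = 82, A ~ 207.2); dividing by an assumed density of ~11.35 g/cm^3
    # gives a length of roughly 0.56 cm.
    x0_mass = radiation_length_g_per_cm2(82, 207.2)
    print(x0_mass, x0_mass / 11.35)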
For electrons at lower energies (below a few tens of MeV), the energy loss by ionization is predominant.
While this definition may also be used for other electromagnetic interacting particles beyond leptons and photons, the presence of the stronger hadronic and nuclear interaction makes it a far less interesting characterisation of the material; the nuclear collision length and nuclear interaction length are more relevant.
Comprehensive tables for radiation lengths and other properties of materials are available from the Particle Data Group.
See also
Mean free path
Attenuation length
Attenuation coefficient
Attenuation
Range (particle radiation)
Stopping power (particle radiation)
Electron energy loss spectroscopy
References
Experimental particle physics | Radiation length | [
"Physics"
] | 391 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
1,629,621 | https://en.wikipedia.org/wiki/Pull-up%20resistor | In electronic logic circuits, a pull-up resistor (PU) or pull-down resistor (PD) is a resistor used to ensure a known state for a signal. It is typically used in combination with components such as switches and transistors, which physically interrupt the connection of subsequent components to ground or to VCC. Without such a resistor, closing the switch creates a direct connection to ground or VCC; when the switch is open, the rest of the circuit would be left floating (i.e. it would have an indeterminate voltage), which is generally undesirable.
For a switch that is used to connect a circuit to ground, a pull-up resistor (connected between the circuit and VCC) ensures a well-defined voltage (i.e. VCC, or logical high) when the switch is open. For a switch that is used to connect a circuit to VCC (e.g. if the switch or button is used to transmit a "high" signal), a pull-down resistor connected between the circuit and ground ensures a well-defined ground voltage (i.e. logical low) across the remainder of the circuit when the switch is open.
Principle
An open switch is not equivalent to a component with infinite impedance. The stationary voltage in any loop with an open switch cannot be determined by Kirchhoff's laws, while that with a component with infinite impedance can be determined by such laws. Consequently, the voltages across those critical components (such as the logic gate in the example on the right), which are only in loops involving the open switch, are undefined, too. A pull-up resistor effectively establishes an additional loop over the critical components, ensuring that the voltage is well-defined even when the switch is open.
Optimal resistance
For a pull-up resistor to serve only this one purpose and not interfere with the circuit otherwise, a resistor with an appropriate amount of resistance must be used. For this, it is assumed that the critical components have infinite or sufficiently high impedance, which is guaranteed, for example, for logic gates made from FETs. In this case, when the switch is open, the voltage drop across a pull-up resistor (with sufficiently low impedance) practically vanishes, and the circuit looks like a wire directly connected to VCC. On the other hand, when the switch is closed, the pull-up resistor must have sufficiently high impedance in comparison to the closed switch to not affect the connection to ground. Together, these two conditions can be used to derive an appropriate value for the impedance of the pull-up resistor. However, usually, only a lower bound is derived, assuming that the critical components do indeed have infinite impedance.
A resistor with relatively low resistance (relative to the circuit it is in) is often called a "strong" pull-up or pull-down; when the circuit is open, it will pull the output high or low very quickly (just as the voltage changes in an RC circuit), but will draw more current. A resistor with relatively high resistance is called a "weak" pull-up or pull-down; when the circuit is open, it will pull the output high or low more slowly, but will draw less current. This current, which is essentially wasted energy, only flows when the switch is closed, and technically for a brief period after it is opened until the charge built up in the circuit has been discharged to ground.
Applications
A pull-up resistor may be used when interfacing logic gates to inputs. For example, an input signal may be pulled high by a resistor, then a switch or jumper strap can be used to connect that input to ground. This can be used for configuration information, to select options or for troubleshooting of a device.
Pull-up resistors may be used at logic outputs where the logic device cannot source current such as open-collector TTL logic devices. Such outputs are used for driving external devices, for a wired-OR function in combinational logic, or for a simple way of driving a logic bus with multiple devices connected to it.
Pull-up resistors may be discrete devices mounted on the same circuit board as the logic devices. Many microcontrollers intended for embedded control applications have internal, programmable pull-up resistors for logic inputs so that not many external components are needed.
Pull-down resistors can be safely used with CMOS logic gates because the inputs are voltage-controlled. TTL logic inputs that are left unconnected inherently float high, and require a much lower valued pull-down resistor to force the input low. A standard TTL input at logic "1" is normally operated assuming a source current of 40 μA, and a voltage level above 2.4 V, allowing a pull-up resistor of no more than 50 kohms; whereas the TTL input at logic "0" will be expected to sink 1.6 mA at a voltage below 0.8 V, requiring a pull-down resistor less than 500 ohms. Holding unused TTL inputs low consumes more current. For that reason, pull-up resistors are preferred in TTL circuits.
In bipolar logic families operating at 5 VDC, a typical pull-up resistor value will be 1000–5000 Ω, based on the requirement to provide the required logic level current over the full operating range of temperature and supply voltage. For CMOS and MOS logic, much higher values of resistor can be used, several thousand to a million ohms, since the required leakage current at a logic input is small.
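A back-of-the-envelope check of the TTL resistor bounds quoted above, using only the supply, threshold and current values given in the text (an illustrative calculation, not a design procedure):

    # Logic-high: the pull-up must keep the input above 2.4 V while sourcing 40 uA.
    vcc = 5.0
    r_pullup_max = (vcc - 2.4) / 40e-6   # = 65 kOhm, so a value of 50 kOhm or less leaves margin

    # Logic-low: the pull-down must keep the input below 0.8 V while sinking 1.6 mA.
    r_pulldown_max = 0.8 / 1.6e-3        # = 500 Ohm

    print(r_pullup_max, r_pulldown_max)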
Drawbacks
Some disadvantages of pull-up resistors are the extra power consumed when current is drawn through the resistor and the reduced speed of a pull-up compared to an active current source. Certain logic families are susceptible to power supply transients introduced into logic inputs through pull-up resistors, which may force the use of a separate filtered power source for the pull-ups.
See also
Rp (USB) - a specific type of pull-up resistor in USB-C connectors
Rd (USB), Ra (USB) - specific types of pull-down resistors in USB-C connectors
Three-state logic
References
Paul Horowitz and Winfield Hill, The Art of Electronics, 2nd edition, Cambridge University Press, Cambridge, England, 1989,
Electronic circuits
Resistive components
"Physics",
"Engineering"
] | 1,331 | [
"Physical quantities",
"Electronic circuits",
"Resistive components",
"Electronic engineering",
"Electrical resistance and conductance"
] |
8,952,773 | https://en.wikipedia.org/wiki/Traveling%20screen | A traveling screen is a type of water filtration device that has a continuously moving mesh screen that is used to catch and remove debris. This type of device is usually found in water intake systems for drinking water and sewage treatment plants. Screening is considered the first step in conventional sewage treatment processes. Screening is also used in cooling water intakes in steam electric power plants, hydroelectric generators, petroleum refineries, and chemical plants. Traveling screens are used to divert fish, shellfish and other aquatic species, and debris including leaves, sticks, and trash; for the purpose of preventing damage to a facility's treatment or cooling system.
See also
Bar screen
Fish screen
References
Water filters
Water treatment | Traveling screen | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 140 | [
"Water filters",
"Water treatment",
"Filters",
"Water pollution",
"Civil engineering",
"Civil engineering stubs",
"Environmental engineering",
"Water technology"
] |
8,956,408 | https://en.wikipedia.org/wiki/Concrete%20leveling | In civil engineering, concrete leveling is a procedure that attempts to correct an uneven concrete surface by altering the foundation that the surface sits upon. It is a cheaper alternative to having replacement concrete poured and is commonly performed at small businesses and private homes as well as at factories, warehouses, airports and on roads, highways and other infrastructure.
Causes of settlement
Concrete slabs can be susceptible to settlement from a wide variety of factors, the most common being an inconsistency of moisture in the soil. Soil expands and contracts as the levels of moisture fluctuate during the dry and rainy seasons. In some parts of the United States, naturally occurring soils can consolidate over time, including areas ranging from Texas up through to Wisconsin. Soil erosion also contributes to concrete settlement, which is common for locations with improper drainage. Concrete slabs built upon filled-in land can excessively settle as well. This is common for homes with basement levels since the backfill on the outside of the foundation frequently is not compacted properly. In some cases, poorly designed sidewalk or patio slabs direct water towards the basement level of a structure. Tree roots can have an impact on concrete as well, being powerful enough to lift a slab upwards or break through it entirely; this is common along public roadways, especially within metropolitan areas.
Concrete settlement, uneven concrete surfaces, and uneven footings can also be caused by seismic activity especially in earthquake-prone countries including Japan, New Zealand, Turkey, and the United States.
Slabjacking
"Slabjacking" is a specialty concrete repair technology. In essence, slabjacking attempts to lift a sunken concrete slab by pumping a substance through the concrete, effectively pushing it up from below. The process is also commonly referred to as "mudjacking" and "pressure grouting.”
Accounts of raising large concrete slabs through the use of hydraulic pressure date back to the early 20th century. Early contractors used a mixture of locally available soils (sometimes including crushed limestone and/or cement for strength), producing a "mud-like" substance and thus the term "mudjacking." In recent years, some slabjacking contractors began using expanding polyurethane foam. Each method has its benefits and disadvantages.
The slabjacking process generally starts with drilling access holes in the concrete, strategically located to maximize lift. These holes range in size from 3/8" up to 3" depending on the process used.
Initial material injections fill any under slab void space. Once the void space is filled, subsequent injections will start lifting the concrete within minutes. After the slabs are lifted, the access holes are patched and the work is complete. The process is rapid when compared to traditional remove and replace applications and is minimally disturbing to the surrounding areas.
Slabjacking technology has several benefits, including:
Cost – can be significantly less expensive than new concrete
Timeliness of the repair – concrete is typically usable within hours as opposed to days with new concrete
Minimal or no environmental impact – mostly due to keeping waste out of landfills
Aesthetic – does not disturb the surrounding area and landscaping
Slabjacking also has some limitations, including:
Concrete must be in fairly sound condition – if there are too many cracks, replacement might be the only option
New cracks can occur as the slab is lifted – most would have already been present, just not visible before lifting
Possible resettlement – if concrete is poured on top of poorly compacted soils, it can still sink further. However, this is also possible with new concrete.
Slabjacking can typically be broken down into three main process types:
Mudjacking
The term Mudjacking originates from using a mixture of topsoil and portland cements injected underground to hydraulically lift concrete slabs. Mudjacking can be achieved with a variety of mixtures, the most common being a local soil or sand blend mixed with water and cement. Other additives may be included in the mixture for increased "pumpability"/lubrication, improved strength/curing times, or as a filler. Additives that may be present include: clay/bentonite, fly ash, pond sand, pea gravel, masonry cements, or crushed lime. This process typically requires holes between 1" and 2" in diameter. This "mud" is injected under the concrete slabs, oftentimes using a movable pump that can access most slabs. Once the void under the slab is filled, the pressure builds under the slab, lifting the concrete back into place. Once in place, the holes are filled with a color-matching grout.
Benefits of mudjacking:
Low-pressure lifting of slab
Finely controlled lifting of the slab
Possible to achieve higher compressive strength than foam leveling
Budget-friendly
Equipment can access locations at longer distances than poly foam
Environmentally friendly. Accepted at concrete recycling facilities unlike poly foam and does not need to be separated from the concrete
Disadvantages of mudjacking:
Typically shorter warranty periods than what's offered by polyurethane foam contractors.
Requires more clean-up afterward compared to foam leveling
Largest holes of the three main processes
Can be a slower process due to the smaller material volume of the movable cart.
Does not resist erosion or fill voids as well as polyurethane foam.
Limestone grout leveling
This method uses a pulverized limestone, commonly called agricultural lime, mixed with water, and sometimes Portland cement, to create a slurry about the consistency of a thick milkshake. This slurry is pumped hydraulically beneath the slab through 1" holes. Because of its semi-fluid nature, it pushes against itself, filling voids beneath the slab. Once the void is filled, pressure builds, slowly lifting the slab into place. Due to the low pressure of this method, trained professionals are able to control the lift of the concrete slab precisely, without the worry of lifting too far. This also decreases the likelihood of cracking or damaging the slab further. Once the slab is lifted into place, the holes are filled with a color-matching non-shrink grout.
Even though the injection pressure is relatively low, compared to Foam Leveling, Stone Slurry Grout Leveling has a high compressive strength of 240 pounds per square inch. This is equal to 34,560 pounds of lifting force per square foot. With Portland cement added, this can increase to over 6,000 psi or 864,000 pounds per square foot. Once the slurry dries it creates a near-solid stone foundation for the leveled concrete (much like the original stone base the concrete was poured upon.)
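The per-square-foot figures quoted in this section follow from the psi values by a simple unit conversion (1 ft² = 144 in²); a minimal sketch of that arithmetic:

    def psi_to_lb_per_sqft(psi):
        # 1 square foot contains 144 square inches
        return psi * 144

    print(psi_to_lb_per_sqft(240))    # 34,560 lb/ft^2
    print(psi_to_lb_per_sqft(6000))   # 864,000 lb/ft^2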
Benefits of stone slurry grout leveling
Low-pressure lifting of slabs
Finely controlled lifting of the slab
When the limestone grout dries out, it creates a hard subsurface for the concrete slab
Smaller holes than mudjacking
Highest compressive strength among the three methods
Environmentally friendly
Budget-friendly
Disadvantages of stone slurry grout leveling:
Requires more clean-up afterward compared with foam leveling
Slabs to be lifted must typically be within 100 feet of the truck-based pumping equipment
Larger holes than foam leveling
Without sufficient cement, rainwater can erode limestone leveling materials resulting in re-settlement
Expanding structural foam leveling
Foam leveling uses polyurethane in an injection process. A two-part polymer is injected through a hole less than one inch in diameter. Although the material is injected at a higher pressure than traditional cementitious grouts, the pressure is not what causes the lifting. The expansion of the air bubbles in the injected material below the slab surface performs the actual lifting action as the liquid resin reacts and becomes a structural foam. The material injected below a slab to be lifted will first find weak soils, expanding into them in such a manner as to consolidate and cause sub-soils to become denser and fill any voids below the slab. One inherent property of expanding foams is that they will follow the path of least resistance, expanding in all directions. Another inherent property includes reaching a hydro-insensitive or hydrophobic state when cured with 100% cure times as little as 30 minutes. Closed-cell injections will not retain moisture and are not subject to erosion once in place.
Some closed-cell polymer foams have baseline lifting capabilities of 6,000 lbs per sq. ft., and leveling procedures have been performed in which loads as high as 125 tons have been lifted and stabilized over a surface area of less than 900 sq. ft. Some foams are even stronger, with compressive strengths of 50 psi and 100 psi in a free-rise state, equal to 7,200–14,400 lbs per square ft. of support.
Benefits of Expanding Structural Foam Leveling
Meets compressive strength requirements when supporting highway slabs
Consumers benefit from longer warranty periods with Polyurethane Foam
Requires less clean up than Mudjacking or Limestone Grout Leveling
Smaller holes
Mobile units can reach areas inaccessible to truck-based equipment
Does not retain moisture
Does not erode when subjected to rainwater
Disadvantages of expanding structural foam leveling:
Requires specially trained technicians to properly install
Greater technical skills required for operations and maintenance of equipment
Intense heat can build up from improper installation of polyurethane
Environmentally un-friendly as polyurethane is a plastic
Potential toxicity of Structural Foam Dust
Can stick to and permanently stain other surrounding surfaces
Derived from crude oil and shipped to refineries, manufacturing plants, consumers and ultimately to job sites, creating a negative environmental impact through additional fuel consumption and emissions.
Intense heat can build up causing self combustion and flammability of polyurethane from improper installation
References
Concrete | Concrete leveling | [
"Engineering"
] | 1,957 | [
"Structural engineering",
"Concrete"
] |
8,956,851 | https://en.wikipedia.org/wiki/Paint%20robot | Industrial paint robots have been used for decades in automotive paint applications.
Early paint robots were hydraulic versions, which are still in use today but are of inferior quality and safety to the latest electronic offerings. The newest robots are accurate and deliver results with uniform film builds and exact thicknesses.
Originally, industrial paint robots were large and expensive, but robot prices have come down to the point that general industry can now afford the same level of automation used by the large automotive manufacturers.
The selection of modern paint robot varies much more in size and payload to allow many configurations for painting items of all sizes.
Painting robots generally have five or six axis motion, three for the base motions and up to three for applicator orientation. These robots can be used in any explosion hazard Class 1 Division 1 environment.
Industrial paint robots are designed to help standardize the distance and path the automatic sprayer takes, thus eliminating the risk of human error caused by manual spraying. Paint robots are often paired with other automatic painting equipment to maximize the efficiency and consistency of the paint finish. Rotational Bell atomizers, other automatic electrostatic or automatic conventional sprayers are mounted on the robot to provide the highest quality finish. Automatic mixing equipment will usually supply the sprayers with paint. This equipment is designed to regulate pressure and flow, which are extremely important in providing consistent paint finish. Varying levels of automatic mixing equipment can also provide features that cut down on paint waste, and energy costs.
History
The world's first painting robot was developed at Trallfa, a wheelbarrow factory in Bryne, Norway. The development started in 1964 to aid in the painting of the wheelbarrows and to reduce human interaction with toxic paint chemicals. In 1966 the robot was put into production in the factory, painting the trolleys and wheelbarrows. By 1969 the robot was commercialized as its own product. The first robot, the TR2000, was delivered to the Swedish company Gustavsberg Porcelain for enamelling bathtubs.
Painting robots have been around since at least 1985. They were first introduced in the automotive industry, including at General Motors' plant in Michigan.
Industrial robots, including painting ones, were created to keep people out of "dangerous" jobs as well as increase productivity. Since their creation, robots have been working side by side with people in manufacturing companies.
In recent years, the painting robot has evolved past industrial use. Many inventors have taken on the idea of creating robots that can create works of art, rather than paint in just a solid color. Besides making them more creative, others have looked for ways to make the robots affordable and accessible for commercial use in places such as interior wall painting.
Uses
Automotive industry
Painting robots are used by vehicle manufacturers to do detailing work on their cars in a consistent and systematic way. Some of these robots are designed with a robotic arm that moves vertically and horizontally, to apply paint on all parts of the car. A patent granted in 1985 to the Mazda Motor Corporation also includes a door handler (a small mechanical hand) that can open and close doors on a vehicle and paint the interior.
Companies like FANUC continue to mass-produce industrial painting robots that are then sold to manufacturers for use. According to FANUC's website, these robots are useful in limiting safety hazard such as the toxicity of paint, reducing wasted materials through consistent application, and increasing productivity.
Robots are used to paint all different sized automotive parts because they can help provide consistent finish from one part to another. They are used for large exterior parts like doors, hoods, wheels, or bumpers, and also used on small interior components like knobs, consoles and glove boxes.
Aerospace and defense
Finish is also extremely important in the aerospace and defense industry. These parts require very precise specifications for safety and performance reasons. Coatings can provide erosion resistance, anti-static dissipation, and even radar evading stealth. For this reason, consistent finish on all parts is vital to ensure continuity throughout.
Aluminum extrusions & panels
Aluminum extrusion can be found in building panels, metal door and window frames, and structural extrusions that are used in the commercial building industry for protecting buildings and increasing aesthetic appeal. Many panel and extrusion manufacturers are faced with slim margins. With that comes pressure to improve quality, continue to reduce costs, produce faster and provide more customization for their consumers. Because of this, many manufacturers in aluminum extrusions and panels are using paint robots and automatic applicators to apply coatings for protection and aesthetics.
Agriculture and construction equipment
Agricultural and construction equipment finish is important because these types of machines face heavy operation and abuse from harsh environments. Coatings help to protect the machines from rust and extend their life cycle. In this industry, product branding plays a big role for many companies trying to differentiate themselves, so high quality finish is a strong factor for many manufacturers.
Providing a durable paint coating with strong aesthetic appeal is not an easy task and can involve several layers of different component materials. At an agricultural or construction equipment manufacturer, there are usually multiple pump configurations feeding a plural component proportioning unit that mixes the multiple components of the paint. The proportioner feeds an automatic applicator hooked up to a robot. With several passes with different coatings, consistency is also very important because it minimizes rework and downtime if the part is finished right the first time.
Cookware
Cookware technology continues to evolve using different high performance coatings in order to meet the needs of chefs or people cooking at home. Different types of cookware have unique performance requirements. They need to be able to evenly conduct heat, resist abrasion and impact from repeated utensil use, provide non-stick coatings, provide maximum cleaning ability, and have strong aesthetic appeal. The same pan may need to be coated multiple times with different materials to meet all of its performance requirements.
Paint line robots are very useful paired with an automatic applicator in this environment because each part requires multiple passes with different coatings. The performance of the cookware in each of its specific requirements will hinge solely on the quality of finish of each material. Paint robots provide the same spray pattern and paint path on every pass, minimizing rework for badly finished parts.
Cosmetics
There are many different types of containers used in the cosmetic industry. Manufacturers in this industry are concerned with perfect packaging appearance, many using mirror finishes. Any surface imperfections will cause the piece to be rejected or scrapped. The problem is mirror finishes can actually amplify finish imperfections.
In order to reduce rework costs, the base coat needs to be applied in a very consistent and smooth manner with zero variation. This is accomplished by controlling flow rate from the proportioning unit, having fine atomization from the applicator and a very consistent spray pattern provided by a painting robot.
Future
There are multiple ideas people have come up with to increase the presence of painting robots in various industries. One such idea comes from technology professors: an interior wall painting robot. The design aims to make the robots “roller-based” so that they can move freely along walls and apply paint to them. The hope is to get people out of the toxicity of interior painting and decrease the amount of time it takes to finish walls. According to the designers, the robot can be made inexpensively so as to make it more commercially available.
CloudPainter is a company that designs robots, whose take on the painting robot shifts from simple filling of color to a robot that has “computational creativity,” and can paint more detailed and original designs. The robot has a 3-D printed paint-head with multiple robotic arms and is programmed with artificial intelligence and deep learning.
A painting robot designed by Shunsuke Kudoh is equipped with fingered hands and stereo vision. It is capable of looking (with a digital camera eye) at an object, then, using its fingers, picking up a paintbrush and copying the object onto a canvas. The robot is relatively small and can paint small things, such as an apple.
Ai-Da, a humanoid robot created by Aidan Meller, is prompted by AI algorithms to create paintings using her robotic arm, a paintbrush, and palette.
Clockwork, a manicurist robot, uses two 3D cameras to paint a fingernail in about 30 seconds.
References
Painting | Paint robot | [
"Engineering"
] | 1,684 | [
"Industrial robots"
] |
8,957,070 | https://en.wikipedia.org/wiki/Europe%20bridge | Europe bridge is the name of several bridges in Europe :
In Austria as Europabrücke
Europabrücke, a bridge over the Wipp valley (1963), the highest bridge in Europe until the completion of the Millau Viaduct in 2004
In Belgium as Pont de l'Europe
Pont de l'Europe in Huy, over the Meuse (1980) ;
In Bulgaria and Romania
New Europe Bridge, over the Danube between Vidin in Bulgaria and Calafat in Romania (2013)
In France as Pont de l'Europe
Pont de l'Europe in Orléans, over the Loire (built in 2000) ;
Pont de l'Europe in Vichy, a dam-bridge over the Allier (1963) ;
Pont de l'Europe in Avignon over the Rhône (1975) ;
Pont de l'Europe between Strasbourg (France) and Kehl (Germany) over the Rhine (1953)
In Germany as Europabrücke
Europabrücke in Koblenz over the Moselle (1974) ;
Europabrücke in Frankfurt am Main carrying Bundesautobahn 5 over the Main (1978) ;
Europabrücke in Hamburg over the Süderelbe (1983) ;
Europabrücke in Kehl and Strasbourg (France); see France
Europabrücke in Kelheim over the Danube;
In Romania, the New Europe Bridge connecting to Bulgaria
In Switzerland as Europabrücke
Europabrücke in Zürich
Europabrücke in Randa, Switzerland, replaced in 2017 by Charles Kuonen Bridge
Bridges | Europe bridge | [
"Engineering"
] | 302 | [
"Structural engineering",
"Bridges"
] |
8,958,658 | https://en.wikipedia.org/wiki/European%20Spallation%20Source | The European Spallation Source ERIC (ESS) is a multi-disciplinary research facility currently under construction in Lund, Sweden. Its Data Management and Software Centre (DMSC) is co-located with DTU in Lyngby, Denmark. Its 13 European contributor countries are partners in the construction and operation of the ESS. The ESS is scheduled to begin its scientific user program in 2027, when the construction phase is set to be completed. The ESS will assist scientists in the tasks of observing and understanding basic atomic structures and forces, which are more challenging to do with other neutron sources in terms of lengths and time scales. The research facility is located near the MAX IV Laboratory, which conducts synchrotron radiation research. The construction of the facility began in the summer of 2014 and the first science results are planned for 2027.
During operation, the ESS will use nuclear spallation, a process in which neutrons are liberated from heavy elements by high energy protons. This is considered to be a safer process than uranium fission since the reaction requires an external energy supply which can be stopped easily. This facility is an example of a "long pulse" source (milliseconds). Furthermore, spallation produces more usable neutrons for a given amount of waste heat than fission.
The facility consists of a linear accelerator, in which protons are accelerated and collide with a rotating, helium-cooled tungsten target, generating intense pulses of neutrons. Surrounding the tungsten are baths of cryogenic hydrogen, which feed neutron supermirror guides. These guides operate similarly to optical fibres, directing the beams of neutrons to experimental stations, where research is performed on a range of materials.
Neutron scattering can be applied to a range of scientific explorations in physics, chemistry, geology, biology, and medicine. Neutrons serve as a probe for revealing the structure and function of matter from the microscopic down to the atomic scale, with the potential for development of new materials and processes.
During the construction, the ESS became a European Research Infrastructure Consortium, or ERIC, on 1 October 2015.
The European Investment Bank made a €50 million investment in the ESS. This investment is supported by InnovFin-EU Finance for Innovators, an initiative established by the EIB Group in collaboration with the European Commission under Horizon 2020, the EU's research and innovation program.
History
When the ISIS neutron source was built in England in 1985, its success in producing indirect images of molecular structures eventually raised the possibility of a far more powerful spallation source. By 1993, the European Neutron Scattering Association began to advocate for the construction of a new spallation source, and the project would eventually become known as the ESS.
Neutron science soon became a critical tool in the development of industrial and consumer products worldwide. So much so that the Organization for Economic Development (OECD), declared in 1999 that a new generation of high-intensity neutron sources should be built, one each in North America, Asia and Europe. Europe's challenge was its diverse collection of national governments, and an active research community numbering in the thousands. In 2001, a European roadmap for developing accelerator driven systems for nuclear waste incineration estimated that the ESS could have the beam ready for users in 2010. A European international task force gathered in Bonn, Germany in 2002 to review the findings and a positive consensus emerged to build ESS. The stakeholders group met a year later to review the task force's progress, and in 2003 a new design concept was adopted that set the course for beginning operations by 2019.
Over the next five years a selection process chose Lund, Sweden as the site of the ESS; the definitive selection of Lund was announced in Brussels, Belgium, on 28 May 2009. On 1 July 2010, the staff and operations of ESS Scandinavia were transferred from Lund University to 'European Spallation Source ESS AB', a limited liability company set up to design, construct and operate the European Spallation Source in Lund. The company's headquarters are situated in central Lund.
ESS became a European Research Infrastructure Consortium, or ERIC, on 1 October 2015. The Founding Members of the European Spallation Source ERIC are the Czech Republic, Denmark, Estonia, France, Germany, Hungary, Italy, Norway, Poland, Spain, Sweden, Switzerland and the United Kingdom.
As of 2013, the estimated cost of the facility was about €1.843 bn. (or $1.958 bn.) Host nations Sweden and Denmark each plan to cover about half of the sum. However, the negotiations about the exact contributions from every partner were still in progress. From 2010 to 30 September 2015, ESS was operated as a Swedish aktiebolag, or AB.
Site selection
Originally, three possible sites for the ESS were under consideration: Bilbao (Spain), Debrecen (Hungary) and Lund (Sweden).
On 28 May 2009, seven countries indicated support for placing ESS in Sweden. Furthermore, Switzerland and Italy indicated that they would support the site in majority. On 6 June 2009, Spain withdrew the Bilbao candidacy and signed a collaboration agreement with Sweden, supporting Lund as the main site, but with key component development work being performed in Bilbao. This effectively settled the location of the ESS; detailed economical negotiations between the participating countries then took place. On 18 December 2009, Hungary also chose to tentatively support ESS in Lund, thus withdrawing the candidacy of Debrecen.
The facility's construction began in early 2014, with an event held in September of that year. The user programme will start in 2027. The site is accessible via Lund tramway, the first new tram system in Sweden in over a century.
The linear accelerator
The ESS uses a linear accelerator (linac) to accelerate a beam of protons from the exit of its ion source at 75 keV up to 2 GeV. At the entrance of the accelerator, the protons travel at ~1% of the speed of light; at the end of the accelerator, they reach a velocity of ~95% of the speed of light. The accelerator uses both normal conducting and superconducting cavities.
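A quick check of those two speed figures, assuming the standard proton rest energy of 938.272 MeV and the usual relativistic relation between kinetic energy and velocity:

    import math

    def beta_from_kinetic_energy(T_MeV, m_MeV=938.272):
        # Speed as a fraction of c for a particle of rest energy m with kinetic energy T
        gamma = 1.0 + T_MeV / m_MeV
        return math.sqrt(1.0 - 1.0 / gamma**2)

    print(beta_from_kinetic_energy(0.075))   # ~0.013, i.e. roughly 1% of c at 75 keV
    print(beta_from_kinetic_energy(2000.0))  # ~0.95 at 2 GeV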
The normal conducting cavities are a Radio Frequency Quadrupole (RFQ), working at a frequency of 352.21 MHz and accelerating the proton beam up to an energy of 3.62 MeV. The next structure is a transport line for the medium energy protons, the MEBT, which transports the beam from the RFQ to the next structure for further acceleration. In the MEBT, the beam properties are measured, the beam is cleaned from the transverse halo around the beam, and the head and tail of the beam pulse are cleaned using a transversally deflecting electromagnetic chopper. The Drift Tube Linac (DTL), which is the structure downstream of the MEBT, accelerates the beam further to ~90 MeV. At this energy, there is a transition from normal conducting cavities to superconducting cavities.
Three families of superconducting cavities accelerate the beam to its final energy of 2 GeV: first a section using double-spoke cavities up to an energy of ~216 MeV, then two families of elliptical cavities optimized for medium- and high-energy proton acceleration at a frequency of 704.42 MHz. Downstream of the elliptical cavities, a transfer line guides the beam to the target; just before the target, the beam is expanded and painted across the target surface to dissipate the generated heat over a larger area.
The linac repetition rate is 14 Hz, and the pulses of protons are 2.86 ms long, making the duty factor of the linac 4%. The beam current within the pulse is 62.5 mA, and the average beam current is 2.5 mA.
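A one-line consistency check of the quoted pulse parameters:

    rep_rate_hz = 14.0
    pulse_length_s = 2.86e-3
    peak_current_a = 62.5e-3

    duty_factor = rep_rate_hz * pulse_length_s        # ~0.04, i.e. 4%
    average_current_a = peak_current_a * duty_factor  # ~2.5 mA
    print(duty_factor, average_current_a)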
Except in the RFQ, which uses the same structure and field to accelerate and focus the beam, the transverse focusing of the beam of protons is performed using magnetic lenses. In the low energy beam transport, right after the ion source, magnetic solenoids are used; in the DTL, permanent quadrupole magnets are used; and the rest of the linac uses electromagnetic quadrupoles.
The spallation target and its environmental impact
The ESS source will be built around a solid tungsten target, cooled by helium gas.
Radioactive substances will be generated by the spallation process, but the solid target makes the handling of these materials easier and safer than if a liquid target had been used.
ESS, E.on, and Lunds Energi are collaborating in a project aiming to get the facility to be the world's first completely sustainable large-scale research centre through investment in wind power. The ESS project is expected to include an extension of the Nysted Wind Farm.
Radioactive material storage and transport will be required, but the need is much less than that of a nuclear reactor.
ESS expects to be CO2-neutral.
Recent design improvements will decrease energy usage at ESS.
Neutron Scattering and Imaging Instruments at ESS
The target station is surrounded by instrument halls with scientific instruments placed in four sections in the cardinal directions. In the western section, science instruments are located 156 meters from the center of the target station. The distance is between 50 and 80 meters in the southern one, and the science instruments located closest to the target station are in the northern and eastern sections.
Initially, 15 different scientific instruments will be erected:
Large-scale structures:
ODIN (Imaging)
SKADI (General purpose SANS)
LoKI (Broadband SANS)
FREIA (Horizontal reflectometer)
ESTIA (Vertical reflectometer)
Diffraction:
HEIMDAL (Powder diffractometer)
DREAM (Powder diffractometer)
BEER (Engineering diffractometer)
MAGiC (Magnetism diffractometer)
NMX (Macromolecular diffractometer)
Spectroscopy:
CSPEC (Cold chopper spectrometer)
T-REX (Thermal chopper spectrometer)
BIFROST (Crystal analyser spectrometer)
VESPA (Vibrational spectrometer)
MIRACLES (Backscattering spectrometer)
ESSnuSB
The European Spallation Source neutrino Super Beam (ESSnuSB) project aims to measure leptonic CP violation at the second neutrino oscillation maximum, offering higher sensitivity than the first maximum. After 10 years of data collection, ESSnuSB is expected to cover over 70% of the CP-violating phase range with 5σ confidence level, achieving a precision better than 8° for all δCP values. The ESSnuSB+ extension project focuses on measuring neutrino-nucleus cross-sections in the 0.2–0.6 GeV energy range to address systematic uncertainties. This will be accomplished using two new facilities: a Low Energy nuSTORM (LEnuSTORM) and a Low Energy Monitored Neutrino Beam (LEMNB). The project also includes the development of a target station prototype, a common near detector, and studies on gadolinium-doped water Cherenkov detectors.
See also
ISIS neutron source – Europe's only pulsed spallation source
J-PARC – The world's most powerful spallation source, located in Japan
MAX IV – synchrotron radiation facility in Lund
Spallation Neutron Source
References
Further reading
S. Peggs et al. ESS Technical Design Report , April 2013.
European Spallation Source. European Spallation Source Activity Report 2015, April 2015.
European Spallation Source. Feature Series: The ESS Instrument Suite, 2014–2015.
Hallonsten, O. 2012. Introduction: In pursuit of a Promise. In O. Hallonsten (ed.) In pursuit of a Promise: Perspectives on the political process to establish the European Spallation Source (ESS) in Lund, Sweden (pp. 11–19). Lund: Arkiv Academic Press, 2012, p. 12.
Prolingheuer, N.; Herbst, M.; Heuel-Fabianek, B.; Moormann, R.; Nabbi, R.; Schlögl, B., Vanderborght, J. 2009: Estimating Dose Rates from Activated Groundwater at Accelerator Sites. Nuclear Technology, Vol. 168, No. 3, December 2009, pp. 924–930.
Heuel-Fabianek, B. 2014: Partition Coefficients (Kd) for the Modelling of Transport Processes of Radionuclides in Groundwater (PDF; 9,4 MB) JÜL-Berichte, Forschungszentrum Jülich, Nr. 4375, 2014, ISSN 0944-2952.
T. Parker. ESS Environmental Design Report, January 2013.
External links
European Spallation Source website. The most up-to-date source for information on the ESS project.
Weekly updates of the construction of ESS and live webcams at the construction site.
essworkshop.org – See how the design of instrumentation for a future ESS-Scandinavia is moving forward.
BrightnESS, EU grant project in support of ESS.
SREss, EU grant project in support of ESS.
Buildings and structures in Lund
Lund University
Neutron scattering
Nuclear physics
Neutron facilities
Particle physics facilities
2019 in Sweden
Particle accelerators | European Spallation Source | [
"Physics",
"Chemistry"
] | 2,725 | [
"Scattering",
"Neutron scattering",
"Nuclear physics"
] |
14,776,609 | https://en.wikipedia.org/wiki/HOXB1 | Homeobox protein Hox-B1 is a protein that in humans is encoded by the HOXB1 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, located on different chromosomes, consisting of 9 to 11 genes arranged in tandem. This gene is one of several homeobox HOXB genes located in a cluster on chromosome 17.
Interactions
HOXB1 has been shown to interact with PBX1.
See also
Tandemly arrayed genes
References
Further reading
External links
Transcription factors | HOXB1 | [
"Chemistry",
"Biology"
] | 160 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,776,622 | https://en.wikipedia.org/wiki/HOXB2 | Homeobox protein Hox-B2 is a protein that in humans is encoded by the HOXB2 gene.
Function
This gene is a member of the Antp homeobox family and encodes a nuclear protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded protein functions as a sequence-specific transcription factor that is involved in development. Increased expression of this gene is associated with pancreatic cancer.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXB2 | [
"Chemistry",
"Biology"
] | 116 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,776,902 | https://en.wikipedia.org/wiki/HSF2 | Heat shock factor protein 2 is a protein that in humans is encoded by the HSF2 gene.
Function
HSF2, as well as the related gene HSF1, encodes a protein that binds specifically to the heat-shock element and has homology to HSFs of other species. Heat shock transcription factors activate heat-shock response genes under conditions of heat or other stresses. Although the names HSF1 and HSF2 were chosen for historical reasons, these peptides should be referred to as heat-shock transcription factors.
Interactions
HSF2 has been shown to interact with Nucleoporin 62 and HSF1.
See also
Heat shock factor
References
Further reading
External links
Transcription factors | HSF2 | [
"Chemistry",
"Biology"
] | 141 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,777,567 | https://en.wikipedia.org/wiki/Ethenone | Ethenone is the formal name for ketene, an organic compound with the formula C2H2O or H2C=C=O. It is the simplest member of the ketene class. It is an important reagent for acetylations.
Properties
Ethenone is a highly reactive gas (at standard conditions) with a sharp, irritating odour. It is only reasonably stable at low temperatures (−80 °C). It must therefore always be prepared freshly for each use and processed immediately; otherwise it dimerizes to diketene or forms polymers that are difficult to handle. The polymer content formed during the preparation is reduced, for example, by adding sulfur dioxide to the ketene gas. Because of its cumulated double bonds, ethenone is highly reactive and undergoes addition reactions with H-acidic compounds to give the corresponding acetic acid derivatives. It reacts, for example, with water to give acetic acid, or with primary or secondary amines to give the corresponding acetamides.
Preparation
Ethenone is produced by thermal dehydration of acetic acid at 700–750 °C in the presence of triethyl phosphate as a catalyst: CH3COOH → H2C=C=O + H2O
It has also been produced on a laboratory scale by the thermolysis of acetone: (CH3)2CO → H2C=C=O + CH4
This reaction is called the Schmidlin ketene synthesis.
On a laboratory scale it can be produced by the thermal decomposition of Meldrum's acid at temperatures greater than 200 °C.
History
Ethenone was first produced in 1907 by N. T. M. Wilsmore through pyrolysis of acetone or acetic anhydride vapours over a hot platinum wire in an apparatus that was later developed by Charles D. Hurd into the "Hurd lamp" or "ketene lamp". This apparatus consists of a heated flask of acetone producing vapours which are pyrolyzed by a metal filament electrically heated to red heat, with a condenser to return unreacted acetone to the boiling flask. Other heating methods have been used and similar methods were used on a larger scale for the industrial production of ketene for acetic anhydride synthesis.
Ethenone was discovered at the same time by Hermann Staudinger (by reaction of bromoacetyl bromide with metallic zinc). The dehydration of acetic acid was reported in 1910.
The thermal decomposition of acetic anhydride was also described.
Natural occurrence
Ethenone has been observed to occur in space, in comets or in gas as part of the interstellar medium.
Use
Ethenone is used to make acetic anhydride from acetic acid. Generally it is used for the acetylation of chemical compounds.
Ethenone reacts with methanal in the presence of catalysts such as Lewis acids (AlCl3, ZnCl2 or BF3) to give β-propiolactone. The industrially most significant use of ethenone is the synthesis of sorbic acid by reaction with 2-butenal (crotonaldehyde) in toluene at about 50 °C in the presence of zinc salts of long-chain carboxylic acids. This produces a polyester of 3-hydroxy-4-hexenoic acid, which is thermally or hydrolytically depolymerized to sorbic acid.
Ethenone is very reactive, tending to react with nucleophiles to form an acetyl group. For example, it reacts with water to form acetic acid; with acetic acid to form acetic anhydride; with ammonia and amines to form ethanamides; and with dry hydrogen halides to form acetyl halides.
The formation of acetic acid likely occurs by an initial formation of 1,1-dihydroxyethene, which then tautomerizes to give the final product.
Ethenone will also react with itself via [2 + 2] cycloadditions to form a cyclic dimer known as diketene. For this reason, it should not be stored for long periods.
Hazards
Exposure to concentrated levels causes humans to experience irritation of body parts such as the eye, nose, throat and lungs. Extended toxicity testing on mice, rats, guinea pigs and rabbits showed that ten-minute exposures to concentrations of freshly generated ethenone as low as 0.2 mg/liter (116 ppm) may produce a high percentage of deaths in small animals. These findings show ethenone is toxicologically identical to phosgene.
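The mg/L-to-ppm equivalence quoted above can be checked from the molar mass of ethenone (~42 g/mol) and the molar volume of an ideal gas (~24.45 L/mol at 25 °C and 1 atm); a small sketch assuming those standard values:

    molar_mass_g_per_mol = 42.04      # ethenone, CH2=C=O
    molar_volume_l_per_mol = 24.45    # ideal gas at 25 C and 1 atm

    def mg_per_m3_to_ppm(mg_per_m3):
        # Convert a mass concentration in air to a volume fraction in ppm
        return mg_per_m3 * molar_volume_l_per_mol / molar_mass_g_per_mol

    print(mg_per_m3_to_ppm(200))   # 0.2 mg/L = 200 mg/m^3  ->  ~116 ppm
    print(mg_per_m3_to_ppm(0.9))   # 0.9 mg/m^3             ->  ~0.5 ppm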
The formation of ketene in the pyrolysis of vitamin E acetate, an additive of some e-liquid products, is one possible mechanism of the reported pulmonary damage caused by electronic cigarette use.
A number of patents describe the catalytic formation of ketene from carboxylic acids and acetates, using a variety of metals or ceramics, some of which are known to occur in e-cigarette devices from patients with e-cigarette or vaping product-use associated lung injury (EVALI).
Occupational exposure limits are set at 0.5 ppm (0.9 mg/m3) over an eight-hour time-weighted average.
An IDLH limit is set at 5 ppm, as this is the lowest concentration productive of a clinically relevant physiologic response in humans.
References
Literature
Tidwell, Thomas T. Ketenes, 2nd edition. John Wiley & Sons, 2006.
External links
Ketenes
Gases
Pulmonary agents
Acetylating agents | Ethenone | [
"Physics",
"Chemistry"
] | 1,144 | [
"Matter",
"Chemical weapons",
"Phases of matter",
"Functional groups",
"Pulmonary agents",
"Ketenes",
"Acetylating agents",
"Reagents for organic chemistry",
"Statistical mechanics",
"Gases"
] |
14,777,805 | https://en.wikipedia.org/wiki/SALL1 | Sal-like 1 (Drosophila), also known as SALL1, is a protein which in humans is encoded by the SALL1 gene. As the full name suggests, it is one of the human versions of the spalt (sal) gene known in Drosophila.
Function
The protein encoded by this gene is a zinc finger transcriptional repressor and may be part of the NuRD histone deacetylase (HDAC) complex.
Clinical significance
Defects in this gene are a cause of Townes–Brocks syndrome (TBS) as well as branchio-oto-renal syndrome (BOR). Two transcript variants encoding different isoforms have been found for this gene.
Interactions
SALL1 has been shown to interact with TERF1 and UBE2I.
References
External links
GeneReviews/NCBI/NIH/UW entry on Townes-Brocks Syndrome
Further reading
Transcription factors | SALL1 | [
"Chemistry",
"Biology"
] | 193 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,779,081 | https://en.wikipedia.org/wiki/Microthermal%20analysis | Microthermal analysis is a materials characterization technique which combines the thermal analysis principles of differential scanning calorimetry (DSC) with the high spatial resolution of scanning probe microscopy. The instrument consists of a thermal probe: a fine platinum/rhodium alloy wire (5 micrometres in diameter) coated by a sheath of silver (Wollaston wire). The wire is bent into a V-shape, and the silver sheath is etched away to form a fine-pointed tip. The probe acts both as the heater and as a temperature sensor. The probe is attached to a conventional scanning probe microscope and can be scanned over the sample surface to resolve the thermal behavior of the sample spatially.
This technique has been widely used for localized thermal analysis, where the probe is heated rapidly to avoid thermal diffusion through the sample and the response of the substance in immediate proximity to the tip is measured as a function of temperature. Micro-thermal analysis was launched commercially in March 1998.
Microthermal analysis has been extended to higher spatial resolution to nanothermal analysis, which uses microfabricated self-heating silicon cantilevers to probe thermomechanical properties of materials with sub-100 nm spatial resolution.
References
External links
Application notes from TA Instruments
Materials science | Microthermal analysis | [
"Physics",
"Materials_science",
"Engineering"
] | 253 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
14,781,912 | https://en.wikipedia.org/wiki/Polymer%20nanocomposite | Polymer nanocomposites (PNC) consist of a polymer or copolymer having nanoparticles or nanofillers dispersed in the polymer matrix. These may be of different shape (e.g., platelets, fibers, spheroids), but at least one dimension must be in the range of 1–50 nm. These PNC's belong to the category of multi-phase systems (MPS, viz. blends, composites, and foams) that consume nearly 95% of plastics production. These systems require controlled mixing/compounding, stabilization of the achieved dispersion, and orientation of the dispersed phase; the compounding strategies for all MPS, including PNC, are similar. Alternatively, polymer can be infiltrated into a 1D, 2D, or 3D preform, creating high-content polymer nanocomposites.
Polymer nanoscience is the study and application of nanoscience to polymer-nanoparticle matrices, where nanoparticles are those with at least one dimension of less than 100 nm.
The transition from micro- to nano-particles leads to changes in physical as well as chemical properties. Two of the major factors in this are the increase in the ratio of surface area to volume, and the size of the particle. The increase in the surface-area-to-volume ratio, which grows as the particles get smaller, leads to an increasing dominance of the behavior of atoms on the surface of a particle over that of those in the interior of the particle. This affects the properties of the particles when they react with other particles. Because of the higher surface area of the nanoparticles, the interaction with the other particles within the mixture is greater, and this increases the strength, heat resistance, and other properties of the mixture.
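For an idealized spherical particle the surface-area-to-volume ratio equals 3/r, so it grows rapidly as the particle shrinks; a brief sketch of this scaling (the radii are arbitrary illustrative values spanning the micro-to-nano range):

    import math

    def surface_to_volume_ratio(radius_nm):
        # Surface-area-to-volume ratio of a sphere, in 1/nm (mathematically equal to 3/r)
        area = 4.0 * math.pi * radius_nm**2
        volume = (4.0 / 3.0) * math.pi * radius_nm**3
        return area / volume

    for r in (1000.0, 100.0, 10.0):   # radii in nm
        print(r, surface_to_volume_ratio(r))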
An example of a nanopolymer is silicon nanospheres which show quite different characteristics; their size is 40–100 nm and they are much harder than silicon, their hardness being between that of sapphire and diamond.
Polymer nanocomposites can be prepared using sequential infiltration synthesis (SIS), in which inorganic nanomaterials are grown within a polymer substrate via diffusion of vapor-phase precursors into the matrix.
Bio-hybrid polymer nanofibers
Many technical applications of biological objects like proteins, viruses or bacteria such as chromatography, optical information technology, sensorics, catalysis and drug delivery require their immobilization. Carbon nanotubes, gold particles and synthetic polymers are used for this purpose. This immobilization has been achieved predominantly by adsorption or by chemical binding and to a lesser extent by incorporating these objects as guests in host matrices.
In the guest host systems, an ideal method for the immobilization of biological objects and their integration into hierarchical architectures should be structured on a nanoscale to facilitate the interactions of biological nano-objects with their environment.
The large number of natural and synthetic polymers available, together with the advanced techniques developed to process such systems into nanofibres, rods, tubes, etc., makes polymers a good platform for the immobilization of biological objects.
Bio-hybrid nanofibres by electrospinning
Polymer fibers are, in general, produced on a technical scale by extrusion, i.e., a polymer melt or a polymer solution is pumped through cylindrical dies and spun/drawn by a take-up device. The resulting fibers have diameters typically on the 10-μm scale or above. To come down in diameter into the range of several hundreds of nanometers or even down to a few nanometers, electrospinning is today still the leading polymer processing technique available. A strong electric field of the order of 10³ V/cm is applied to the polymer solution droplets emerging from a cylindrical die. The electric charges, which accumulate on the surface of the droplet, cause droplet deformation along the field direction, even though the surface tension counteracts droplet evolution. In supercritical electric fields, the field strength overcomes the surface tension and a fluid jet emanates from the droplet tip. The jet is accelerated towards the counter electrode. During this transport phase, the jet is subjected to strong electrically driven circular bending motions that cause a strong elongation and thinning of the jet and evaporation of the solvent until, finally, the solid nanofibre is deposited on the counter electrode.
Bio-hybrid polymer nanotubes by wetting
Electrospinning, co-electrospinning, and the template methods based on nanofibres yield nano-objects which are, in principle, infinitely long. For a broad range of applications including catalysis, tissue engineering, and surface modification of implants this unlimited length is an advantage. But in some applications, like inhalation therapy or systemic drug delivery, a well-defined length is required. The template method described in the following has the advantage that it allows the preparation of nanotubes and nanorods with very high precision. The method is based on the use of well-defined porous templates, such as porous aluminum or silicon.
The basic concept of this method is to exploit wetting processes. A polymer melt or solution is brought into contact with the pores located in materials characterized by high energy surfaces such as aluminum or silicon. Wetting sets in and covers the walls of the pores with a thin film with a thickness of the order of a few tens of nanometers.
Gravity does not play a role, as is obvious from the fact that wetting takes place independently of the orientation of the pores relative to the direction of gravity. The exact process is still not understood theoretically in detail, but it is known from experiments that low molar mass systems tend to fill the pores completely, whereas polymers of sufficient chain length just cover the walls. This process happens typically within a minute for temperatures about 50 K above the melting temperature or glass transition temperature, even for highly viscous polymers such as polytetrafluoroethylene, and this holds even for pores with an aspect ratio as large as 10,000. The complete filling, on the other hand, takes days. To obtain nanotubes, the polymer/template system is cooled down to room temperature or the solvent is evaporated, yielding pores covered with solid layers. The resulting tubes can be removed by mechanical forces for tubes up to 10 μm in length, i.e., by just drawing them out of the pores, or by selectively dissolving the template. The diameter of the nanotubes, the distribution of the diameter, the homogeneity along the tubes, and the lengths can be controlled.
Applications
The nanofibres, nanocomposites, hollow nanofibres, core–shell nanofibres, and nanorods or nanotubes produced have a great potential for a broad range of applications including homogeneous and heterogeneous catalysis, dental restorative materials, sensorics, filter applications, and optoelectronics. Here we will just consider a limited set of applications related to life science.
Tissue engineering
This is mainly concerned with the replacement of tissues which have been destroyed by sickness or accidents, or by other means. Examples are skin, bone, cartilage, blood vessels and maybe even organs. This technique involves providing a scaffold to which cells are added, and the scaffold should provide favorable conditions for their growth. Nanofibres have been found to provide very good conditions for the growth of such cells, one of the reasons being that fibrillar structures can be found on many tissues, which allow the cells to attach strongly to the fibers and grow along them.
Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites. Potentially, these nanocomposites may be used to create novel, mechanically strong, light weight composite bone implants. The results suggest that mechanical reinforcement is dependent on the nanostructure morphology, defects, dispersion of nanomaterials in the polymer matrix, and the cross-linking density of the polymer. In general, two-dimensional nanostructures can reinforce the polymer better than one-dimensional nanostructures, and inorganic nanomaterials are better reinforcing agents than carbon based nanomaterials.
Delivery from compartmented nanotubes
Nanotubes are also used for carrying drugs in general therapy and in tumor therapy in particular. Their role is to protect the drugs from destruction in the blood stream, to control the delivery with well-defined release kinetics, and, in ideal cases, to provide vector-targeting properties or a release mechanism triggered by external or internal stimuli.
Rod or tube-like, rather than nearly spherical, nanocarriers may offer additional advantages in terms of drug delivery systems. Such drug carrier particles possess additional choice of the axial ratio, the curvature, and the "all-sweeping" hydrodynamic-related rotation, and they can be modified chemically at the inner surface, the outer surface, and at the end planes in a very selective way. Nanotubes prepared with a responsive polymer attached to the tube opening allow the control of access to and release from the tube. Furthermore, nanotubes can also be prepared showing a gradient in its chemical composition along the length of the tube.
Compartmented drug release systems were prepared based on nanotubes or nanofibres. Nanotubes and nanofibres, for instance, which contained fluorescent albumin with dog-fluorescein isothiocyanate were prepared as a model drug, as well as super paramagnetic nanoparticles composed of iron oxide or nickel ferrite. The presence of the magnetic nanoparticles allowed, first of all, the guiding of the nanotubes to specific locations in the body by external magnetic fields. Super paramagnetic particles are known to display strong interactions with external magnetic fields leading to large saturation magnetizations. In addition, by using periodically varying magnetic fields, the nanoparticles were heated up to provide, thus, a trigger for drug release. The presence of the model drug was established by fluorescence spectroscopy and the same holds for the analysis of the model drug released from the nanotubes.
Immobilization of proteins
Core shell fibers of nano particles with fluid cores and solid shells can be used to entrap biological objects such as proteins, viruses or bacteria in conditions which do not affect their functions. This effect can be used among others for biosensor applications. For example, Green Fluorescent Protein is immobilized in nanostructured fibres providing large surface areas and short distances for the analyte to approach the sensor protein.
With respect to using such fibers for sensor applications fluorescence of the core shell fibers was found to decay rapidly as the fibers were immersed into a solution containing urea: urea permeates through the wall into the core where it causes
denaturation of the GFP. This simple experiment reveals that core–shell fibers are promising objects for preparing biosensors based on biological objects.
Polymer nanostructured fibers, core–shell fibers, hollow fibers, and nanorods and nanotubes provide a platform for a broad range of applications both in materials science and in life science. Biological objects of different complexity and synthetic objects carrying specific functions can be incorporated into such nanostructured polymer systems while keeping their specific functions intact. Biosensors, tissue engineering, drug delivery, and enzymatic catalysis are just a few of the possible examples. The incorporation of viruses and bacteria, all the way up to microorganisms, should not really pose a problem, and the applications arising from such biohybrid systems should be tremendous.
Engineering applications
Polymer nanocomposites for automotive tire industry
Polymer nanocomposites are important for the automotive tire industry due to the possibility of achieving a higher fuel efficiency by designing polymer nanocomposites with suitable properties.
The most common type of filler particle utilized by the tire industry has traditionally been carbon black (Cb), produced from the incomplete combustion of coal tar and ethylene. The main reason is that the addition of Cb to rubbers enables the manufacturing of tires with a smaller rolling resistance; rolling resistance accounts for about 4% of the worldwide CO2 emissions from fossil fuels. A decrease in the rolling resistance of the car tires produced worldwide is anticipated to decrease the overall fuel consumption of cars, because a vehicle with tires of a smaller rolling resistance requires less energy to move forward. However, a smaller rolling resistance also leads to a lower wet grip performance, which raises concerns about passenger safety.
The problem can be partially solved by replacing Cb with silica, because it enables the production of "green" tires that display both improved wet grip properties as well as a smaller rolling resistance.
The main difference in the relevant properties of Cb and silica is that Cb is hydrophobic (as are the polymers used in the manufacturing of car tires) whereas silica is hydrophilic. So, in order to increase the compatibility between the silica fillers and the polymer matrix, the silica is usually functionalized with coupling agents, which gives the possibility of tuning the filler-polymer interactions and thus producing nanocomposites with specific properties.
Overall, the main unresolved issue on the mechanical properties of filled rubbers is the elucidation of the exact mechanism of their mechanical reinforcement and of the so-called Payne effect; and owing to a lack of suitable theoretical and experimental approaches, both of them are still poorly understood.
Polymer nanocomposites for high temperature applications
Polymer nanocomposites aided with carbon quantum dots have been found to show remarkable heat resistance. These nanocomposites can be used in environments where heat resistance is a requirement.
Size and pressure effects on nanopolymers
The size- and pressure-dependent glass transition temperature of free-standing films, or of supported films having weak interactions with their substrates, decreases with decreasing pressure and size. However, the glass transition temperature of supported films having strong interactions with their substrates increases with pressure and with decreasing size. Different models, such as the two-layer model, the three-layer model, Tg (D, 0) ∝ 1/D, and further models relating specific heat, density and thermal expansion, are used to interpret the experimental results on nanopolymers; additional observations, such as the freezing of films due to memory effects in the visco-elastic eigenmodes of the films and finite-size effects of small-molecule glasses, have also been reported. To describe the Tg (D, 0) function of polymers more generally, a simple and unified model has recently been provided based on the size-dependent melting temperature of crystals and Lindemann's criterion,
where σg is the root mean squared displacement of surface and interior molecules of glasses at Tg (D, 0), and α = σs²(D, 0) / σv²(D, 0), with subscripts s and v denoting the surface and the volume (interior), respectively. For a nanoparticle, D has the usual meaning of diameter; for a nanowire, D is taken as its diameter; and for a thin film, D denotes its thickness. D0 denotes a critical diameter at which all molecules of a low-dimensional glass are located on its surface.
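A short numerical sketch of such a size-dependent Tg (D, 0) model is given below. Because the equation itself is not reproduced in the text, the closed form Tg(D)/Tg(∞) = exp(−(α − 1)/(D/D0 − 1)) is assumed here as a commonly cited variant of this family of models, and all parameter values are illustrative placeholders rather than measured data.

```python
import numpy as np

# Sketch of a size-dependent glass-transition model of the type described above.
# The functional form Tg(D)/Tg(inf) = exp(-(alpha - 1) / (D/D0 - 1)) is an
# assumption (the original equation is not reproduced in the text); alpha, D0
# and Tg_bulk below are illustrative placeholder values.

def tg_size_dependent(D, Tg_bulk, D0, alpha):
    """Return Tg(D, 0) for a characteristic size D (same units as D0)."""
    D = np.asarray(D, dtype=float)
    return Tg_bulk * np.exp(-(alpha - 1.0) / (D / D0 - 1.0))

if __name__ == "__main__":
    Tg_bulk = 373.0   # K, hypothetical bulk glass-transition temperature
    D0 = 2.0          # nm, critical size at which all molecules are surface molecules
    alpha = 1.5       # assumed ratio of surface to interior mean-square displacements
    for D in (5.0, 10.0, 50.0, 200.0):
        print(f"D = {D:6.1f} nm  ->  Tg = {tg_size_dependent(D, Tg_bulk, D0, alpha):6.1f} K")
```

With these placeholder parameters the model reproduces the qualitative trend described above: Tg is depressed for small D and approaches the bulk value as D grows.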
Conclusion
Devices that use the properties of low-dimensional objects, such as nanoparticles, are promising due to the possibility of tailoring a number of mechanical, electrophysical, optical and magnetic properties by controlling the size of the nanoparticles during synthesis. In the case of polymer nanocomposites we can also exploit the properties of disordered systems.
Here, recent developments in the field of polymer nanocomposites and some of their applications have been reviewed. Although there is already much use of these materials, there are also many limitations. For example, the release of drugs from nanofibres cannot yet be controlled independently, and a burst release is usually observed, whereas a more linear release is required. Let us now consider future aspects of this field.
There is a possibility of building ordered arrays of nanoparticles in the polymer matrix. A number of possibilities also exist to manufacture the nanocomposite circuit boards. An even more attractive method exists to use polymer nanocomposites for neural networks applications. Another promising area of development is optoelectronics and optical computing. The single domain nature and super paramagnetic behavior of nanoparticles containing ferromagnetic metals could be possibly used for magneto-optical storage media manufacturing.
See also
Nanocomposite
Biopolymer
Copolymer
Electroactive polymers
Nanocarbon tubes
References
Nanomaterials
Polymer material properties | Polymer nanocomposite | [
"Chemistry",
"Materials_science"
] | 3,531 | [
"Polymer material properties",
"Nanotechnology",
"Polymer chemistry",
"Nanomaterials"
] |
14,786,510 | https://en.wikipedia.org/wiki/Spinning%20cone | Spinning cone columns are used in a form of low temperature vacuum steam distillation to gently extract volatile chemicals from liquid foodstuffs while minimising the effect on the taste of the product. For instance, the columns can be used to remove some of the alcohol from wine, to remove "off" smells from cream, and to capture aroma compounds that would otherwise be lost in coffee processing.
Mechanism
The columns are made of stainless steel. Conical vanes are attached alternately to the wall of the column and to a central rotating shaft. The product is poured in at the top under vacuum, and steam is pumped into the column from below. The vanes provide a large surface area over which volatile compounds can evaporate into the steam, and the rotation ensures a thin layer of the product is constantly moved over the moving cone. It typically takes 20 seconds for the liquid to move through the column, and industrial columns might process . The temperature and pressure can be adjusted depending on the compounds targeted.
Wine controversy
Improvements in viticulture and warmer vintages have led to increasing levels of sugar in wine grapes, which have translated to higher levels of alcohol - which can reach over 15% ABV in Zinfandels from California. Some producers feel that this unbalances their wine, and use spinning cones to reduce the alcohol by 1-2 percentage points. In this case the wine is passed through the column once to distill out the most volatile aroma compounds which are then put to one side while the wine goes through the column a second time at higher temperature to extract alcohol. The aroma compounds are then mixed back into the wine. Some producers such as Joel Peterson of Ravenswood argue that technological "fixes" such as spinning cones remove a sense of terroir from the wine; if the wine has the tannins and other components to balance 15% alcohol, Peterson argues that it should be accepted on its own terms.
The use of spinning cones, and other technologies such as reverse osmosis, was banned in the EU until recently, although for many years they could freely be used in wines imported into the EU from certain New World wine producing countries such as Australia and the USA. In November 2007, the Wine Standards Branch (WSB) of the UK's Food Standards Agency banned the sale of a wine called Sovio, made from Spanish grapes that would normally produce wines of 14% ABV. Sovio runs 40-50% of the wine over spinning cones to reduce the alcohol content to 8%, which means that under EU law it could not be sold as wine as it was below 8.5%; above that, under the rules prevailing at the time, it would be banned because spinning cones could not be used in EU winemaking.
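As a rough illustration of the figures quoted above, the following back-of-the-envelope mass balance estimates the blended alcohol level when a given fraction of the wine is stripped of alcohol and recombined. The residual alcohol assumed to remain in the treated fraction is an illustrative value, and real spinning-cone operation also affects volume and aroma, which this simple blend calculation ignores.

```python
# Simple blending mass balance for partial dealcoholisation (illustrative only).

def blended_abv(abv_untreated, abv_treated, fraction_treated):
    """ABV of a blend where `fraction_treated` of the wine was dealcoholised."""
    return (1 - fraction_treated) * abv_untreated + fraction_treated * abv_treated

original_abv = 14.0   # % ABV of the untreated wine (figure from the example above)
treated_abv = 0.5     # % ABV assumed to remain in the stripped fraction (assumption)

for fraction in (0.40, 0.45, 0.50):
    print(f"treat {fraction:.0%} of the wine -> blend at "
          f"{blended_abv(original_abv, treated_abv, fraction):.1f}% ABV")
```

Running 40-50% of a 14% ABV wine over the cones and blending it back indeed lands close to the 8% ABV mentioned above, provided the treated fraction is almost fully dealcoholised.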
Subsequently, the EU legalized dealcoholization with a 2% adjustment limit in its Code of Winemaking Practices, publishing that in its Commission Regulation (EC) No 606/2009 and stipulating that the dealcoholization must be accomplished by physical separation techniques which would embrace the spinning cone method.
More recently, in International Organisation of Vine and Wine Resolutions OIV-OENO 394A-2012 and OIV-OENO 394B-2012 of June 22, 2012 EU recommended winemaking procedures were modified to permit use of the spinning cone column and membrane techniques such as reverse osmosis on wine, subject to a limitation on the adjustment. That limitation is currently under review following the proposal by some EU members that it be eliminated altogether. The limitation is applicable only to products formally labeled as "wine".
See also
Winemaking
Distillation
Spinning band distillation
References
Further reading
External links
Flavourtech manufactures spinning cone columns.
Oenology
Wine terminology
Distillation
Chemical equipment
Separation processes | Spinning cone | [
"Chemistry",
"Engineering"
] | 765 | [
"Chemical equipment",
"Distillation",
"nan",
"Separation processes"
] |
14,787,365 | https://en.wikipedia.org/wiki/Glass%20batch%20calculation | Glass batch calculation or glass batching is used to determine the correct mix of raw materials (batch) for a glass melt.
Principle
The raw materials mixture for glass melting is termed "batch". The batch must be measured properly to achieve a given, desired glass formulation. This batch calculation is based on the common linear regression equation:
NB = (BT · B)−1 · BT · NG
with NB and NG being the 1-column molarity matrices of the batch and glass components respectively, and B being the batching matrix. The symbol "T" stands for the matrix transpose operation, "−1" indicates matrix inversion, and the sign "·" means the scalar product. From the molarity matrices N, percentages by weight (wt%) can easily be derived using the appropriate molar masses.
Example calculation
An example batch calculation may be demonstrated here. The desired glass composition in wt% is: 67 SiO2, 12 Na2O, 10 CaO, 5 Al2O3, 1 K2O, 2 MgO, 3 B2O3, and as raw materials are used sand, trona, lime, albite, orthoclase, dolomite, and borax. The formulas and molar masses of the glass and batch components are listed in the following table:
The batching matrix B indicates the relation of the molarity in the batch (columns) and in the glass (rows). For example, the batch component SiO2 adds 1 mol SiO2 to the glass, therefore, the intersection of the first column and row shows "1". Trona adds 1.5 mol Na2O to the glass; albite adds 6 mol SiO2, 1 mol Na2O, and 1 mol Al2O3, and so on. For the example given above, the complete batching matrix is listed below. The molarity matrix NG of the glass is simply determined by dividing the desired wt% concentrations by the appropriate molar masses, e.g., for SiO2 67/60.0843 = 1.1151.
The resulting molarity matrix of the batch, NB, is given here. After multiplication with the appropriate molar masses of the batch ingredients one obtains the batch mass fraction matrix MB:
or
The matrix MB, normalized to sum up to 100% as seen above, contains the final batch composition in wt%: 39.216 sand, 16.012 trona, 10.242 lime, 16.022 albite, 4.699 orthoclase, 7.276 dolomite, 6.533 borax. If this batch is melted to a glass, the desired composition given above is obtained. During glass melting, carbon dioxide (from trona, lime, dolomite) and water (from trona, borax) evaporate.
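The example above can be reproduced with a few lines of linear algebra. The sketch below solves B·NB = NG by least squares and converts the result to weight percent; the raw-material formulas and molar masses are approximate assumptions taken from common handbook values, so the printed numbers should only approximately match the batch composition quoted above.

```python
import numpy as np

# Sketch of the batch calculation for the example above, solving B . N_B = N_G.
# The batching matrix and molar masses follow common formulas for the raw
# materials (e.g. trona Na3H(CO3)2.2H2O, borax Na2B4O7.10H2O); treat them as
# approximate assumptions rather than authoritative values.

glass_oxides = ["SiO2", "Na2O", "CaO", "Al2O3", "K2O", "MgO", "B2O3"]
wt_percent   = np.array([67.0, 12.0, 10.0, 5.0, 1.0, 2.0, 3.0])
M_glass      = np.array([60.08, 61.98, 56.08, 101.96, 94.20, 40.30, 69.62])  # g/mol

batch   = ["sand", "trona", "lime", "albite", "orthoclase", "dolomite", "borax"]
M_batch = np.array([60.08, 226.03, 100.09, 524.4, 556.7, 184.40, 381.37])    # g/mol

# Rows: moles of each glass oxide contributed per mole of each batch ingredient.
B = np.array([
    [1, 0,   0, 6, 6, 0, 0],   # SiO2
    [0, 1.5, 0, 1, 0, 0, 1],   # Na2O
    [0, 0,   1, 0, 0, 1, 0],   # CaO
    [0, 0,   0, 1, 1, 0, 0],   # Al2O3
    [0, 0,   0, 0, 1, 0, 0],   # K2O
    [0, 0,   0, 0, 0, 1, 0],   # MgO
    [0, 0,   0, 0, 0, 0, 2],   # B2O3
])

N_G = wt_percent / M_glass                      # molarity matrix of the glass
N_B, *_ = np.linalg.lstsq(B, N_G, rcond=None)   # least-squares solve (exact here)
M_B = N_B * M_batch                             # mass of each batch ingredient
M_B = 100.0 * M_B / M_B.sum()                   # normalise to 100 wt%

for name, m in zip(batch, M_B):
    print(f"{name:11s} {m:6.3f} wt%")
```

Because the number of glass and batch components is equal here, the least-squares solution coincides with the exact solution; the same call also handles the over- or under-determined cases discussed in the next section.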
Simple glass batch calculation can be found at the website of the University of Washington.
Advanced batch calculation by optimization
If the number of glass and batch components is not equal, if it is impossible to exactly obtain the desired glass composition using the selected batch ingredients, or if the matrix equation is not soluble for other reasons (i.e., the rows/columns are linearly dependent), the batch composition must be determined by optimization techniques.
See also
Glass ingredients
Calculation of glass properties
References
Glass engineering and science
Glass chemistry | Glass batch calculation | [
"Chemistry",
"Materials_science",
"Engineering"
] | 701 | [
"Glass engineering and science",
"Glass chemistry",
"Materials science"
] |
62,382 | https://en.wikipedia.org/wiki/Catalan%27s%20conjecture | Catalan's conjecture (or Mihăilescu's theorem) is a theorem in number theory that was conjectured by the mathematician Eugène Charles Catalan in 1844 and proven in 2002 by Preda Mihăilescu at Paderborn University. The integers 2³ and 3² are two perfect powers (that is, powers of exponent higher than one) of natural numbers whose values (8 and 9, respectively) are consecutive. The theorem states that this is the only case of two consecutive perfect powers. That is to say, the only solution in the natural numbers of xᵃ − yᵇ = 1 for a, b > 1 and x, y > 0 is x = 3, a = 2, y = 2, b = 3.
History
The history of the problem dates back at least to Gersonides, who proved a special case of the conjecture in 1343 where (x, y) was restricted to be (2, 3) or (3, 2). The first significant progress after Catalan made his conjecture came in 1850 when Victor-Amédée Lebesgue dealt with the case b = 2.
In 1976, Robert Tijdeman applied Baker's method in transcendence theory to establish a bound on a,b and used existing results bounding x,y in terms of a, b to give an effective upper bound for x,y,a,b. Michel Langevin computed a value of for the bound, resolving Catalan's conjecture for all but a finite number of cases.
Catalan's conjecture was proven by Preda Mihăilescu in April 2002. The proof was published in the Journal für die reine und angewandte Mathematik, 2004. It makes extensive use of the theory of cyclotomic fields and Galois modules. An exposition of the proof was given by Yuri Bilu in the Séminaire Bourbaki. In 2005, Mihăilescu published a simplified proof.
Pillai's conjecture
Pillai's conjecture concerns a general difference of perfect powers : it is an open problem initially proposed by S. S. Pillai, who conjectured that the gaps in the sequence of perfect powers tend to infinity. This is equivalent to saying that each positive integer occurs only finitely many times as a difference of perfect powers: more generally, in 1931 Pillai conjectured that for fixed positive integers A, B, C the equation has only finitely many solutions (x, y, m, n) with (m, n) ≠ (2, 2). Pillai proved that for fixed A, B, x, y, and for any λ less than 1, we have uniformly in m and n.
The general conjecture would follow from the ABC conjecture.
Pillai's conjecture means that for every natural number n, there are only finitely many pairs of perfect powers with difference n. The list below shows, for n ≤ 64, all solutions for perfect powers less than 1018, such that the exponent of both powers is greater than 1. The number of such solutions for each n is listed at . See also for the smallest solution (> 0).
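A small brute-force search illustrates the statement empirically. The sketch below enumerates perfect powers (with bases at least 2) up to an assumed, much smaller bound than the 10^18 mentioned above and lists the pairs differing by a given n; it is only a demonstration, not part of any proof.

```python
# Brute-force illustration of Pillai's conjecture: enumerate perfect powers x^m
# (m > 1, x > 1) up to a small bound and list pairs whose difference equals n.

LIMIT = 10**6   # assumed search bound, tiny compared with 10^18

powers = set()
base = 2
while base * base <= LIMIT:
    value = base * base
    while value <= LIMIT:
        powers.add(value)
        value *= base
    base += 1
powers = sorted(powers)
power_set = set(powers)

def pairs_with_difference(n):
    """Return all (smaller, larger) perfect-power pairs below LIMIT differing by n."""
    return [(p, p + n) for p in powers if p + n in power_set]

for n in (1, 2, 3, 4, 5):
    print(n, pairs_with_difference(n))
```

For n = 1 the search finds only (8, 9), in agreement with Mihăilescu's theorem; for larger n it finds the first few entries of the kind of table described above.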
See also
Beal's conjecture
Equation xy = yx
Fermat–Catalan conjecture
Mordell curve
Ramanujan–Nagell equation
Størmer's theorem
Tijdeman's theorem
Thaine's theorem
Notes
References
Predates Mihăilescu's proof.
External links
Ivars Peterson's MathTrek
On difference of perfect powers
Jeanine Daems: A Cyclotomic Proof of Catalan's Conjecture
Conjectures
Conjectures that have been proved
Diophantine equations
Theorems in number theory
Abc conjecture | Catalan's conjecture | [
"Mathematics"
] | 712 | [
"Unsolved problems in mathematics",
"Mathematical objects",
"Equations",
"Theorems in number theory",
"Diophantine equations",
"Conjectures",
"Conjectures that have been proved",
"Mathematical problems",
"Abc conjecture",
"Mathematical theorems",
"Number theory"
] |
62,529 | https://en.wikipedia.org/wiki/Sea%20level | Mean sea level (MSL, often shortened to sea level) is an average surface level of one or more among Earth's coastal bodies of water from which heights such as elevation may be measured. The global MSL is a type of vertical datuma standardised geodetic datumthat is used, for example, as a chart datum in cartography and marine navigation, or, in aviation, as the standard sea level at which atmospheric pressure is measured to calibrate altitude and, consequently, aircraft flight levels. A common and relatively straightforward mean sea-level standard is instead a long-term average of tide gauge readings at a particular reference location.
The term above sea level generally refers to the height above mean sea level (AMSL). The term APSL means above present sea level, comparing sea levels in the past with the level today.
Earth's radius at sea level is 6,378.137 km (3,963.191 mi) at the equator. It is 6,356.752 km (3,949.903 mi) at the poles and 6,371.001 km (3,958.756 mi) on average. This flattened spheroid, combined with local gravity anomalies, defines the geoid of the Earth, which approximates the local mean sea level for locations in the open ocean. The geoid includes a significant depression in the Indian Ocean, whose surface dips as much as below the global mean sea level (excluding minor effects such as tides and currents).
Measurement
Precise determination of a "mean sea level" is difficult because of the many factors that affect sea level. Instantaneous sea level varies substantially on several scales of time and space. This is because the sea is in constant motion, affected by the tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity, and so forth. The mean sea level at a particular location may be calculated over an extended time period and used as a datum. For example, hourly measurements may be averaged over a full Metonic 19-year lunar cycle to determine the mean sea level at an official tide gauge.
Still-water level or still-water sea level (SWL) is the level of the sea with motions such as wind waves averaged out.
Then MSL implies the SWL further averaged over a period of time such that changes due to, e.g., the tides, also have zero mean.
Global MSL refers to a spatial average over the entire ocean area, typically using large sets of tide gauges and/or satellite measurements.
One often measures the values of MSL with respect to the land; hence a change in relative MSL or (relative sea level) can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates, or both.
In the UK, the ordnance datum (the 0 metres height on UK maps) is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921. Before 1921, the vertical datum was MSL at the Victoria Dock, Liverpool.
Since the times of the Russian Empire, in Russia and its other former parts, now independent states, the sea level is measured from the zero level of Kronstadt Sea-Gauge.
In Hong Kong, "mPD" is a surveying term meaning "metres above Principal Datum" and refers to height of above chart datum and below the average sea level.
In France, the Marégraphe in Marseilles has measured the sea level continuously since 1883 and offers the longest collated data series about the sea level. It is used for part of continental Europe and the main part of Africa as the official sea level. Spain uses the reference at Alicante to measure heights below or above sea level, while the European Vertical Reference System is calibrated to the Amsterdam Peil elevation, which dates back to the 1690s.
Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992. A joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001 and the Ocean Surface Topography Mission on the Jason-2 satellite in 2008.
Height above mean sea level
Height above mean sea level (AMSL) is the elevation (on the ground) or altitude (in the air) of an object, relative to a reference datum for mean sea level (MSL). It is also used in aviation, where some heights are recorded and reported with respect to mean sea level (contrast with flight level), and in the atmospheric sciences, and in land surveying. An alternative is to base height measurements on a reference ellipsoid approximating the entire Earth, which is what systems such as GPS do. In aviation, the reference ellipsoid known as WGS84 is increasingly used to define heights; however, differences up to exist between this ellipsoid height and local mean sea level. Another alternative is to use a geoid-based vertical datum such as NAVD88 and the global EGM96 (part of WGS84). Details vary in different countries.
When referring to geographic features such as mountains, on a topographic map variations in elevation are shown by contour lines. A mountain's highest point or summit is typically illustrated with the AMSL height in metres, feet or both. In unusual cases where a land location is below sea level, such as Death Valley, California, the elevation AMSL is negative.
Difficulties in use
It is often necessary to compare the local height of the mean sea surface with a "level" reference surface, or geodetic datum, called the geoid. In the absence of external forces, the local mean sea level would coincide with this geoid surface, being an equipotential surface of the Earth's gravitational field which, in itself, does not conform to a simple sphere or ellipsoid and exhibits gravity anomalies such as those measured by NASA's GRACE satellites. In reality, the geoid surface is not directly observed, even as a long-term average, due to ocean currents, air pressure variations, temperature and salinity variations, etc. The location-dependent but time-persistent separation between local mean sea level and the geoid is referred to as (mean) ocean surface topography. It varies globally in a typical range of ±.
Dry land
Several terms are used to describe the changing relationships between sea level and dry land.
"relative" means change relative to a fixed point in the sediment pile.
"eustatic" refers to global changes in sea level relative to a fixed point, such as the centre of the earth, for example as a result of melting ice-caps.
"steric" refers to global changes in sea level due to thermal expansion and salinity variations.
"isostatic" refers to changes in the level of the land relative to a fixed point in the earth, possibly due to thermal buoyancy or tectonic effects, disregarding changes in the volume of water in the oceans.
The melting of glaciers at the end of ice ages results in isostatic post-glacial rebound, when land rises after the weight of ice is removed. Conversely, older volcanic islands experience relative sea level rise, due to isostatic subsidence from the weight of cooling volcanos. The subsidence of land due to the withdrawal of groundwater is another isostatic cause of relative sea level rise.
On planets that lack a liquid ocean, planetologists can calculate a "mean altitude" by averaging the heights of all points on the surface. This altitude, sometimes referred to as a "sea level" or zero-level elevation, serves equivalently as a reference for the height of planetary features.
Change
Local and eustatic
Local mean sea level (LMSL) is defined as the height of the sea with respect to a land benchmark, averaged over a period of time long enough that fluctuations caused by waves and tides are smoothed out, typically a year or more. One must adjust perceived changes in LMSL to account for vertical movements of the land, which can occur at rates similar to sea level changes (millimetres per year).
Some land movements occur because of isostatic adjustment to the melting of ice sheets at the end of the last ice age. The weight of the ice sheet depresses the underlying land, and when the ice melts away the land slowly rebounds. Changes in ground-based ice volume also affect local and regional sea levels by the readjustment of the geoid and true polar wander. Atmospheric pressure, ocean currents and local ocean temperature changes can affect LMSL as well.
Eustatic sea level change (global as opposed to local change) is due to change in either the volume of water in the world's oceans or the volume of the oceanic basins. Two major mechanisms are currently causing eustatic sea level rise. First, shrinking land ice, such as mountain glaciers and polar ice sheets, is releasing water into the oceans. Second, as ocean temperatures rise, the warmer water expands.
Short-term and periodic changes
Many factors can produce short-term changes in sea level, typically within a few metres, in timeframes ranging from minutes to months:
Recent changes
Aviation
Pilots can estimate height above sea level with an altimeter set to a defined barometric pressure. Generally, the pressure used to set the altimeter is the barometric pressure that would exist at MSL in the region being flown over. This pressure is referred to as either QNH or "altimeter" and is transmitted to the pilot by radio from air traffic control (ATC) or an automatic terminal information service (ATIS). Since the terrain elevation is also referenced to MSL, the pilot can estimate height above ground by subtracting the terrain altitude from the altimeter reading. Aviation charts are divided into boxes and the maximum terrain altitude from MSL in each box is clearly indicated. Once above the transition altitude, the altimeter is set to the international standard atmosphere (ISA) pressure at MSL which is 1013.25 hPa or 29.92 inHg.
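A minimal sketch of this pressure-to-altitude conversion, assuming the ICAO standard atmosphere within the troposphere, is given below. The QNH and static pressure values are illustrative, and real altimeters include further instrument and temperature corrections.

```python
# Minimal sketch: convert static pressure to indicated altitude with the
# ICAO standard atmosphere (troposphere only). Example values are illustrative.

T0 = 288.15       # K, ISA sea-level temperature
L  = 0.0065       # K/m, temperature lapse rate
G  = 9.80665      # m/s^2, standard gravity
R  = 287.053      # J/(kg K), specific gas constant for dry air
EXPONENT = R * L / G   # ~0.190263

def indicated_altitude_m(static_pressure_hpa, qnh_hpa):
    """Altitude above the QNH reference (approximately mean sea level), in metres."""
    return (T0 / L) * (1.0 - (static_pressure_hpa / qnh_hpa) ** EXPONENT)

# Example: static pressure 850 hPa with QNH set to the ISA MSL pressure.
print(round(indicated_altitude_m(850.0, 1013.25)), "m")   # roughly 1457 m
```

Setting the altimeter to the local QNH instead of 1013.25 hPa simply changes the reference pressure in the same formula, which is why the indicated altitude then approximates height above mean sea level rather than a pressure altitude.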
See also
(UK and Ireland)
References
External links
Sea Level Rise:Understanding the past – Improving projections for the future
Permanent Service for Mean Sea Level
Global sea level change: Determination and interpretation
Environment Protection Agency Sea level rise reports
Properties of isostasy and eustasy
Measuring Sea Level from Space
Rising Tide Video: Scripps Institution of Oceanography
Sea Levels Online: National Ocean Service (CO-OPS)
Système d'Observation du Niveau des Eaux Littorales (SONEL)
Sea level rise – How much and how fast will sea level rise over the coming centuries?
Geodesy
Physical oceanography
Oceanographical terminology
Vertical datums | Sea level | [
"Physics",
"Mathematics"
] | 2,204 | [
"Applied mathematics",
"Geodesy",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
62,641 | https://en.wikipedia.org/wiki/Vector%20field | In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space . A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point.
The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space.
Likewise, given n coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other.
Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).
More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.
Definition
Vector fields on subsets of Euclidean space
Given a subset S of Rn, a vector field is represented by a vector-valued function V: S → Rn in standard Cartesian coordinates (x1, ..., xn). If each component of V is continuous, then V is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.
One standard notation is to write ∂/∂x1, ..., ∂/∂xn for the unit vectors in the coordinate directions. In these terms, every smooth vector field V on an open subset S of Rn can be written as
V = V1 ∂/∂x1 + ... + Vn ∂/∂xn
for some smooth functions V1, ..., Vn on S. The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, given by differentiating in the direction of the vector field.
Example: The vector field V(x, y) = (−y, x) describes a counterclockwise rotation around the origin in the plane. To show that the function f(x, y) = x² + y² is rotationally invariant, compute V(f) = −y·(2x) + x·(2y) = 0, so f is constant along the flow of the field.
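Assuming the rotation field V(x, y) = (−y, x) as above, the same invariance check can be carried out symbolically:

```python
import sympy as sp

# Symbolic check that the rotation field V = -y d/dx + x d/dy annihilates
# f(x, y) = x**2 + y**2, i.e. f is constant along the flow of V.
x, y = sp.symbols("x y")
f = x**2 + y**2
V_f = -y * sp.diff(f, x) + x * sp.diff(f, y)
print(sp.simplify(V_f))   # prints 0
```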
Given vector fields , defined on and a smooth function defined on , the operations of scalar multiplication and vector addition,
make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise.
Coordinate transformation law
In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.
Thus, suppose that is a choice of Cartesian coordinates, in terms of which the components of the vector are
and suppose that (y1,...,yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law
Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law () relating the different coordinate systems.
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.
Vector fields on manifolds
Given a differentiable manifold , a vector field on is an assignment of a tangent vector to each point in . More precisely, a vector field is a mapping from into the tangent bundle so that is the identity mapping
where denotes the projection from to . In other words, a vector field is a section of the tangent bundle.
An alternative definition: A smooth vector field on a manifold is a linear map such that is a derivation: for all .
If the manifold is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold is often denoted by or (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by (a fraktur "X").
Examples
A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas.
Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid.
Streamlines, streaklines and pathlines are 3 types of lines that can be made from (time-dependent) vector fields. They are:
streaklines: the line produced by particles passing through a specific fixed point over various times
pathlines: showing the path that a given particle (of zero mass) would follow.
streamlines (or fieldlines): the path of a particle influenced by the instantaneous field (i.e., the path of a particle if the field is held fixed).
Magnetic fields. The fieldlines can be revealed using small iron filings.
Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electric field.
A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases.
Gradient field in Euclidean spaces
Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).
A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that
The associated flow is called the gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero:
Central field in Euclidean spaces
A -vector field over is called a central field if
where is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.
The point 0 is called the center of the field.
Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.
Operations on vector fields
Line integral
A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve.
The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
Given a vector field and a curve , parametrized by in (where and are real numbers), the line integral is defined as
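A minimal numerical sketch of this definition, approximating the integral of F(γ(t))·γ′(t) over the parameter interval, is shown below; the field and curve are illustrative choices.

```python
import numpy as np

# Numerical sketch of a line integral: integrate F . dr along a parametrised
# curve gamma(t), t in [a, b], by sampling F(gamma(t)) . gamma'(t).

def line_integral(F, gamma, dgamma, a, b, n=10_000):
    t = np.linspace(a, b, n)
    points = gamma(t)                      # shape (2, n)
    tangents = dgamma(t)                   # shape (2, n)
    integrand = np.sum(F(points) * tangents, axis=0)
    return np.trapz(integrand, t)

# Example: F(x, y) = (-y, x) along the unit circle, one full turn.
F      = lambda p: np.vstack((-p[1], p[0]))
gamma  = lambda t: np.vstack((np.cos(t), np.sin(t)))
dgamma = lambda t: np.vstack((-np.sin(t), np.cos(t)))

print(line_integral(F, gamma, dgamma, 0.0, 2 * np.pi))   # close to 2*pi
```

Along the unit circle the integrand is identically 1 for this field, so the numerical value approximates 2π, in line with the exact calculation.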
To show vector field topology one can use line integral convolution.
Divergence
The divergence of a vector field on Euclidean space is a function (or scalar field). In three-dimensions, the divergence is defined by
with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.
The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
Curl in three dimensions
The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by
The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
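Both operations can be computed symbolically. The sketch below evaluates the divergence and curl of an arbitrarily chosen three-dimensional field and is only an illustration of the definitions above.

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

# Symbolic divergence and curl of a sample three-dimensional vector field.
N = CoordSys3D("N")
x, y, z = N.x, N.y, N.z

V = x*y*N.i + (y*z - x**2)*N.j + sp.sin(z)*N.k

print(divergence(V))   # d/dx(xy) + d/dy(yz - x^2) + d/dz(sin z) = y + z + cos(z)
print(curl(V))         # the curl, expressed as a vector in the coordinate system N
```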
Index of a vector field
The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity.
Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n-1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere Sn−1. This defines a continuous map from S to Sn−1. The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself.
The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)k around a saddle that has k contracting dimensions and n−k expanding dimensions.
The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes.
For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem.
For a vector field on a compact manifold with finitely many zeroes, the Poincaré-Hopf theorem states that the vector field’s index is the manifold’s Euler characteristic.
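In the plane, the index of an isolated zero can also be estimated numerically as the winding number of V/|V| around a small circle enclosing the zero, as in the sketch below; the sample fields and the circle radius are illustrative choices.

```python
import numpy as np

# Numerical sketch: the index of an isolated zero of a planar vector field equals
# the winding number of V/|V| around a small circle enclosing that zero.

def planar_index(V, center=(0.0, 0.0), radius=0.1, n=4000):
    t = np.linspace(0.0, 2 * np.pi, n + 1)        # closed loop of sample points
    px = center[0] + radius * np.cos(t)
    py = center[1] + radius * np.sin(t)
    vx, vy = V(px, py)
    angles = np.unwrap(np.arctan2(vy, vx))        # continuous angle of V along the loop
    return int(np.round((angles[-1] - angles[0]) / (2 * np.pi)))

print(planar_index(lambda x, y: (x, y)))     # source: index +1
print(planar_index(lambda x, y: (x, -y)))    # saddle: index -1
print(planar_index(lambda x, y: (-y, x)))    # rotation about the origin: index +1
```

The printed values match the planar values quoted above: +1 at a source or sink, −1 at a saddle.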
Physical intuition
Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory.
In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field.
In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm.
Flow curves
Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
Given a vector field defined on , one defines curves on such that for each in an interval ,
By the Picard–Lindelöf theorem, if is Lipschitz continuous there is a unique -curve for each point in so that, for some ,
The curves are called integral curves or trajectories (or less commonly, flow lines) of the vector field and partition into equivalence classes. It is not always possible to extend the interval to the whole real number line. The flow may for example reach the edge of in a finite time.
In two or three dimensions one can visualize the vector field as giving rise to a flow on . If we drop a particle into this flow at a point it will move along the curve in the flow depending on the initial point . If is a stationary point of (i.e., the vector field is equal to the zero vector at the point ), then the particle will remain at .
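A minimal numerical sketch of such an integral curve, using a generic ODE solver on the rotation field used earlier, is shown below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: numerically trace an integral curve of the rotation field
# V(x, y) = (-y, x) starting from (1, 0); the trajectory stays on the unit circle.

def V(t, p):
    x, y = p
    return [-y, x]

sol = solve_ivp(V, (0.0, 2 * np.pi), [1.0, 0.0], dense_output=True, rtol=1e-9)
for t in np.linspace(0.0, 2 * np.pi, 5):
    x, y = sol.sol(t)
    print(f"t = {t:5.2f}  (x, y) = ({x:+.3f}, {y:+.3f})  radius = {np.hypot(x, y):.4f}")
```

The radius stays at 1 up to the solver tolerance, which is the numerical counterpart of the statement that the flow curves of this field are circles around the stationary point at the origin.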
Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups.
Complete vector fields
By definition, a vector field on is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If is a complete vector field on , then the one-parameter group of diffeomorphisms generated by the flow along exists for all time; it is described by a smooth mapping
On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field on the real line is given by V(x) = x² d/dx. Indeed, the differential equation dx/dt = x², with initial condition x(0) = x₀, has the unique solution x(t) = x₀/(1 − t x₀) if x₀ ≠ 0 (and x(t) = 0 for all t if x₀ = 0). Hence for x₀ ≠ 0, x(t) is undefined at t = 1/x₀, so the flow cannot be defined for all values of the time parameter.
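The finite-time blow-up in this example can also be observed numerically; in the sketch below the integration is stopped by an event once the solution exceeds an arbitrarily chosen large threshold.

```python
from scipy.integrate import solve_ivp

# The flow of V(x) = x^2 d/dx blows up in finite time: starting at x0 = 1 the
# exact solution is x(t) = 1/(1 - t), which leaves every bound as t -> 1,
# so the vector field is not complete. The event stops the solver near blow-up.

blow_up = lambda t, x: abs(x[0]) - 1e6   # threshold 1e6 is an arbitrary choice
blow_up.terminal = True

sol = solve_ivp(lambda t, x: [x[0]**2], (0.0, 2.0), [1.0],
                events=blow_up, rtol=1e-8, atol=1e-10)
print("integration stopped at t =", sol.t[-1])   # slightly below 1.0
```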
The Lie bracket
The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions :
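In coordinates, the bracket of X = Σi Xi ∂/∂xi and Y = Σi Yi ∂/∂xi has j-th component Σi (Xi ∂Yj/∂xi − Yi ∂Xj/∂xi). The sketch below computes it symbolically for two simple planar fields and is only an illustration of the definition.

```python
import sympy as sp

# Sketch: the Lie bracket [X, Y] of two vector fields on R^2, computed from its
# action on functions, [X, Y](f) = X(Y(f)) - Y(X(f)), component by component.

x, y = sp.symbols("x y")

def lie_bracket(X, Y, coords):
    """X, Y are lists of component functions; returns the components of [X, Y]."""
    return [sp.simplify(
                sum(Xi * sp.diff(Yj, ci) - Yi * sp.diff(Xj, ci)
                    for Xi, Yi, ci in zip(X, Y, coords)))
            for Xj, Yj in zip(X, Y)]

X = [1, 0]        # translation in the x-direction
Y = [-y, x]       # rotation about the origin
print(lie_bracket(X, Y, [x, y]))   # [0, 1]: the two flows fail to commute by a y-translation
```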
f-relatedness
Given a smooth function between manifolds, , the derivative is an induced map on tangent bundles, . Given vector fields and , we say that is -related to if the equation holds.
If is -related to , , then the Lie bracket is -related to .
Generalizations
Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields.
Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.
See also
Eisenbud–Levine–Khimshiashvili signature formula
Field line
Field strength
Gradient flow and balanced flow in atmospheric dynamics
Lie derivative
Scalar field
Time-dependent vector field
Vector fields in cylindrical and spherical coordinates
Tensor fields
Slope field
References
Bibliography
External links
Online Vector Field Editor
Vector field — Mathworld
Vector field — PlanetMath
3D Magnetic field viewer
Vector fields and field lines
Vector field simulation An interactive application to show the effects of vector fields
Differential topology
Field
Functions and mappings
F | Vector field | [
"Physics",
"Mathematics"
] | 3,473 | [
"Mathematical analysis",
"Functions and mappings",
"Physical quantities",
"Quantity",
"Mathematical objects",
"Topology",
"Differential topology",
"Vector physical quantities",
"Mathematical relations"
] |
62,929 | https://en.wikipedia.org/wiki/Aqua%20regia | Aqua regia (; from Latin, "regal water" or "royal water") is a mixture of nitric acid and hydrochloric acid, optimally in a molar ratio of 1:3. Aqua regia is a fuming liquid. Freshly prepared aqua regia is colorless, but it turns yellow, orange or red within seconds from the formation of nitrosyl chloride and nitrogen dioxide. It was so named by alchemists because it can dissolve noble metals like gold and platinum, though not all metals.
Preparation and decomposition
Upon mixing of concentrated hydrochloric acid and concentrated nitric acid, chemical reactions occur. These reactions result in the volatile products nitrosyl chloride and chlorine gas:
HNO3 + 3 HCl → NOCl + Cl2 + 2 H2O
as evidenced by the fuming nature and characteristic yellow color of aqua regia. As the volatile products escape from solution, aqua regia loses its potency. Nitrosyl chloride (NOCl) can further decompose into nitric oxide (NO) and elemental chlorine (Cl2):
2 NOCl ⇌ 2 NO + Cl2
This dissociation is equilibrium-limited. Therefore, in addition to nitrosyl chloride and chlorine, the fumes over aqua regia also contain nitric oxide (NO). Because nitric oxide readily reacts with atmospheric oxygen, the gases produced also contain nitrogen dioxide, NO2 (red fume):
2 NO + O2 → 2 NO2
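For mixing, the 1:3 molar ratio translates into roughly four volumes of concentrated hydrochloric acid per volume of concentrated nitric acid under the assumed stock concentrations in the sketch below; these molarities are typical laboratory values used for illustration, not specifications.

```python
# Back-of-the-envelope volume calculation for the 1:3 molar HNO3:HCl ratio.
# The molarities of the concentrated acids are assumed typical values;
# actual stock concentrations vary from supplier to supplier.

HNO3_MOLARITY = 15.8   # mol/L, ~68-70 % nitric acid (assumption)
HCL_MOLARITY  = 12.0   # mol/L, ~37 % hydrochloric acid (assumption)

def hcl_volume_per_hno3_volume(molar_ratio=3.0):
    """Volume of concentrated HCl needed per unit volume of concentrated HNO3."""
    return molar_ratio * HNO3_MOLARITY / HCL_MOLARITY

print(f"~{hcl_volume_per_hno3_volume():.1f} volumes of conc. HCl per volume of conc. HNO3")
```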
Applications
Aqua regia is primarily used to produce chloroauric acid, the electrolyte in the Wohlwill process for refining the highest purity (99.999%) gold.
Aqua regia is also used in etching and in specific analytic procedures. It is also used in some laboratories to clean glassware of organic compounds and metal particles. This method is preferred among most over the more traditional chromic acid bath for cleaning NMR tubes, because no traces of paramagnetic chromium can remain to spoil spectra. While chromic acid baths are discouraged because of the high toxicity of chromium and the potential for explosions, aqua regia is itself very corrosive and has been implicated in several explosions due to mishandling.
Because its components react quickly, resulting in its decomposition, aqua regia quickly loses its effectiveness (yet remains a strong acid), so its components are usually only mixed immediately before use.
Chemistry
Dissolving gold
Aqua regia dissolves gold, although neither constituent acid will do so alone. Nitric acid is a powerful oxidizer, which will dissolve a very small quantity of gold, forming gold(III) ions (Au3+). The hydrochloric acid provides a ready supply of chloride ions (Cl−), which react with the gold ions to produce tetrachloroaurate(III) anions (AuCl4−), also in solution. The reaction with hydrochloric acid is an equilibrium reaction that favors formation of tetrachloroaurate(III) anions. This results in a removal of gold ions from solution and allows further oxidation of gold to take place. The gold dissolves to become chloroauric acid. In addition, gold may be dissolved by the chlorine present in aqua regia. Appropriate equations are:
Au + 3 HNO3 + 4 HCl ⇌ AuCl4− + 3 NO2 + H3O+ + 2 H2O
or
Au + HNO3 + 4 HCl ⇌ AuCl4− + NO + H3O+ + H2O.
Solid tetrachloroauric acid may be isolated by evaporating the excess aqua regia, and decomposing the residual nitric acid by repeatedly heating the solution with additional hydrochloric acid. That step reduces nitric acid (see decomposition of aqua regia). If elemental gold is desired, it may be selectively reduced with reducing agents such as sulfur dioxide, hydrazine, oxalic acid, etc. The equation for the reduction of oxidized gold (Au3+) by sulfur dioxide (SO2) is the following:
2 Au3+(aq) + 3 SO2(g) + 6 H2O(l) → 2 Au(s) + 3 SO42−(aq) + 12 H+(aq)
Dissolving platinum
Similar equations can be written for platinum. As with gold, the oxidation reaction can be written with either nitric oxide or nitrogen dioxide as the nitrogen oxide product:
The oxidized platinum ion then reacts with chloride ions resulting in the chloroplatinate ion:
Experimental evidence reveals that the reaction of platinum with aqua regia is considerably more complex. The initial reactions produce a mixture of chloroplatinous acid (H2PtCl4) and nitrosoplatinic chloride ((NO)2PtCl4). The nitrosoplatinic chloride is a solid product. If full dissolution of the platinum is desired, repeated extractions of the residual solids with concentrated hydrochloric acid must be performed:
2 Pt + 2 HNO3 + 8 HCl → (NO)2PtCl4 + H2PtCl4 + 4 H2O
and
(NO)2PtCl4 + 2 HCl → H2PtCl4 + 2 NOCl
The chloroplatinous acid can be oxidized to chloroplatinic acid by saturating the solution with molecular chlorine (Cl2) while heating:
H2PtCl4 + Cl2 → H2PtCl6
Dissolving platinum solids in aqua regia led to the discovery of the densest metals, iridium and osmium: both occur in platinum ores, are not dissolved by aqua regia, and instead collect as an insoluble metallic powder (elemental Ir and Os) at the base of the vessel.
Precipitating dissolved platinum
As a practical matter, when platinum group metals are purified through dissolution in aqua regia, gold (commonly associated with PGMs) is precipitated by treatment with iron(II) chloride. Platinum in the filtrate, as hexachloroplatinate(IV), is converted to ammonium hexachloroplatinate by the addition of ammonium chloride. This ammonium salt is extremely insoluble, and it can be filtered off. Ignition (strong heating) converts it to platinum metal:
3 (NH4)2PtCl6 → 3 Pt + 2 NH4Cl + 2 N2 + 16 HCl
Unprecipitated hexachloroplatinate(IV) is reduced with elemental zinc, and a similar method is suitable for small scale recovery of platinum from laboratory residues.
Reaction with tin
Aqua regia reacts with tin to form tin(IV) chloride, containing tin in its highest oxidation state:
Sn + 2 HNO3 + 4 HCl → SnCl4 + NO2 + NO + 3 H2O
Reaction with other substances
It can react with iron pyrite to form iron(III) chloride:
FeS2 + 5 HNO3 + 3 HCl → FeCl3 + 2 H2SO4 + 5 NO + 2 H2O
History
Aqua regia first appeared in the De inventione veritatis ("On the Discovery of Truth") by pseudo-Geber (after ), who produced it by adding sal ammoniac (ammonium chloride) to nitric acid. The preparation of aqua regia by directly mixing hydrochloric acid with nitric acid only became possible after the discovery in the late sixteenth century of the process by which free hydrochloric acid can be produced.
The third of Basil Valentine's keys () shows a dragon in the foreground and a fox eating a rooster in the background. The rooster symbolizes gold (from its association with sunrise and the sun's association with gold), and the fox represents aqua regia. The repetitive dissolving, heating, and redissolving (the rooster eating the fox eating the rooster) leads to the buildup of chlorine gas in the flask. The gold then crystallizes in the form of gold(III) chloride, whose red crystals Basil called "the rose of our masters" and "the red dragon's blood". The reaction was not reported again in the chemical literature until 1895.
Antoine Lavoisier called aqua regia nitro-muriatic acid in 1789.
When Germany invaded Denmark in World War II, Hungarian chemist George de Hevesy dissolved the gold Nobel Prizes of German physicists Max von Laue (1914) and James Franck (1925) in aqua regia to prevent the Nazis from confiscating them. The German government had prohibited Germans from accepting or keeping any Nobel Prize after jailed peace activist Carl von Ossietzky had received the Nobel Peace Prize in 1935. De Hevesy placed the resulting solution on a shelf in his laboratory at the Niels Bohr Institute. It was subsequently ignored by the Nazis who thought the jar—one of perhaps hundreds on the shelving—contained common chemicals. After the war, de Hevesy returned to find the solution undisturbed and precipitated the gold out of the acid. The gold was returned to the Royal Swedish Academy of Sciences and the Nobel Foundation. They re-cast the medals and again presented them to Laue and Franck.
See also
Piranha solution, sometimes also used to clean glassware
Notes
References
External links
Chemistry Comes Alive! Aqua Regia
Aqua Regia at The Periodic Table of Videos (University of Nottingham)
Demonstration of Gold Coin Dissolving in Acid (Aqua Regia)
Gold
Alchemical substances
Oxidizing mixtures
Oxidizing acids
Mineral acids | Aqua regia | [
"Chemistry"
] | 1,736 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Alchemical substances",
"Oxidizing agents",
"Oxidizing mixtures",
"Oxidizing acids"
] |
63,011 | https://en.wikipedia.org/wiki/Nocturnality | Nocturnality is a behavior in some non-human animals characterized by being active during the night and sleeping during the day. The common adjective is "nocturnal", versus diurnal meaning the opposite.
Nocturnal creatures generally have highly developed senses of hearing, smell, and specially adapted eyesight. Some animals, such as cats and ferrets, have eyes that can adapt to both low-level and bright day levels of illumination (see metaturnal). Others, such as bushbabies and (some) bats, can function only at night. Many nocturnal creatures including tarsiers and some owls have large eyes in comparison with their body size to compensate for the lower light levels at night. More specifically, they have been found to have a larger cornea relative to their eye size than diurnal creatures to increase their visual sensitivity in the low-light conditions. Nocturnality helps wasps, such as Apoica flavissima, avoid hunting in intense sunlight.
Diurnal animals, including humans (except for night owls), squirrels and songbirds, are active during the daytime. Crepuscular species, such as rabbits, skunks, tigers and hyenas, are often erroneously referred to as nocturnal. Cathemeral species, such as fossas and lions, are active both in the day and at night.
Origins
While it is difficult to say which came first, nocturnality or diurnality, a hypothesis in evolutionary biology, the nocturnal bottleneck theory, postulates that in the Mesozoic, many ancestors of modern-day mammals evolved nocturnal characteristics in order to avoid contact with the numerous diurnal predators. A recent study attempts to answer the question of why so many modern-day mammals retain these nocturnal characteristics even though they are not active at night. The leading answer is that the high visual acuity that comes with diurnal characteristics is no longer needed due to the evolution of compensatory sensory systems, such as a heightened sense of smell and more astute auditory systems. In a recent study, the skulls of recently extinct elephant birds and modern-day nocturnal kiwi were examined to recreate their likely brain and skull formation. The reconstructions indicated that the olfactory bulbs were much larger in comparison to the optic lobes, indicating that both have a common ancestor which evolved to function as a nocturnal species, decreasing eyesight in favor of a better sense of smell. The anomaly in this theory was the anthropoids, which appeared to show the most divergence from nocturnality of all the organisms examined. While most mammals did not exhibit the morphological characteristics expected of a nocturnal creature, reptiles and birds fit in perfectly: a larger cornea and pupil correlated well with whether these two classes of organisms were nocturnal or not.
Advantages
Resource competition
Being active at night is a form of niche differentiation, where a species' niche is partitioned not by the amount of resources but by the amount of time (i.e. temporal division of the ecological niche). Hawks and owls can hunt the same field or meadow for the same rodents without conflict because hawks are diurnal and owls are nocturnal; they are not in competition for each other's prey. Pollination is another niche in which nocturnality lessens competition: nocturnal pollinators such as moths, beetles, thrips, and bats have a lower risk of being seen by predators, and plants have evolved temporally targeted scent production and ambient heat to attract nocturnal pollinators. As with predators hunting the same prey, some plants, such as apples, can be pollinated both during the day and at night.
Predation
Nocturnality is a form of crypsis, an adaptation to avoid or enhance predation. Although lions are cathemeral, and may be active at any time of day or night, they prefer to hunt at night because many of their prey species (zebra, antelope, impala, wildebeest, etc.) have poor night vision. Many species of small rodents, such as the large Japanese field mouse, are active at night because most of the dozen or so birds of prey that hunt them are diurnal. There are many diurnal species that exhibit some nocturnal behaviors. For example, many seabirds and sea turtles only gather at breeding sites or colonies at night to reduce the risk of predation to themselves and/or their offspring. Nocturnal species take advantage of the night time to prey on species that are used to avoiding diurnal predators. Some nocturnal fish species will use the moonlight to prey on zooplankton species that come to the surface at night. Some species have developed unique adaptations that allow them to hunt in the dark. Bats are famous for using echolocation to hunt down their prey, using sonar sounds to capture them in the dark.
Water conservation
Another reason for nocturnality is avoiding the heat of the day. This is especially true in arid biomes like deserts, where nocturnal behavior prevents creatures from losing precious water during the hot, dry daytime. This is an adaptation that enhances osmoregulation. One of the reasons that (cathemeral) lions prefer to hunt at night is to conserve water. Hamilton's frog, found on Stephens and Maud Islands, stays hidden for the majority of the day when temperatures are warmer and is mainly active at night, coming out during the day only in humid, cool conditions.
Many plant species native to arid biomes have adapted so that their flowers only open at night when the sun's intense heat cannot wither and destroy their moist, delicate blossoms. These flowers are pollinated by bats, another creature of the night.
Climate change and rising global temperatures have led an increasing number of diurnal species to push their activity patterns closer towards crepuscular or fully nocturnal behavior. This adaptive measure allows species to avoid the heat of the day without having to leave that particular habitat.
Human disturbances
The exponential increase in human expansion and technological advances over the last few centuries has had a major effect on nocturnal animals, as well as on diurnal species. The causes of these effects can be traced to distinct, sometimes overlapping areas: light pollution and spatial disturbance.
Light pollution
Light pollution is a major issue for nocturnal species, and the impact continues to increase as electricity reaches parts of the world that previously had no access. Species in the tropics are generally more affected by this due to the change in their relatively constant light patterns, but temperate species relying on day-night triggers for behavioral patterns are affected as well. Many diurnal species see the benefit of a "longer day", allowing for a longer hunting period, which is detrimental to their nocturnal prey trying to avoid them.
Orientation
Light pollution can disorient species that are used to darkness, as their dark-adapted eyes are not as accustomed to the artificial lighting. Insects are the most obvious example: attracted by the lighting, they are usually killed by either the heat or the electrical current. Some species of frogs are blinded by the quick changes in light, while nocturnal migratory birds may be disoriented, causing them to lose direction, tire out, or be captured by predators. Sea turtles are particularly affected by this, adding to the number of threats facing the different endangered species. Adults are likely to stay away from artificially lit beaches that they might prefer to lay eggs on, as there is less cover against predators. Additionally, baby sea turtles that hatch from eggs on artificially lit beaches often get lost, heading towards the light sources as opposed to the ocean.
Rhythmic behaviors
Light pollution affects rhythmic behaviors in both seasonal and daily patterns. Migrating birds or mammals might have issues with the timing of their movement, for example. On a day-to-day basis, species can see significant changes in their internal temperatures, their general movement, feeding, and body mass. These small-scale changes can eventually lead to a population decline, as well as hurting local trophic levels and interconnected species. Some typically diurnal species have even become crepuscular or nocturnal as a result of light pollution and general human disturbance.
Reproduction
There have been documented effects of light pollution on reproductive cycles and factors in different species. It can affect mate choice, migration to breeding grounds, and nest site selection. In male green frogs, artificial light causes a decrease in mating calls, and the frogs keep moving around instead of waiting for a potential mate to arrive. This hurts the overall fitness of the species, which is concerning considering the overall decrease in amphibian populations.
Predation
Some nocturnal predator-prey relationships are interrupted by artificial lighting. Bats that are fast-moving are often at an advantage with insects being drawn to light; they are fast enough to escape any predators also attracted to the light, leaving slow-moving bats at a disadvantage. Another example is harbor seals eating juvenile salmon that moved down a river lit by nearby artificial lighting. Once the lights were turned off, predation levels decreased. Many diurnal prey species forced into being nocturnal are susceptible to nocturnal predators and those species with poor nocturnal eyesight often bear the brunt of the cost.
Spatial disturbance
The increasing amount of habitat destruction worldwide as a result of human expansion has given both advantages and disadvantages to different nocturnal animals. As a result of peak human activity in the daytime, more species are likely to be active at night in order to avoid the new disturbance in their habitat. Carnivorous predators, however, are less wary of the disturbance, feeding on human waste and keeping a relatively similar spatial habitat as they did before. In comparison, herbivorous prey tend to stay in areas where human disturbance is low, limiting both their resources and their spatial habitat. This leads to an imbalance in favor of predators, who increase in population and come out more often at night.
In captivity
Zoos
In zoos, nocturnal animals are usually kept in special night-illumination enclosures to invert their normal sleep-wake cycle and to keep them active during the hours when visitors will be there to see them.
Pets
Hedgehogs and sugar gliders are just two of the many nocturnal species kept as (exotic) pets. Cats have adapted to domestication so that each individual, whether stray alley cat or pampered housecat, can change their activity level at will, becoming nocturnal or diurnal in response to their environment or the routine of their owners. Cats normally demonstrate crepuscular behavior, bordering nocturnal, being most active in hunting and exploration at dusk and dawn.
See also
Adaptation
Antipredator adaptation
Competitive exclusion principle
Crepuscular
Crypsis
Diurnality
List of nocturnal animals
List of nocturnal birds
Niche (ecology)
Niche differentiation
Night owl (person)
Tapetum lucidum
References
Antipredator adaptations
Behavioral ecology
Biological interactions
Chronobiology
Circadian rhythm
Ethology
Predation
Sleep | Nocturnality | [
"Biology"
] | 2,177 | [
"Nocturnal animals",
"Behavior",
"Animals",
"Biological interactions",
"Behavioral ecology",
"Biological defense mechanisms",
"Behavioural sciences",
"Circadian rhythm",
"Antipredator adaptations",
"nan",
"Chronobiology",
"Ethology",
"Sleep"
] |
63,337 | https://en.wikipedia.org/wiki/Supersaturation | In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
History
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's Salt because, unusually, the solubility of this salt in water may decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering the solution and acting as a "starting" site for crystals to form, now called "seeds". Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and the characteristics of the container having an impact on the supersaturation state. He was also able to expand upon the number of salts with which a supersaturated solution can be obtained. Later Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Occurrence and examples
Solid precipitate, liquid solvent
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases with decreasing temperature; in such cases the excess of solute will rapidly separate from the solution as crystals or an amorphous powder.
In a few cases the opposite effect occurs. The example of sodium sulfate in water is well-known and this was why it was used in early studies of solubility.
Recrystallization is a process used to purify chemical compounds. A mixture of the impure compound and solvent is heated until the compound has dissolved. If there is some solid impurity remaining it is removed by filtration. When the temperature of the solution is subsequently lowered it briefly becomes supersaturated and then the compound crystallizes out until chemical equilibrium at the lower temperature is achieved. Impurities remain in the supernatant liquid. In some cases crystals do not form quickly and the solution remains supersaturated after cooling. This is because there is a thermodynamic barrier to the formation of a crystal in a liquid medium. Commonly this is overcome by adding a tiny crystal of the solute compound to the supersaturated solution, a process known as "seeding". Another process in common use is to rub a rod on the side of a glass vessel containing the solution to release microscopic glass particles which can act as nucleation centres. In industry, centrifugation is used to separate the crystals from the supernatant liquid.
Some compounds and mixtures of compounds can form long-lived supersaturated solutions. Carbohydrates are a class of such compounds; the thermodynamic barrier to formation of crystals is rather high because of extensive and irregular hydrogen bonding with the solvent, water. For example, although sucrose can be recrystallised easily, its hydrolysis product, known as "invert sugar" or "golden syrup", is a mixture of glucose and fructose that exists as a viscous, supersaturated liquid. Clear honey contains carbohydrates which may crystallize over a period of weeks.
Supersaturation may be encountered when attempting to crystallize a protein.
Gaseous solute, liquid solvent
The solubility of a gas in a liquid increases with increasing gas pressure. When the external pressure is reduced, the excess gas comes out of solution.
Fizzy drinks are made by subjecting the liquid to carbon dioxide, under pressure. In champagne the CO2 is produced naturally in the final stage of fermentation. When the bottle or can is opened some gas is released in the form of bubbles.
Release of gas from supersaturated tissues can cause an underwater diver to suffer from decompression sickness (a.k.a. the bends) when returning to the surface. This can be fatal if the released gas obstructs critical blood supplies causing ischaemia in vital tissues.
Dissolved gases can be released during oil exploration when a strike is made. This occurs because the oil in oil-bearing rock is under considerable pressure from the over-lying rock, allowing the oil to be supersaturated with respect to dissolved gases.
Liquid formation from a mixture of gases
A cloudburst is an extreme form of production of liquid water from a supersaturated mixture of air and water vapour in the atmosphere. Supersaturation in the vapour phase is related to the surface tension of liquids through the Kelvin equation, the Gibbs–Thomson effect and the Poynting effect.
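For reference, the Kelvin equation mentioned above is commonly written as (a standard form; the symbols are assumed here rather than defined elsewhere in this article):

\ln\frac{p}{p_{\mathrm{sat}}} = \frac{2\gamma V_m}{rRT}

where p/p_sat is the saturation ratio over a droplet of radius r, γ is the surface tension, V_m the molar volume of the liquid, R the gas constant, and T the temperature. The smaller the droplet, the greater the supersaturation (p > p_sat) required for it to be in equilibrium with the vapour.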
The International Association for the Properties of Water and Steam (IAPWS) provides a special equation for the Gibbs free energy in the metastable-vapor region of water in its Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam. All thermodynamic properties for the metastable-vapor region of water can be derived from this equation by means of the appropriate relations of thermodynamic properties to the Gibbs free energy.
Measurement
When measuring the concentration of a solute in a supersaturated gaseous or liquid mixture it is obvious that the pressure inside the cuvette may be greater than the ambient pressure. When this is so a specialized cuvette must be used. The choice of analytical technique to use will depend on the characteristics of the analyte.
Applications
The characteristics of supersaturation have practical applications in terms of pharmaceuticals. By creating a supersaturated solution of a certain drug, it can be ingested in liquid form. The drug can be made driven into a supersaturated state through any normal mechanism and then prevented from precipitating out by adding precipitation inhibitors. Drugs in this state are referred to as "supersaturating drug delivery services," or "SDDS." Oral consumption of a drug in this form is simple and allows for the measurement of very precise dosages. Primarily, it provides a means for drugs with very low solubility to be made into aqueous solutions. In addition, some drugs can undergo supersaturation inside the body despite being ingested in a crystalline form. This phenomenon is known as in vivo supersaturation.
The identification of supersaturated solutions can be used as a tool for marine ecologists to study the activity of organisms and populations. Photosynthetic organisms release O2 gas into the water. Thus, an area of the ocean supersaturated with O2 gas can likely be determined to be rich in photosynthetic activity. Though some O2 will naturally be found in the ocean due to simple physical chemical properties, upwards of 70% of all oxygen gas found in supersaturated regions can be attributed to photosynthetic activity.
Supersaturation in the vapor phase is usually present in the expansion process through steam nozzles that operate with superheated steam at the inlet, which transitions to the saturated state at the outlet. Supersaturation thus becomes an important factor to be taken into account in the design of steam turbines, as it results in an actual mass flow of steam through the nozzle about 1 to 3% greater than the theoretically calculated value that would be expected if the expanding steam underwent a reversible adiabatic process through equilibrium states. In these cases supersaturation occurs because the expansion develops so rapidly, and in such a short time, that the expanding vapor cannot reach its equilibrium state, behaving as if it were superheated. Hence the determination of the expansion ratio, relevant to the calculation of the mass flow through the nozzle, must be done using an adiabatic index of approximately 1.3, like that of superheated steam, instead of 1.135, the value that would apply to a quasi-static adiabatic expansion in the saturated region.
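As a hedged illustration of the role of the adiabatic index k (notation assumed here, not taken from this article), the ideal mass flow through a nozzle of area A fed from stagnation pressure p_0 and temperature T_0 can be written in the St. Venant–Wantzel form:

\dot{m} = A\,p_0\sqrt{\frac{2k}{(k-1)RT_0}\left[\left(\frac{p}{p_0}\right)^{2/k} - \left(\frac{p}{p_0}\right)^{(k+1)/k}\right]}

Evaluating this with k ≈ 1.3 (supersaturated, superheated-like behaviour) instead of k ≈ 1.135 (quasi-static saturated expansion) changes the computed flow by a few percent, consistent with the 1 to 3% discrepancy noted above.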
The study of supersaturation is also relevant to atmospheric studies. Since the 1940s, the presence of supersaturation in the atmosphere has been known. When water is supersaturated in the troposphere, the formation of ice lattices is frequently observed. In a state of saturation, water particles will not form ice under tropospheric conditions. It is not enough for molecules of water to form an ice lattice at saturation pressures; they require a surface to condense onto, or conglomerations of liquid water molecules, in order to freeze. For these reasons, relative humidities over ice in the atmosphere can be found above 100%, meaning supersaturation has occurred. Supersaturation of water is actually very common in the upper troposphere, occurring between 20% and 40% of the time. This can be determined using satellite data from the Atmospheric Infrared Sounder.
References
Thermodynamics
Atmospheric thermodynamics
Underwater diving physics | Supersaturation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,982 | [
"Applied and interdisciplinary physics",
"Thermodynamics",
"Underwater diving physics",
"Dynamical systems"
] |
63,778 | https://en.wikipedia.org/wiki/Uncertainty | Uncertainty or incertitude refers to situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. Uncertainty arises in partially observable or stochastic environments, as well as due to ignorance, indolence, or both. It arises in any number of fields, including insurance, philosophy, physics, statistics, economics, finance, medicine, psychology, sociology, engineering, metrology, meteorology, ecology and information science.
Concepts
Although the terms are used in various ways among the general public, many specialists in decision theory, statistics and other quantitative fields have defined uncertainty, risk, and their measurement as:
Uncertainty
The lack of certainty, a state of limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.
Measurement
Uncertainty can be measured through a set of possible states or outcomes where probabilities are assigned to each possible state or outcome – this also includes the application of a probability density function to continuous variables.
Second-order uncertainty
In statistics and economics, second-order uncertainty is represented in probability density functions over (first-order) probabilities.
Opinions in subjective logic carry this type of uncertainty.
Risk
Risk is a state of uncertainty, where some possible outcomes have an undesired effect or significant loss. Measurement of risk includes a set of measured uncertainties, where some possible outcomes are losses, and the magnitudes of those losses. This also includes loss functions over continuous variables.
Uncertainty versus variability
There is a difference between uncertainty and variability. Uncertainty is quantified by a probability distribution which depends upon knowledge about the likelihood of what the single, true value of the uncertain quantity is. Variability is quantified by a distribution of frequencies of multiple instances of the quantity, derived from observed data.
Knightian uncertainty
In economics, in 1921 Frank Knight distinguished uncertainty from risk with uncertainty being lack of knowledge which is immeasurable and impossible to calculate. Because of the absence of clearly defined statistics in most economic decisions where people face uncertainty, he believed that we cannot measure probabilities in such cases; this is now referred to as Knightian uncertainty.
Knight pointed out that the unfavorable outcome of known risks can be insured during the decision-making process because it has a clearly defined expected probability distribution. Unknown risks have no known expected probability distribution, which can lead to extremely risky company decisions.
Other taxonomies of uncertainties and decisions include a broader sense of uncertainty and how it should be approached from an ethics perspective.
Risk and uncertainty
For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty. If probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has been quantified (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.
Some may represent the risk in this example as the "expected opportunity loss" (EOL) or the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, then clearly the EOL alone is not the perceived value of avoiding the risk.
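A minimal sketch of the arithmetic above, using the article's example figures; the 30% insurer margin is an illustrative assumption, not from the text:

# Expected opportunity loss (EOL) for the outdoor-event example.
p_rain = 0.10                         # 10% chance of rain
loss_if_rain = 100_000                # $100,000 lost if it rains

eol = p_rain * loss_if_rain           # chance of loss times amount of loss
print(f"EOL: ${eol:,.0f}")            # -> EOL: $10,000

# A risk-averse organizer would pay more than the EOL to avoid the risk;
# an insurer would charge EOL plus operating costs and profit (the margin
# below is purely illustrative).
premium = eol * 1.3
print(f"Illustrative insurance premium: ${premium:,.0f}")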
Quantitative uses of the terms uncertainty and risk are fairly consistent among fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk. For example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc.
Vagueness is a form of uncertainty where the analyst is unable to clearly differentiate between two different classes, such as 'person of average height' and 'tall person'. This form of vagueness can be modelled by some variation on Zadeh's fuzzy logic or subjective logic.
Ambiguity is a form of uncertainty where even the possible outcomes have unclear meanings and interpretations. The statement "He returns from the bank" is ambiguous because its interpretation depends on whether the word 'bank' is meant as "the side of a river" or "a financial institution". Ambiguity typically arises in situations where multiple analysts or observers have different interpretations of the same statements.
At the subatomic level, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg uncertainty principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.
Radical uncertainty
The term 'radical uncertainty' was popularised by John Kay and Mervyn King in their book Radical Uncertainty: Decision-Making for an Unknowable Future, published in March 2020. It is distinct from Knightian uncertainty, by whether or not it is 'resolvable'. If uncertainty arises from a lack of knowledge, and that lack of knowledge is resolvable by acquiring knowledge (such as by primary or secondary research) then it is not radical uncertainty. Only when there are no means available to acquire the knowledge which would resolve the uncertainty, is it considered 'radical'.
In measurements
The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) published by ISO. A derived work is for example the National Institute of Standards and Technology (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/Citac publication "Quantifying Uncertainty in Analytical Measurement". The uncertainty of the result of a measurement generally consists of several components. The components are regarded as random variables, and may be grouped into two categories according to the method used to estimate their numerical values:
Type A, those evaluated by statistical methods
Type B, those evaluated by other means, e.g., by assigning a probability distribution
By propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation.
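A hedged sketch of this combination step for a simple additive model y = a + b, where the sensitivity coefficients are 1 and the component variances therefore add directly; all values are illustrative:

import math
import statistics

# Type A component: standard error from repeated observations of a.
a_readings = [10.1, 9.9, 10.2, 10.0, 9.8]
u_a = statistics.stdev(a_readings) / math.sqrt(len(a_readings))

# Type B component: e.g. a +/-0.05 rectangular (uniform) distribution for b,
# whose standard uncertainty is the half-width divided by sqrt(3).
u_b = 0.05 / math.sqrt(3)

# Combined standard uncertainty: square root of the resulting variance.
u_combined = math.sqrt(u_a**2 + u_b**2)
print(f"combined standard uncertainty: {u_combined:.3f}")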
In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:
measured value ± uncertainty
measured value +uncertainty −uncertainty
measured value (uncertainty)
In the last notation, parentheses are the concise notation for the ± notation. For example, applying 10 1⁄2 meters in a scientific or engineering application, it could be written 10.5 m or 10.50 m, by convention meaning accurate to within one tenth of a meter, or one hundredth. The precision is symmetric around the last digit. In this case it's half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it is understood that 10.5 means 10.5 ± 0.05, and 10.50 means 10.50 ± 0.005, also written 10.50(5) and 10.500(5) respectively. But if the accuracy is within two tenths, the uncertainty is ± one tenth, and it is required to be explicit: 10.5 ± 0.1 and 10.50 ± 0.01, or 10.5(1) and 10.50(1). The numbers in parentheses apply to the numeral left of themselves, and are not part of that number, but part of a notation of uncertainty. They apply to the least significant digits. For instance, 1.00794(7) stands for 1.00794 ± 0.00007, while 1.00794(72) stands for 1.00794 ± 0.00072. This concise notation is used for example by IUPAC in stating the atomic mass of elements.
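A small, hypothetical helper illustrating the parenthesis notation above. It assumes the value and the uncertainty are given with the same number of decimal places; the function name and interface are illustrative, not a standard library API:

# Express an uncertainty in the concise parenthesis notation: the digits in
# parentheses are the uncertainty written in the least significant digits of
# the value, e.g. ("1.00794", "0.00007") -> "1.00794(7)".
def concise(value: str, uncertainty: str) -> str:
    digits = uncertainty.lstrip("0.") or "0"   # drop leading zeros and the point
    return f"{value}({digits})"

print(concise("1.00794", "0.00007"))   # -> 1.00794(7)
print(concise("1.00794", "0.00072"))   # -> 1.00794(72)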
The middle notation is used when the error is not symmetrical about the value – for example 3.4 +0.3/−0.2. This can occur when using a logarithmic scale, for example.
Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements. This procedure neglects systematic errors, however.
When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.
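A quick numerical check of these coverage figures, assuming normally distributed errors as the text does:

import math

# Fraction of true values expected within +/- k standard errors of a normal
# distribution is erf(k / sqrt(2)).
for k in (1, 2, 3):
    inside = math.erf(k / math.sqrt(2))
    print(f"{k} sigma: {100 * inside:.1f}% inside, {100 * (1 - inside):.1f}% outside")
# -> 68.3%/31.7%, 95.4%/4.6%, 99.7%/0.3%, matching the figures quoted above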
In this context, uncertainty depends on both the accuracy and precision of the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty is. Precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision.
In the media
Uncertainty in science, and science in general, may be interpreted differently in the public sphere than in the scientific community. This is due in part to the diversity of the public audience, and the tendency for scientists to misunderstand lay audiences and therefore not communicate ideas clearly and effectively. One example is explained by the information deficit model. Also, in the public realm, there are often many scientific voices giving input on a single topic. For example, depending on how an issue is reported in the public sphere, discrepancies between outcomes of multiple scientific studies due to methodological differences could be interpreted by the public as a lack of consensus in a situation where a consensus does in fact exist. This interpretation may have even been intentionally promoted, as scientific uncertainty may be managed to reach certain goals. For example, climate change deniers took the advice of Frank Luntz to frame global warming as an issue of scientific uncertainty, which was a precursor to the conflict frame used by journalists when reporting the issue.
"Indeterminacy can be loosely said to apply to situations in which not all the parameters of the system and their interactions are fully known, whereas ignorance refers to situations in which it is not known what is not known." These unknowns, indeterminacy and ignorance, that exist in science are often "transformed" into uncertainty when reported to the public in order to make issues more manageable, since scientific indeterminacy and ignorance are difficult concepts for scientists to convey without losing credibility. Conversely, uncertainty is often interpreted by the public as ignorance. The transformation of indeterminacy and ignorance into uncertainty may be related to the public's misinterpretation of uncertainty as ignorance.
Journalists may inflate uncertainty (making the science seem more uncertain than it really is) or downplay uncertainty (making the science seem more certain than it really is). One way that journalists inflate uncertainty is by describing new research that contradicts past research without providing context for the change. Journalists may give scientists with minority views equal weight as scientists with majority views, without adequately describing or explaining the state of scientific consensus on the issue. In the same vein, journalists may give non-scientists the same amount of attention and importance as scientists.
Journalists may downplay uncertainty by eliminating "scientists' carefully chosen tentative wording, and by losing these caveats the information is skewed and presented as more certain and conclusive than it really is". Also, stories with a single source or without any context of previous research mean that the subject at hand is presented as more definitive and certain than it is in reality. There is often a "product over process" approach to science journalism that aids, too, in the downplaying of uncertainty. Finally, and most notably for this investigation, when science is framed by journalists as a triumphant quest, uncertainty is erroneously framed as "reducible and resolvable".
Some media routines and organizational factors affect the overstatement of uncertainty; other media routines and organizational factors help inflate the certainty of an issue. Because the general public (in the United States) generally trusts scientists, when science stories are covered without alarm-raising cues from special interest organizations (religious groups, environmental organizations, political factions, etc.) they are often covered in a business related sense, in an economic-development frame or a social progress frame. The nature of these frames is to downplay or eliminate uncertainty, so when economic and scientific promise are focused on early in the issue cycle, as has happened with coverage of plant biotechnology and nanotechnology in the United States, the matter in question seems more definitive and certain.
Sometimes, stockholders, owners, or advertising will pressure a media organization to promote the business aspects of a scientific issue, and therefore any uncertainty claims which may compromise the business interests are downplayed or eliminated.
Applications
Uncertainty is designed into games, most notably in gambling, where chance is central to play.
In scientific modelling, in which the prediction of future events should be understood to have a range of expected values
In computer science, and in particular data management, uncertain data is commonplace and can be modeled and stored within an uncertain database
In optimization, uncertainty permits one to describe situations where the user does not have full control on the outcome of the optimization procedure, see scenario optimization and stochastic optimization.
In weather forecasting, it is now commonplace to include data on the degree of uncertainty in a weather forecast.
Uncertainty or error is used in science and engineering notation. Numerical values should only have to be expressed in those digits that are physically meaningful, which are referred to as significant figures. Uncertainty is involved in every measurement, such as measuring a distance, a temperature, etc., the degree depending upon the instrument or technique used to make the measurement. Similarly, uncertainty is propagated through calculations so that the calculated value has some degree of uncertainty depending upon the uncertainties of the measured values and the equation used in the calculation.
In physics, the Heisenberg uncertainty principle forms the basis of modern quantum mechanics.
In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error.
In daily life, measurement uncertainty is often implicit ("He is 6 feet tall", give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary. The expected measurement uncertainty of many measuring instruments (scales, oscilloscopes, force gages, rulers, thermometers, etc.) is often stated in the manufacturers' specifications.
In engineering, uncertainty can be used in the context of validation and verification of material modeling.
Uncertainty has been a common theme in art, both as a thematic device (see, for example, the indecision of Hamlet), and as a quandary for the artist (such as Martin Creed's difficulty with deciding what artworks to make).
Uncertainty is an important factor in economics. According to economist Frank Knight, it is different from risk, where there is a specific probability assigned to each outcome (as when flipping a fair coin). Knightian uncertainty involves a situation that has unknown probabilities.
Investing in financial markets such as the stock market involves Knightian uncertainty when the probability of a rare but catastrophic event is unknown.
Philosophy
In Western philosophy the first philosopher to embrace uncertainty was Pyrrho resulting in the Hellenistic philosophies of Pyrrhonism and Academic Skepticism, the first schools of philosophical skepticism. Aporia and acatalepsy represent key concepts in ancient Greek philosophy regarding uncertainty.
William MacAskill, a philosopher at Oxford University, has also discussed the concept of Moral Uncertainty. Moral Uncertainty is "uncertainty about how to act given lack of certainty in any one moral theory, as well as the study of how we ought to act given this uncertainty."
Artificial intelligence
See also
Certainty
Dempster–Shafer theory
Further research is needed
Fuzzy set theory
Game theory
Information entropy
Interval finite element
Keynes' Treatise on Probability
Measurement uncertainty
Morphological analysis (problem-solving)
Propagation of uncertainty
Randomness
Schrödinger's cat
Scientific consensus
Statistical mechanics
Subjective logic
Uncertainty quantification
Uncertainty tolerance
Volatility, uncertainty, complexity and ambiguity
References
Further reading
"Treading Thin Air: Geoff Mann on Uncertainty and Climate Change", London Review of Books, vol. 45, no. 17 (7 September 2023), pp. 17–19. "[W]e are in desperate need of a politics that looks [the] catastrophic uncertainty [of global warming and climate change] square in the face. That would mean taking much bigger and more transformative steps: all but eliminating fossil fuels... and prioritizing democratic institutions over markets. The burden of this effort must fall almost entirely on the richest people and richest parts of the world, because it is they who continue to gamble with everyone else's fate." (p. 19.)
External links
Measurement Uncertainties in Science and Technology, Springer 2005
Proposal for a New Error Calculus
Estimation of Measurement Uncertainties — an Alternative to the ISO Guide
Bibliography of Papers Regarding Measurement Uncertainty
Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results
Strategic Engineering: Designing Systems and Products under Uncertainty (MIT Research Group)
Understanding Uncertainty site from Cambridge's Winton programme
Cognition
Concepts in epistemology
Doubt
Experimental physics
Measurement
Probability interpretations
Prospect theory
Economics of uncertainty | Uncertainty | [
"Physics",
"Mathematics"
] | 3,918 | [
"Physical quantities",
"Quantity",
"Probability interpretations",
"Measurement",
"Size",
"Experimental physics"
] |
63,944 | https://en.wikipedia.org/wiki/Key-agreement%20protocol | In cryptography, a key-agreement protocol is a protocol whereby two (or more) parties generate a cryptographic key as a function of information provided by each honest party so that no party can predetermine the resulting value.
In particular, all honest participants influence the outcome. A key-agreement protocol is a specialisation of a key-exchange protocol.
At the completion of the protocol, all parties share the same key. A key-agreement protocol precludes undesired third parties from forcing a key choice on the agreeing parties. A secure key agreement can ensure confidentiality and data integrity in communications systems, ranging from simple messaging applications to complex banking transactions.
Secure agreement is defined relative to a security model, for example the Universal Model. More generally, when evaluating protocols, it is important to state security goals and the security model. For example, it may be required for the session key to be authenticated. A protocol can be evaluated for success only in the context of its goals and attack model. An example of an adversarial model is the Dolev–Yao model.
In many key exchange systems, one party generates the key, and sends that key to the other party; the other party has no influence on the key.
Exponential key exchange
The first publicly known public-key agreement protocol that meets the above criteria was the Diffie–Hellman key exchange, in which two parties jointly exponentiate a generator with random numbers, in such a way that an eavesdropper cannot feasibly determine what the resultant shared key is.
Exponential key agreement in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.
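A toy sketch of such an exchange; the parameters are far too small for real use, and there is no authentication, as the paragraph above notes (real deployments use standardized 2048-bit or larger groups, or elliptic curves):

import secrets

# Public parameters: a textbook-sized prime and generator (illustrative only).
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # Alice sends g^a mod p
B = pow(g, b, p)                   # Bob sends g^b mod p

# Both sides derive the same key g^(ab) mod p; an eavesdropper sees only
# p, g, A and B, and each party's random exponent influences the result.
assert pow(B, a, p) == pow(A, b, p)
print("shared key:", pow(B, a, p))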
Symmetric key agreement
Symmetric key agreement (SKA) is a method of key agreement that uses solely symmetric cryptography and cryptographic hash functions as cryptographic primitives. It is related to symmetric authenticated key exchange.
SKA may assume the use of initial shared secrets, or of a trusted third party with whom the agreeing parties share a secret. If no third party is present, achieving SKA can be trivial: two parties that already share an initial secret have, tautologically, achieved SKA.
SKA contrasts with key-agreement protocols that include techniques from asymmetric cryptography, such as key encapsulation mechanisms.
The initial exchange of a shared key must be done in a manner that is private and integrity-assured. Historically, this was achieved by physical means, such as by using a trusted courier.
An example of a SKA protocol is the Needham–Schroeder protocol. It establishes a session key between two parties on the same network, using a server as a trusted third party.
The original Needham–Schroeder protocol is vulnerable to a replay attack. Timestamps and nonces are included to fix this attack. It forms the basis for the Kerberos protocol.
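A symbolic sketch of the original message flow described above, with the timestamp/nonce fix omitted; enc/dec are stand-ins for real authenticated symmetric encryption, and all names and values are illustrative:

import os

def enc(key, msg): return ("ct", key, msg)   # placeholder "encryption"
def dec(key, ct):
    _, k, msg = ct
    assert k == key, "wrong key"
    return msg

k_as, k_bs = os.urandom(16), os.urandom(16)  # long-term keys shared with server S
n_a = os.urandom(8)                          # Alice's nonce

# 1. A -> S : A, B, N_A
# 2. S -> A : {N_A, B, K_AB, {K_AB, A}_K_BS}_K_AS
k_ab = os.urandom(16)                        # session key chosen by S
ticket = enc(k_bs, (k_ab, "A"))
msg2 = enc(k_as, (n_a, "B", k_ab, ticket))

# 3. A -> B : {K_AB, A}_K_BS  (A checks her nonce, then forwards the ticket)
n_a_echo, _, k_ab_at_a, fwd = dec(k_as, msg2)
assert n_a_echo == n_a
k_ab_at_b, _ = dec(k_bs, fwd)                # B recovers the session key

# 4./5. B -> A : {N_B}_K_AB ; A -> B : {N_B - 1}_K_AB  (key confirmation)
assert k_ab_at_a == k_ab_at_b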
Types of key agreement
Boyd et al. classify two-party key agreement protocols according to two criteria as follows:
whether a pre-shared key already exists or not
the method of generating the session key.
The pre-shared key may be shared between the two parties, or each party may share a key with a trusted third party. If there is no secure channel (as may be established via a pre-shared key), it is impossible to create an authenticated session key.
The session key may be generated via key transport, key agreement, or a hybrid of the two. If there is no trusted third party, then the cases of key transport and hybrid session key generation are indistinguishable. SKA is concerned with protocols in which the session key is established using only symmetric primitives.
Authentication
Anonymous key exchange, like Diffie–Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks.
A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement to prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as the following:
public–private key pairs
shared secret keys
passwords
Public keys
A widely used mechanism for defeating such attacks is the use of digitally signed keys that must be integrity-assured: if Bob's key is signed by a trusted third party vouching for his identity, Alice can have considerable confidence that a signed key she receives is not an attempt to intercept by Eve. When Alice and Bob have a public-key infrastructure, they may digitally sign an agreed Diffie–Hellman key, or exchanged Diffie–Hellman public keys. Such signed keys, sometimes signed by a certificate authority, are one of the primary mechanisms used for secure web traffic (including HTTPS, SSL or TLS protocols). Other specific examples are MQV, YAK and the ISAKMP component of the IPsec protocol suite for securing Internet Protocol communications. However, these systems require care in endorsing the match between identity information and public keys by certificate authorities in order to work properly.
Hybrid systems
Hybrid systems use public-key cryptography to exchange secret keys, which are then used in symmetric-key cryptography systems. Most practical applications of cryptography use a combination of cryptographic functions to implement an overall system that provides all four desirable features of secure communications (confidentiality, integrity, authentication, and non-repudiation).
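A hedged sketch of such a hybrid system using the third-party Python "cryptography" package (assumed available): an X25519 key agreement supplies the secret, a KDF turns it into a symmetric key, and an AEAD cipher provides confidentiality and integrity. Ephemeral, unauthenticated keys are used for brevity; real systems add certificates or signatures for authentication:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Each side combines its private key with the peer's public key.
shared_a = alice_priv.exchange(bob_priv.public_key())
shared_b = bob_priv.exchange(alice_priv.public_key())
assert shared_a == shared_b

# Derive a symmetric key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"handshake demo").derive(shared_a)

# Use the derived key in a symmetric AEAD cipher.
nonce = os.urandom(12)
ct = ChaCha20Poly1305(key).encrypt(nonce, b"confidential message", None)
assert ChaCha20Poly1305(key).decrypt(nonce, ct, None) == b"confidential message"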
Passwords
Password-authenticated key agreement protocols require the separate establishment of a password (which may be smaller than a key) in a manner that is both private and integrity-assured. These are designed to resist man-in-the-middle and other active attacks on the password and the established keys. For example, DH-EKE, SPEKE, and SRP are password-authenticated variations of Diffie–Hellman.
Other tricks
If one has an integrity-assured way to verify a shared key over a public channel, one may engage in a Diffie–Hellman key exchange to derive a short-term shared key, and then subsequently authenticate that the keys match. One way is to use a voice-authenticated read-out of the key, as in PGPfone. Voice authentication, however, presumes that it is infeasible for a man-in-the-middle to spoof one participant's voice to the other in real-time, which may be an undesirable assumption. Such protocols may be designed to work with even a small public value, such as a password. Variations on this theme have been proposed for Bluetooth pairing protocols.
In an attempt to avoid using any additional out-of-band authentication factors, Davies and Price proposed the use of the interlock protocol of Ron Rivest and Adi Shamir, which has been subject to both attack and subsequent refinement.
See also
Key (cryptography)
Computer security
Cryptanalysis
Secure channel
Digital signature
Key encapsulation mechanism
Key management
Password-authenticated key agreement
Interlock protocol
Zero-knowledge password proof
Quantum key distribution
References
Cryptography | Key-agreement protocol | [
"Mathematics",
"Engineering"
] | 1,443 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
64,020 | https://en.wikipedia.org/wiki/Multiprocessing | Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined multiprocessor system similarly, but noted that the processors may share "some or all of the system’s memory and I/O facilities"; it also gave tightly coupled system as a synonymous term.
At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing however means true parallel execution of multiple processes using more than one processor. Multiprocessing doesn't necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense.
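A minimal sketch of this in Python, where each worker is a separate OS process that the scheduler may place on its own CPU or core, in contrast to time-sliced multitasking on a single processor:

from multiprocessing import Pool
import os

def square(n: int) -> int:
    return n * n          # trivially parallel work

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per available CPU by default
        print(pool.map(square, range(10)))
    print("CPUs available:", os.cpu_count())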
In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems.
Key topics
Processor symmetry
In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas user-mode code may be executed in any combination of processors. Multiprocessing systems are often easier to design if such restrictions are imposed, but they tend to be less efficient than systems in which all CPUs are utilized.
Systems that treat all CPUs equally are called symmetric multiprocessing (SMP) systems. In systems where all CPUs are not equal, system resources may be divided in a number of ways, including asymmetric multiprocessing (ASMP), non-uniform memory access (NUMA) multiprocessing, and clustered multiprocessing.
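A hedged, Linux-oriented illustration of imposing such asymmetry from user space, by restricting which CPUs a process may run on; these os functions are unavailable on some platforms:

import os

if hasattr(os, "sched_getaffinity"):
    print("eligible CPUs:", os.sched_getaffinity(0))   # 0 = current process
    os.sched_setaffinity(0, {0})                       # pin to CPU 0 only
    print("now restricted to:", os.sched_getaffinity(0))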
Master/slave multiprocessor system
In a master/slave multiprocessor system, the master CPU is in control of the computer and the slave CPU(s) performs assigned tasks. The CPUs can be completely different in terms of speed and architecture. Some (or all) of the CPUs can share a common bus, each can also have a private bus (for private resources), or they may be isolated except for a common communications pathway. Likewise, the CPUs can share common RAM and/or have private RAM that the other processor(s) cannot access. The roles of master and slave can change from one CPU to another.
Two early examples of a mainframe master/slave multiprocessor are the Bull Gamma 60 and the Burroughs B5000.
An early example of a master/slave multiprocessor system of microprocessors is the Tandy/Radio Shack TRS-80 Model 16 desktop computer which came out in February 1982 and ran the multi-user/multi-tasking Xenix operating system, Microsoft's version of UNIX (called TRS-XENIX). The Model 16 has two microprocessors: an 8-bit Zilog Z80 CPU running at 4 MHz, and a 16-bit Motorola 68000 CPU running at 6 MHz. When the system is booted, the Z-80 is the master and the Xenix boot process initializes the slave 68000, and then transfers control to the 68000, whereupon the CPUs change roles and the Z-80 becomes a slave processor responsible for all I/O operations including disk, communications, printer and network, as well as the keyboard and integrated monitor, while the operating system and applications run on the 68000 CPU. The Z-80 can be used to do other tasks.
The earlier TRS-80 Model II, which was released in 1979, could also be considered a multiprocessor system as it had both a Z-80 CPU and an Intel 8021 microcontroller in the keyboard. The 8021 made the Model II the first desktop computer system with a separate detachable lightweight keyboard connected by a single thin flexible wire, and likely the first keyboard to use a dedicated microcontroller, both attributes that would be copied years later by Apple and IBM.
Instruction and data streams
In multiprocessing, the processors can be used to execute a single sequence of instructions in multiple contexts (single instruction, multiple data or SIMD, often used in vector processing), multiple sequences of instructions in a single context (multiple instruction, single data or MISD, used for redundancy in fail-safe systems and sometimes applied to describe pipelined processors or hyper-threading), or multiple sequences of instructions in multiple contexts (multiple instruction, multiple data or MIMD).
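The SIMD case can be illustrated informally with array programming, where one operation is expressed once and applied across many data elements. The NumPy example below is an analogy chosen for this article, not part of Flynn's taxonomy itself; the array size and operation are arbitrary.

```python
# Illustrative analogy (assumed example): one operation applied to many
# data elements at once, in the spirit of SIMD, via NumPy vectorization.
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# Scalar (SISD-style) flavour: one element handled per loop step.
scalar_result = [x * 2.0 + 1.0 for x in data[:5]]

# SIMD flavour: the same operation expressed once over the whole array;
# NumPy dispatches it to vectorized (often hardware-SIMD) routines.
vector_result = data * 2.0 + 1.0

print(scalar_result, vector_result[:5])
```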
Processor coupling
Tightly coupled multiprocessor system
Tightly coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high end SMP system. Intel Xeon processors dominated the multiprocessor market for business PCs and were the only major x86 option until the release of AMD's Opteron range of processors in 2004. Both ranges of processors had their own onboard cache but provided access to shared memory; the Xeon processors via a common pipe and the Opteron processors via independent pathways to the system RAM.
Chip multiprocessing, also known as multi-core computing, involves more than one processor placed on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing. Mainframe systems with multiple processors are often tightly coupled.
Loosely coupled multiprocessor system
Loosely coupled multiprocessor systems (often referred to as clusters) are based on multiple standalone commodity computers with relatively low processor counts, interconnected via a high-speed communication system (Gigabit Ethernet is common). A Linux Beowulf cluster is an example of a loosely coupled system.
Tightly coupled systems perform better and are physically smaller than loosely coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster.
Power consumption is also a consideration. Tightly coupled systems tend to be much more energy-efficient than clusters. This is because a considerable reduction in power consumption can be realized by designing components to work together from the beginning in tightly coupled systems, whereas loosely coupled systems use components that were not necessarily intended specifically for use in such systems.
Loosely coupled systems have the ability to run different operating systems or OS versions on different systems.
Disadvantages
Merging data from multiple threads or processes may incur significant overhead due to conflict resolution, data consistency, versioning, and synchronization.
See also
Multiprocessor system architecture
Symmetric multiprocessing
Asymmetric multiprocessing
Multi-core processor
BMDFM – Binary Modular Dataflow Machine, a SMP MIMD runtime environment
Software lockout
OpenHMPP
References
Parallel computing
Classes of computers
Computing terminology | Multiprocessing | [
"Technology"
] | 1,767 | [
"Classes of computers",
"Computing terminology",
"Computers",
"Computer systems"
] |
64,045 | https://en.wikipedia.org/wiki/Chromosomal%20crossover | Chromosomal crossover, or crossing over, is the exchange of genetic material during sexual reproduction between two homologous chromosomes' non-sister chromatids that results in recombinant chromosomes. It is one of the final phases of genetic recombination, which occurs in the pachytene stage of prophase I of meiosis during a process called synapsis. Synapsis begins before the synaptonemal complex develops and is not completed until near the end of prophase I. Crossover usually occurs when matching regions on matching chromosomes break and then reconnect to the other chromosome.
Crossing over was described, in theory, by Thomas Hunt Morgan; the term crossover was coined by Morgan and Eleth Cattell. Morgan relied on the discovery of Frans Alfons Janssens who described the phenomenon in 1909 and had called it "chiasmatypie". The term chiasma is linked, if not identical, to chromosomal crossover. Morgan immediately saw the great importance of Janssens' cytological interpretation of chiasmata to the experimental results of his research on the heredity of Drosophila. The physical basis of crossing over was first demonstrated by Harriet Creighton and Barbara McClintock in 1931.
The frequency of crossing over between two linked gene loci (markers) is the crossing-over value. For a fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant, and the same is then true for the crossing-over value, which is used in the production of genetic maps.
When Hotta et al. in 1977 compared meiotic crossing-over (recombination) in lily and mouse they concluded that diverse eukaryotes share a common pattern. This finding suggested that chromosomal crossing over is a general characteristic of eukaryotic meiosis.
Origins
There are two popular and overlapping theories that explain the origins of crossing-over, coming from the different theories on the origin of meiosis. The first theory rests upon the idea that meiosis evolved as another method of DNA repair, and thus crossing-over is a novel way to replace possibly damaged sections of DNA. The second theory comes from the idea that meiosis evolved from bacterial transformation, with the function of propagating diversity.
In 1931, Barbara McClintock discovered a triploid maize plant. She made key findings regarding corn's karyotype, including the size and shape of the chromosomes. McClintock used the prophase and metaphase stages of mitosis to describe the morphology of corn's chromosomes, and later showed the first ever cytological demonstration of crossing over in meiosis. Working with student Harriet Creighton, McClintock also made significant contributions to the early understanding of codependency of linked genes.
DNA repair theory
Crossing over and DNA repair are very similar processes, which utilize many of the same protein complexes.
In her report, "The Significance of Responses of the Genome to Challenge", McClintock studied corn to show how corn's genome would change itself to overcome threats to its survival. She used 450 self-pollinated plants that received from each parent a chromosome with a ruptured end. She used modified patterns of gene expression on different sectors of leaves of her corn plants to show that transposable elements ("controlling elements") hide in the genome, and their mobility allows them to alter the action of genes at different loci. These elements can also restructure the genome, anywhere from a few nucleotides to whole segments of chromosome.
Recombinases and primases lay a foundation of nucleotides along the DNA sequence. One such particular protein complex that is conserved between processes is RAD51, a well conserved recombinase protein that has been shown to be crucial in DNA repair as well as crossover. Several other genes in D. melanogaster have also been linked to both processes, by showing that mutants at these specific loci cannot undergo DNA repair or crossing over. Such genes include mei-41, mei-9, hdm, and brca2. This large group of conserved genes between processes supports the theory of a close evolutionary relationship.
Furthermore, DNA repair and crossover have been found to favor similar regions on chromosomes. In an experiment using radiation hybrid mapping on wheat's (Triticum aestivum L.) 3B chromosome, crossing over and DNA repair were found to occur predominantly in the same regions. In addition, crossing over has been observed to occur in response to stressful, and likely DNA-damaging, conditions.
Links to bacterial transformation
The process of bacterial transformation also shares many similarities with chromosomal cross over, particularly in the formation of overhangs on the sides of the broken DNA strand, allowing for the annealing of a new strand. Bacterial transformation itself has been linked to DNA repair many times. The second theory comes from the idea that meiosis evolved from bacterial transformation, with the function of propagating genetic diversity. Thus, this evidence suggests that it is a question of whether cross over is linked to DNA repair or bacterial transformation, as the two do not appear to be mutually exclusive. It is likely that crossing over may have evolved from bacterial transformation, which in turn developed from DNA repair, thus explaining the links between all three processes.
Chemistry
Meiotic recombination may be initiated by double-stranded breaks that are introduced into the DNA by exposure to DNA damaging agents, or the Spo11 protein. One or more exonucleases then digest the 5' ends generated by the double-stranded breaks to produce 3' single-stranded DNA tails (see diagram). The meiosis-specific recombinase Dmc1 and the general recombinase Rad51 coat the single-stranded DNA to form nucleoprotein filaments. The recombinases catalyze invasion of the opposite chromatid by the single-stranded DNA from one end of the break. Next, the 3' end of the invading DNA primes DNA synthesis, causing displacement of the complementary strand, which subsequently anneals to the single-stranded DNA generated from the other end of the initial double-stranded break. The structure that results is a cross-strand exchange, also known as a Holliday junction. The contact between two chromatids that will soon undergo crossing-over is known as a chiasma. The Holliday junction is a tetrahedral structure which can be 'pulled' by other recombinases, moving it along the four-stranded structure.
MSH4 and MSH5
The MSH4 and MSH5 proteins form a hetero-oligomeric structure (heterodimer) in yeast and humans. In the yeast Saccharomyces cerevisiae, MSH4 and MSH5 act specifically to facilitate crossovers between homologous chromosomes during meiosis. The MSH4/MSH5 complex binds and stabilizes double Holliday junctions and promotes their resolution into crossover products. An MSH4 hypomorphic (partially functional) mutant of S. cerevisiae showed a 30% genome-wide reduction in crossover numbers and a large number of meioses with non-exchange chromosomes. Nevertheless, this mutant gave rise to spore viability patterns suggesting that segregation of non-exchange chromosomes occurred efficiently. Thus in S. cerevisiae proper segregation apparently does not entirely depend on crossovers between homologous pairs.
Chiasma
The grasshopper Melanoplus femur-rubrum was exposed to an acute dose of X-rays during each individual stage of meiosis, and chiasma frequency was measured. Irradiation during the leptotene-zygotene stages of meiosis (that is, prior to the pachytene period in which crossover recombination occurs) was found to increase subsequent chiasma frequency. Similarly, in the grasshopper Chorthippus brunneus, exposure to X-irradiation during the zygotene-early pachytene stages caused a significant increase in mean cell chiasma frequency. Chiasma frequency was scored at the later diplotene-diakinesis stages of meiosis. These results suggest that X-rays induce DNA damages that are repaired by a crossover pathway leading to chiasma formation.
Class I and class II crossovers
Double strand breaks (DSBs) are repaired by two pathways to generate crossovers in eukaryotes. The majority of them are repaired by the MutL homologs MLH1 and MLH3, which defines the class I crossovers. The remainder result from the class II pathway, which is regulated by the MUS81 endonuclease and the FANCM translocase. There are interconnections between these two pathways—class I crossovers can compensate for the loss of the class II pathway. In MUS81 knockout mice, class I crossovers are elevated, while total crossover counts at chiasmata are normal. However, the mechanisms underlying this crosstalk are not well understood. A recent study suggests that a scaffold protein called SLX4 may participate in this regulation. Specifically, SLX4 knockout mice largely phenocopy the MUS81 knockout—once again, class I crossovers are elevated while the chiasma count remains normal. In FANCM knockout mice, the class II pathway is hyperactivated, resulting in increased numbers of crossovers that are independent of the MLH1/MLH3 pathway.
Consequences
In most eukaryotes, a cell carries two versions of each gene, each referred to as an allele. Each parent passes on one allele to each offspring. An individual gamete inherits a complete haploid complement of alleles on chromosomes that are independently selected from each pair of chromatids lined up on the metaphase plate. Without recombination, all alleles for those genes linked together on the same chromosome would be inherited together. Meiotic recombination allows a more independent segregation between the two alleles that occupy the positions of single genes, as recombination shuffles the allele content between homologous chromosomes.
Recombination results in a new arrangement of maternal and paternal alleles on the same chromosome. Although the same genes appear in the same order, some alleles are different. In this way, it is theoretically possible to have any combination of parental alleles in an offspring, and the fact that two alleles appear together in one offspring does not have any influence on the statistical probability that another offspring will have the same combination. This principle of "independent assortment" of genes is fundamental to genetic inheritance.
However, the frequency of recombination is actually not the same for all gene combinations. This leads to the notion of "genetic distance", which is a measure of recombination frequency averaged over a (suitably large) sample of pedigrees. Loosely speaking, one may say that this is because recombination is greatly influenced by the proximity of one gene to another. If two genes are located close together on a chromosome, the likelihood that a recombination event will separate these two genes is less than if they were farther apart. Genetic linkage describes the tendency of genes to be inherited together as a result of their location on the same chromosome. Linkage disequilibrium describes a situation in which some combinations of genes or genetic markers occur more or less frequently in a population than would be expected from their distances apart. This concept is applied when searching for a gene that may cause a particular disease. This is done by comparing the occurrence of a specific DNA sequence with the appearance of a disease. When a high correlation between the two is found, it is likely that the gene responsible lies close to that sequence.
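As a hedged, worked illustration of the recombination-frequency idea above, the short Python sketch below converts invented offspring counts into a recombination frequency and an approximate map distance in centimorgans; all numbers are hypothetical.

```python
# Hypothetical worked example (numbers invented for illustration): estimating
# the recombination frequency between two linked markers and the corresponding
# map distance (1 cM ~ 1% recombinant offspring for small distances).
parental = 412       # offspring with parental allele combinations
recombinant = 88     # offspring with recombinant combinations

recomb_freq = recombinant / (parental + recombinant)
map_distance_cM = recomb_freq * 100

print(f"recombination frequency = {recomb_freq:.3f}")
print(f"approximate genetic distance = {map_distance_cM:.1f} cM")
```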
Non-homologous crossover
Crossovers typically occur between homologous regions of matching chromosomes, but similarities in sequence and other factors can result in mismatched alignments. Most DNA is composed of base pair sequences repeated very large numbers of times. These repetitious segments, often referred to as satellites, are fairly homogeneous among a species. During DNA replication, each strand of DNA is used as a template for the creation of new strands using a partially-conserved mechanism; proper functioning of this process results in two identical, paired chromosomes, often called sisters. Sister chromatid crossover events are known to occur at a rate of several crossover events per cell per division in eukaryotes. Most of these events involve an exchange of equal amounts of genetic information, but unequal exchanges may occur due to sequence mismatch. These are referred to by a variety of names, including non-homologous crossover, unequal crossover, and unbalanced recombination, and result in an insertion or deletion of genetic information into the chromosome. While rare compared to homologous crossover events, these mutations are drastic, affecting many loci at the same time. They are considered the main driver behind the generation of gene duplications and are a general source of mutation within the genome.
The specific causes of non-homologous crossover events are unknown, but several influential factors are known to increase the likelihood of an unequal crossover. One common vector leading to unbalanced recombination is the repair of double-strand breaks (DSBs). DSBs are often repaired using homology directed repair, a process which involves invasion of a template strand by the DSB strand (see figure below). Nearby homologous regions of the template strand are often used for repair, which can give rise to either insertions or deletions in the genome if a non-homologous but complementary part of the template strand is used. Sequence similarity is a major player in crossover – crossover events are more likely to occur in long regions of close identity on a gene. This means that any section of the genome with long sections of repetitive DNA is prone to crossover events.
The presence of transposable elements is another influential element of non-homologous crossover. Repetitive regions of code characterize transposable elements; complementary but non-homologous regions are ubiquitous within transposons. Because chromosomal regions composed of transposons have large quantities of identical, repetitious code in a condensed space, it is thought that transposon regions undergoing a crossover event are more prone to erroneous complementary match-up; that is to say, a section of a chromosome containing a lot of identical sequences, should it undergo a crossover event, is less certain to match up with a perfectly homologous section of complementary code and more prone to binding with a section of code on a slightly different part of the chromosome. This results in unbalanced recombination, as genetic information may be either inserted or deleted into the new chromosome, depending on where the recombination occurred.
While the motivating factors behind unequal recombination remain obscure, elements of the physical mechanism have been elucidated. Mismatch repair (MMR) proteins, for instance, are a well-known regulatory family of proteins, responsible for correcting mismatched sequences of DNA that arise during replication or escape proofreading. The operative goal of MMRs is the restoration of the parental genotype. One class of MMR in particular, MutSβ, is known to initiate the correction of insertion-deletion mismatches of up to 16 nucleotides. Little is known about the excision process in eukaryotes, but E. coli excisions involve the cleaving of a nick on either the 5' or 3' strand, after which DNA helicase and DNA polymerase III bind and generate single-stranded DNA segments, which are digested by exonucleases and attached to the strand by ligase. Multiple MMR pathways have been implicated in the maintenance of complex organism genome stability, and any of many possible malfunctions in the MMR pathway result in DNA editing and correction errors. Therefore, while it is not certain precisely what mechanisms lead to errors of non-homologous crossover, it is extremely likely that the MMR pathway is involved.
See also
Unequal crossing over
Coefficient of coincidence
Genetic distance
Independent assortment
Mitotic crossover
Recombinant frequency
References
Cellular processes
Modification of genetic information
Molecular genetics | Chromosomal crossover | [
"Chemistry",
"Biology"
] | 3,367 | [
"Modification of genetic information",
"Molecular genetics",
"Cellular processes",
"Molecular biology"
] |
64,204 | https://en.wikipedia.org/wiki/Kinetic%20theory%20of%20gases | The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. Its introduction allowed many principal concepts of thermodynamics to be established. It treats a gas as composed of numerous particles, too small to be seen with a microscope, in constant, random motion. These particles are now known to be the atoms or molecules of the gas. The kinetic theory of gases uses their collisions with each other and with the walls of their container to explain the relationship between the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity.
The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart.
Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations.
The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics.
History
Kinetic theory of matter
Antiquity
In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.
Modern era
"Heat is motion"
One of the first and boldest statements on the relationship between motion of particles and heat was by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ... is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In 1623, in The Assayer, Galileo Galilei, in turn, argued that heat, pressure, smell and other phenomena perceived by our senses are apparent properties only, caused by the movement of particles, which is a real phenomenon.
In 1665, in Micrographia, the English polymath Robert Hooke repeated Bacon's assertion, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle noted that a hammer's "impulse" is transformed into the motion of a nail's constituent particles, and that this type of motion is what heat consists of. Boyle also believed that all macroscopic properties, including color, taste and elasticity, are caused by and ultimately consist of nothing but the arrangement and motion of indivisible particles of matter. In a lecture of 1681, Hooke asserted a direct relationship between the temperature of an object and the speed of its internal particles. "Heat ... is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved." In a manuscript published 1720, the English philosopher John Locke made a very similar statement: "What in our sensation is heat, in the object is nothing but motion." Locke too talked about the motion of the internal particles of the object, which he referred to as its "insensible parts".
In his 1744 paper Meditations on the Cause of Heat and Cold, Russian polymath Mikhail Lomonosov made a relatable appeal to everyday experience to gain acceptance of the microscopic and kinetic nature of matter and heat. Lomonosov also insisted that movement of particles is necessary for the processes of dissolution, extraction and diffusion, providing as examples the dissolution and diffusion of salts by the action of water particles on the “molecules of salt”, the dissolution of metals in mercury, and the extraction of plant pigments by alcohol.
Also the transfer of heat was explained by the motion of particles. Around 1760, Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another."
Kinetic theory of gases
In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.
Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), who connected their research with the development of mechanical explanations of gravitation.
In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases."
In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann.
At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905) and Marian Smoluchowski's (1906) papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by David Enskog and Sydney Chapman in 1917 and 1916. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases.
Assumptions
The application of kinetic theory to ideal gases makes the following assumptions:
The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions.
The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit.
The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic.
Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory.
Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another.
Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible.
As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below).
More modern developments, such as the revised Enskog theory and the extended Bhatnagar–Gross–Krook model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces as well as quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results.
Equilibrium properties
Pressure and kinetic energy
In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface.
Consider a gas particle traveling at velocity v_x along the x-direction in an enclosed volume with characteristic length L, cross-sectional area A, and volume V. The gas particle encounters a boundary after a characteristic time
The momentum of the gas particle can then be described as
We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, such that
Now consider a large number N of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the average particle speed in every direction is identical
Further, assume that the volume is symmetrical about its three dimensions, such that
The total surface area on which the gas particles act is therefore
The pressure exerted by the collisions of the gas particles with the surface can then be found by adding the force contribution of every particle and dividing by the interior surface area of the volume,
The total translational kinetic energy of the gas is defined as
providing the result
This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property.
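Because the display equations of this derivation did not survive extraction, the following LaTeX sketch restates the standard elementary result being described, using conventional notation (N particles of mass m, mean-square speed \overline{v^2}, volume V); the intermediate steps follow the usual textbook treatment.

```latex
% Conventional notation; prefactors as in the standard elementary treatment.
P = \frac{N m \,\overline{v^{2}}}{3V},
\qquad
K_{\text{t}} = \tfrac{1}{2} N m \,\overline{v^{2}}
\quad\Longrightarrow\quad
P V = \tfrac{2}{3}\, K_{\text{t}} .
```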
Temperature and kinetic energy
Rewriting the above result for the pressure as , we may combine it with the ideal gas law
where is the Boltzmann constant and is the absolute temperature defined by the ideal gas law, to obtain
which leads to a simplified expression of the average translational kinetic energy per molecule,
The translational kinetic energy of the system is times that of a molecule, namely . The temperature, is related to the translational kinetic energy by the description above, resulting in
which becomes
Equation () is one important result of the kinetic theory:
The average molecular kinetic energy is proportional to the ideal gas law's absolute temperature.
From equations () and (), we have
Thus, the product of pressure and volume per mole is proportional to the average
translational molecular kinetic energy.
Equations () and () are called the "classical results", which could also be derived from statistical mechanics;
for more details, see:
The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3 comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom.
Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is
Thus, the energy added to the system per gas particle kinetic degree of freedom is
Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is
where is the Avogadro constant, and R is the ideal gas constant.
Thus, the ratio of the kinetic energy to the absolute temperature of an ideal monatomic gas can be calculated easily:
per mole: 12.47 J/K
per molecule: 20.7 yJ/K = 129 μeV/K
At standard temperature (273.15 K), the kinetic energy can also be obtained:
per mole: 3406 J
per molecule: 5.65 zJ = 35.2 meV.
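The per-mole and per-molecule figures quoted above follow directly from (3/2)R and (3/2)k_B; a minimal Python check using CODATA constants:

```python
# Numerical check of the figures quoted above for a monatomic ideal gas (D = 3).
k_B = 1.380649e-23          # J/K
N_A = 6.02214076e23         # 1/mol
R = k_B * N_A               # ~8.314 J/(mol K)
T0 = 273.15                 # K
eV = 1.602176634e-19        # J

print(f"per mole:     {1.5 * R:.2f} J/K")                                  # ~12.47 J/K
print(f"per molecule: {1.5 * k_B:.3e} J/K = {1.5 * k_B / eV * 1e6:.0f} ueV/K")  # ~129 ueV/K
print(f"per mole at 273.15 K:     {1.5 * R * T0:.0f} J")                   # ~3406 J
print(f"per molecule at 273.15 K: {1.5 * k_B * T0:.3e} J")                 # ~5.65 zJ (~35 meV)
```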
At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy. Quantum statistical mechanics is needed to accurately compute these contributions.
Collisions with container wall
For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation.
Assume that in the container, the number density (number per unit volume) is and that the particles obey Maxwell's velocity distribution:
Then for a small area on the container wall, a particle with speed at angle from the normal of the area , will collide with the area within time interval , if it is within the distance from the area . Therefore, all the particles with speed at angle from the normal that can reach area within time interval are contained in the tilted pipe with a height of and a volume of .
The total number of particles that reach area within time interval also depends on the velocity distribution; All in all, it calculates to be:
Integrating this over all appropriate velocities within the constraint , , yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time:
This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed of the Maxwell's velocity distribution, one has to integrate over , , .
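As an illustrative order-of-magnitude estimate (the gas and conditions are assumptions, not taken from the article), the impingement rate Φ = n·v̄/4 can be evaluated for nitrogen at 300 K and 1 atm:

```python
# Illustrative estimate: wall impingement rate Phi = n * v_mean / 4
# for nitrogen at 300 K and 1 atm (assumed conditions).
import math

k_B = 1.380649e-23                    # J/K
T, P = 300.0, 101325.0                # K, Pa
m = 28.0134e-3 / 6.02214076e23        # mass of one N2 molecule, kg

n = P / (k_B * T)                            # number density, 1/m^3
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))  # mean speed, m/s
phi = n * v_mean / 4                         # impingements per m^2 per second

print(f"n = {n:.3e} m^-3, v_mean = {v_mean:.0f} m/s, rate = {phi:.3e} m^-2 s^-1")
```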
The momentum transfer to the container wall from particles hitting the area with speed at angle from the normal, in time interval is:
Integrating this over all appropriate velocities within the constraint , , yields the pressure (consistent with Ideal gas law):
If this small area is punched to become a small hole, the effusive flow rate will be:
Combined with the ideal gas law, this yields
The above expression is consistent with Graham's law.
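A hedged numerical illustration of Graham's law, using the UF6 isotope pair relevant to the gaseous diffusion application mentioned earlier (molar masses are approximate):

```python
# Graham's law: effusion-rate ratio goes as the inverse square root of molar mass.
import math

M_235 = 235.04 + 6 * 18.998   # g/mol, approximate 235-UF6
M_238 = 238.05 + 6 * 18.998   # g/mol, approximate 238-UF6

ratio = math.sqrt(M_238 / M_235)     # rate(light) / rate(heavy)
print(f"rate(235UF6) / rate(238UF6) = {ratio:.5f}")   # ~1.0043 per stage
```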
To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with that hit the area within the time interval are contained in the tilted pipe with a height of and a volume of ; Therefore, compared to the Maxwell distribution, the velocity distribution will have an extra factor of :
with the constraint , , . The constant can be determined by the normalization condition to be , and overall:
Speed of molecules
From the kinetic energy formula it can be shown that
where v is in m/s, T is in kelvin, and m is the mass of one molecule of gas in kg. The most probable (or mode) speed is 81.6% of the root-mean-square speed , and the mean (arithmetic mean, or average) speed is 92.1% of the rms speed (isotropic distribution of speeds).
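The quoted 81.6% and 92.1% ratios follow from the Maxwell distribution, since v_p/v_rms = √(2/3) and v̄/v_rms = √(8/3π); a short Python check, with nitrogen at 300 K as an assumed example gas:

```python
# Check of the speed ratios quoted above, plus the three characteristic speeds
# for an example gas (assumed: N2 at 300 K); formulas are the standard Maxwell results.
import math

k_B = 1.380649e-23
T = 300.0
m = 28.0134e-3 / 6.02214076e23   # kg per N2 molecule

v_rms = math.sqrt(3 * k_B * T / m)
v_p   = math.sqrt(2 * k_B * T / m)               # most probable speed
v_avg = math.sqrt(8 * k_B * T / (math.pi * m))   # mean speed

print(f"v_rms = {v_rms:.0f} m/s")
print(f"v_p / v_rms = {v_p / v_rms:.3f}")    # ~0.816
print(f"v_avg / v_rms = {v_avg / v_rms:.3f}")  # ~0.921
```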
See:
Average,
Root-mean-square speed
Arithmetic mean
Mean
Mode (statistics)
Mean free path
In kinetic theory of gases, the mean free path is the average distance traveled by a molecule, or a number of molecules per volume, before they make their first collision. Let be the collision cross section of one molecule colliding with another. As in the previous section, the number density is defined as the number of molecules per (extensive) volume, or . The collision cross section per volume or collision cross section density is , and it is related to the mean free path by
Notice that the unit of the collision cross section per volume is reciprocal of length.
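As an illustrative estimate (the kinetic diameter is assumed), the sketch below evaluates the mean free path for an air-like gas at 300 K and 1 atm using the Maxwellian hard-sphere form λ = 1/(√2·n·σ); note that the simplest definition may omit the √2 relative-speed factor.

```python
# Illustrative mean free path estimate for an air-like gas (assumed diameter).
import math

k_B = 1.380649e-23
T, P = 300.0, 101325.0
d = 3.7e-10                      # assumed effective kinetic diameter, m

n = P / (k_B * T)                # number density, 1/m^3
sigma = math.pi * d ** 2         # collision cross section, m^2
lam = 1.0 / (math.sqrt(2) * n * sigma)

print(f"mean free path ~ {lam * 1e9:.0f} nm")   # on the order of tens of nm
```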
Transport properties
The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also very importantly with gases not in thermodynamic equilibrium. This means using Kinetic Theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion.
In its most basic form, Kinetic gas theory is only applicable to dilute gases. The extension of Kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983-1987 by E. G. D. Cohen, J. M. Kincaid and M. López de Haro, building on work by H. van Beijeren and M. H. Ernst.
Viscosity and kinetic momentum
In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component which increase uniformly with distance above the lower plate. The non-equilibrium flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions.
Inside a dilute gas in a Couette flow setup, let be the forward velocity of the gas at a horizontal flat layer (labeled as ); is along the horizontal direction. The number of molecules arriving at the area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at , where is the mean free path. Each molecule will contribute a forward momentum of
where the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the forward velocity gradient can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint , , yields the forward momentum transfer per unit time per unit area (also known as shear stress):
The net rate of momentum per unit area that is transported across the imaginary surface is thus
Combining the above kinetic equation with Newton's law of viscosity
gives the equation for shear viscosity, which is usually denoted when it is a dilute gas:
Combining this equation with the equation for mean free path gives
Maxwell-Boltzmann distribution gives the average (equilibrium) molecular speed as
where is the most probable speed. We note that
and insert the velocity in the viscosity equation above. This gives the well known equation (with subsequently estimated below) for shear viscosity for dilute gases:
and is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecule, and that the gas molecules are perfectly elastic, hard-core particles of spherical shape. This assumption of elastic, hard-core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by
The radius is called collision cross section radius or kinetic radius, and the diameter is called collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There is no simple general relation between the collision cross section and the hard core size of the (fairly spherical) molecule. The relation depends on the shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential, which have a negative part that attracts the other molecule from distances longer than the hard core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals.
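For a concrete number, the sketch below uses the Chapman–Enskog hard-sphere first approximation η = (5/16)·√(π m k_B T)/(π d²) rather than the elementary prefactor of the derivation above, with an assumed kinetic diameter for argon:

```python
# Hedged estimate of dilute-gas shear viscosity (Chapman-Enskog hard-sphere
# first approximation; kinetic diameter for argon is an assumption).
import math

k_B = 1.380649e-23
T = 300.0
m = 39.948e-3 / 6.02214076e23    # kg per Ar atom
d = 3.4e-10                      # assumed kinetic diameter, m

eta = (5.0 / 16.0) * math.sqrt(math.pi * m * k_B * T) / (math.pi * d ** 2)
print(f"eta(Ar, 300 K) ~ {eta * 1e6:.1f} uPa s")   # experiment is roughly 23 uPa s
```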
The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution, and can be written as
where is a term that tends to zero in the limit of infinite dilution that accounts for excluded volume, and is a term accounting for the transfer of momentum over a non-zero distance between particles during a collision.
Thermal conductivity and heat flux
Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas:
Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy which increases uniformly with distance above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions.
Let be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at a distance above and below the gas layer, and each will contribute a molecular kinetic energy of
where is the specific heat capacity. Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the temperature gradient can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint , , yields the energy transfer per unit time per unit area (also known as heat flux):
Note that the energy transfer from above is in the direction, and therefore the overall minus sign in the equation. The net heat flux across the imaginary surface is thus
Combining the above kinetic equation with Fourier's law
gives the equation for thermal conductivity, which is usually denoted when it is a dilute gas:
Similarly to viscosity, Revised Enskog Theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution, and which can be written as
where is a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and is a term accounting for the transfer of energy across a non-zero distance between particles during a collision.
Diffusion coefficient and diffusion flux
Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas:
Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density in the layer increases uniformly with distance above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at a distance above and below the gas layer, where the local number density is
Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the number density gradient can be considered to be constant over a distance of one mean free path.
Integrating over all appropriate velocities within the constraint , , yields the molecular transfer per unit time per unit area (also known as diffusion flux):
Note that the molecular transfer from above is in the direction, and therefore the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus
Combining the above kinetic equation with Fick's first law of diffusion
gives the equation for mass diffusivity, which is usually denoted when it is a dilute gas:
The corresponding expression obtained from Revised Enskog Theory may be written as
where is a factor that tends to unity in the limit of infinite dilution, which accounts for excluded volume and the variation of chemical potentials with density.
Detailed balance
Fluctuation and dissipation
The kinetic theory of gases entails that due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation:
where
is the mass diffusivity;
is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, ;
is the Boltzmann constant;
is the absolute temperature.
Note that the mobility can be calculated based on the viscosity of the gas; Therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas.
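An illustrative application (the particle radius and medium viscosity are assumptions): combining D = μ·k_B·T with the Stokes mobility μ = 1/(6πηr) of a small sphere gives the Stokes–Einstein estimate of the diffusivity of a Brownian particle.

```python
# Einstein-Smoluchowski relation D = mu * k_B * T, with mobility from Stokes drag.
# Particle radius and medium viscosity below are assumed, illustrative values.
import math

k_B = 1.380649e-23
T = 298.0          # K
eta = 1.0e-3       # Pa s, water-like viscosity (assumption)
r = 0.5e-6         # m, assumed particle radius

mu = 1.0 / (6 * math.pi * eta * r)   # mobility: drift velocity per unit force
D = mu * k_B * T                     # diffusivity, m^2/s

print(f"D ~ {D:.2e} m^2/s")          # ~4e-13 m^2/s for these assumptions
```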
Onsager reciprocal relations
The mathematical similarities between the expressions for shear viscosity, thermal conductivity and the diffusion coefficient of the ideal (dilute) gas are not a coincidence; they are a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to a temperature gradient, and heat flow due to a pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to a pressure gradient) of the ideal (dilute) gas.
See also
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Boltzmann equation
Chapman–Enskog theory
Collision theory
Critical temperature
Gas laws
Heat
Interatomic potential
Magnetohydrodynamics
Maxwell–Boltzmann distribution
Mixmaster universe
Thermodynamics
Vicsek model
Vlasov equation
References
Citations
Sources cited
de Groot, S. R., W. A. van Leeuwen and Ch. G. van Weert (1980), Relativistic Kinetic Theory, North-Holland, Amsterdam.
Liboff, R. L. (1990), Kinetic Theory, Prentice-Hall, Englewood Cliffs, N. J.
(reprinted in his Papers, 3, 167, 183.)
Further reading
Sydney Chapman and Thomas George Cowling (1939/1970), The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, (first edition 1939, second edition 1952), third edition 1970 prepared in co-operation with D. Burnett, Cambridge University Press, London
Joseph Oakland Hirschfelder, Charles Francis Curtiss, and Robert Byron Bird (1964), Molecular Theory of Gases and Liquids, revised edition (Wiley-Interscience), ISBN 978-0471400653
Richard Lawrence Liboff (2003), Kinetic Theory: Classical, Quantum, and Relativistic Descriptions, third edition (Springer), ISBN 978-0-387-21775-8
Behnam Rahimi and Henning Struchtrup (2016), "Macroscopic and kinetic modelling of rarefied polyatomic gases", Journal of Fluid Mechanics, 806, 437–505, DOI 10.1017/jfm.2016.604
External links
Early Theories of Gases
Thermodynamics - a chapter from an online textbook
Temperature and Pressure of an Ideal Gas: The Equation of State on Project PHYSNET.
Introduction to the kinetic molecular theory of gases, from The Upper Canada District School Board
Java animation illustrating the kinetic theory from University of Arkansas
Flowchart linking together kinetic theory concepts, from HyperPhysics
Interactive Java Applets allowing high school students to experiment and discover how various factors affect rates of chemical reactions.
https://www.youtube.com/watch?v=47bF13o8pb8&list=UUXrJjdDeqLgGjJbP1sMnH8A A demonstration apparatus for the thermal agitation in gases.
Gases
Thermodynamics
Classical mechanics | Kinetic theory of gases | [
"Physics",
"Chemistry",
"Mathematics"
] | 6,052 | [
"Matter",
"Phases of matter",
"Classical mechanics",
"Mechanics",
"Thermodynamics",
"Statistical mechanics",
"Gases",
"Dynamical systems"
] |
64,219 | https://en.wikipedia.org/wiki/Bernoulli%27s%20principle | Bernoulli's principle is a key concept in fluid dynamics that relates pressure, density, speed and height. Bernoulli's principle states that an increase in the speed of a parcel of fluid occurs simultaneously with a decrease in either the pressure or the height above a datum. The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form.
Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid—implying an increase in its kinetic energy—occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ) is the same everywhere.
Bernoulli's principle can also be derived directly from Isaac Newton's second Law of Motion. When fluid is flowing horizontally from a region of high pressure to a region of low pressure, there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.
Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest.
Bernoulli's principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected. However, the principle can be applied to various types of flow within these bounds, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers.
Incompressible flow equation
In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible, and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow.
A common form of Bernoulli's equation is:
where:
is the fluid flow speed at a point,
is the acceleration due to gravity,
is the elevation of the point above a reference plane, with the positive -direction pointing upward—so in the direction opposite to the gravitational acceleration,
is the static pressure at the chosen point, and
is the density of the fluid at all points in the fluid.
Bernoulli's equation and the Bernoulli constant are applicable throughout any region of flow where the energy per unit mass is uniform. Because the energy per unit mass of liquid in a well-mixed reservoir is uniform throughout, Bernoulli's equation can be used to analyze the fluid flow everywhere in that reservoir (including pipes or flow fields that the reservoir feeds) except where viscous forces dominate and erode the energy per unit mass.
The following assumptions must be met for this Bernoulli equation to apply:
the flow must be steady, that is, the flow parameters (velocity, density, etc.) at any point cannot change with time,
the flow must be incompressible—even though pressure varies, the density must remain constant along a streamline;
friction by viscous forces must be negligible.
For conservative force fields (not limited to the gravitational field), Bernoulli's equation can be generalized as:
where is the force potential at the point considered. For example, for the Earth's gravity .
By multiplying with the fluid density , equation () can be rewritten as:
or:
where
is dynamic pressure,
is the piezometric head or hydraulic head (the sum of the elevation and the pressure head) and
is the stagnation pressure (the sum of the static pressure and dynamic pressure ).
The constant in the Bernoulli equation can be normalized. A common approach is in terms of total head or energy head :
The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids—when the pressure becomes too low—cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid.
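A minimal worked sketch of the incompressible equation, with invented numbers for water flowing through a constriction (the downstream speed follows from continuity for a halved cross-section):

```python
# Incompressible Bernoulli equation along a streamline:
# p + 1/2 rho v^2 + rho g z = const.  All numbers are invented for illustration.
rho = 1000.0      # kg/m^3, water
g = 9.81          # m/s^2

p1, v1, z1 = 150e3, 2.0, 0.0      # upstream: Pa, m/s, m
v2, z2 = 4.0, 0.0                 # downstream speed from continuity (area halved)

p2 = p1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (z1 - z2)
print(f"p2 = {p2 / 1e3:.1f} kPa")   # lower pressure where the flow is faster
```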
Simplified form
In many applications of Bernoulli's equation, the change in the ρgz term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z is so small the ρgz term can be omitted. This allows the above equation to be presented in the following simplified form:
p + q = p₀
where p₀ is called total pressure, and q is dynamic pressure. Many authors refer to the pressure p as static pressure to distinguish it from total pressure p₀ and dynamic pressure q. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."
The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:
static pressure + dynamic pressure = total pressure.
Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p₀. The significance of Bernoulli's principle can now be summarized as "total pressure is constant in any region free of viscous forces". If the fluid flow is brought to rest at some point, this point is called a stagnation point, and at this point the static pressure is equal to the stagnation pressure.
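A small illustrative calculation of the simplified relation p + q = p₀; the air density, static pressure and airspeed below are example values:

```python
def dynamic_pressure(rho, v):
    """q = rho * v^2 / 2."""
    return 0.5 * rho * v**2

def total_pressure(p_static, rho, v):
    """p0 = p + q, valid when changes in elevation are negligible."""
    return p_static + dynamic_pressure(rho, v)

rho_air = 1.225        # sea-level air density, kg/m^3
p_static = 101_325.0   # standard atmospheric pressure, Pa
v = 50.0               # airspeed, m/s (illustrative)

q = dynamic_pressure(rho_air, v)            # ~1531 Pa
p0 = total_pressure(p_static, rho_air, v)
print(f"q = {q:.0f} Pa, p0 = {p0:.0f} Pa")
```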
If the fluid flow is irrotational, the total pressure is uniform and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow". It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight and ships moving in open bodies of water. However, Bernoulli's principle importantly does not apply in the boundary layer such as in flow through long pipes.
Unsteady potential flow
The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. In that case, and for a constant density ρ, the momentum equations of the Euler equations can be integrated to:
∂φ/∂t + v²/2 + p/ρ + gz = f(t),
which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ∂φ/∂t denotes the partial derivative of the velocity potential φ with respect to time t, and v = |∇φ| is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment applies in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case ∂φ/∂t and f(t) are constants so the equation can be applied in every point of the fluid domain. Further f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation:
Φ = φ − ∫₀ᵗ f(τ) dτ,
resulting in:
∂Φ/∂t + v²/2 + p/ρ + gz = 0.
Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ.
The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian mechanics.
Compressible flow equation
Bernoulli developed his principle from observations on liquids, and Bernoulli's equation is valid for ideal fluids: those that are incompressible, irrotational, inviscid, and subjected to conservative forces. It is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation—in its incompressible flow form—cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough.
It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics.
Compressible flow in fluid dynamics
For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces,
v²/2 + ∫ dp/ρ(p) + Ψ = constant (along a streamline)
where:
p is the pressure
ρ(p) is the density, written as a function of pressure
v is the flow speed
Ψ is the potential associated with the conservative force field, often the gravitational potential
In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation for an ideal gas becomes:
v²/2 + gz + (γ/(γ − 1)) p/ρ = constant (along a streamline)
where, in addition to the terms listed above:
γ is the ratio of the specific heats of the fluid
g is the acceleration due to gravity
z is the elevation of the point above a reference plane
In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the gz term can be omitted. A very useful form of the equation is then:
v²/2 + (γ/(γ − 1)) p/ρ = (γ/(γ − 1)) p₀/ρ₀
where:
p₀ is the total pressure
ρ₀ is the total density
Compressible flow in thermodynamics
The most general form of the equation, suitable for use in thermodynamics in case of (quasi) steady flow, is:
v²/2 + Ψ + w = constant.
Here w is the enthalpy per unit mass (also known as specific enthalpy), which is also often written as h (not to be confused with "head" or "height").
Note that
w = e + p/ρ
where e is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy the equation reduces to the incompressible-flow form.
The constant on the right-hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below).
When the change in Ψ can be ignored, a very useful form of this equation is:
v²/2 + w = w₀
where w₀ is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature.
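As an illustrative sketch, for a calorically perfect gas with w = c_p T the relation v²/2 + w = w₀ gives T₀ = T + v²/(2 c_p); the gas properties below are assumed values for air:

```python
import math

CP_AIR = 1005.0   # specific heat at constant pressure for air, J/(kg*K) (assumed)
GAMMA = 1.4       # ratio of specific heats for air (assumed)
R_AIR = 287.05    # specific gas constant for air, J/(kg*K) (assumed)

def stagnation_temperature(T, v, cp=CP_AIR):
    """T0 = T + v^2 / (2 cp), from v^2/2 + cp*T = cp*T0."""
    return T + v**2 / (2.0 * cp)

T = 288.15   # static temperature, K
v = 200.0    # flow speed, m/s
T0 = stagnation_temperature(T, v)
mach = v / math.sqrt(GAMMA * R_AIR * T)
print(f"T0 = {T0:.1f} K at Mach {mach:.2f}")
```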
When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy.
Unsteady potential flow
For a compressible fluid, with a barotropic equation of state, the unsteady momentum conservation equation
With the irrotational assumption, namely, the flow velocity can be described as the gradient of a velocity potential . The unsteady momentum conservation equation becomes
which leads to
In this case, the above equation for isentropic flow becomes:
Derivations
Applications
In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid, and a small viscosity often has a large effect on the flow.
Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations, which were established by Bernoulli over a century before the first man-made wings were used for the purpose of flight.
The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the venturi can be explained by Bernoulli's principle: in the narrow throat, the air is moving at its fastest speed and therefore it is at its lowest pressure. A carburetor may or may not rely on this static-pressure difference to force the fuel to flow; a basic carburetor uses the difference in pressure between the throat and the local air pressure in the float bowl.
An injector on a steam locomotive or a static boiler.
The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure.
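A minimal sketch of how indicated airspeed follows from the measured dynamic pressure q = p₀ − p, inverting q = ρv²/2 with the standard sea-level density that airspeed indicators assume; the pressure readings are illustrative:

```python
import math

RHO_SL = 1.225  # standard sea-level air density, kg/m^3 (assumed calibration value)

def indicated_airspeed(p_total, p_static, rho=RHO_SL):
    """Invert q = rho * v^2 / 2 for v, with q measured as p_total - p_static."""
    q = p_total - p_static
    return math.sqrt(2.0 * q / rho)

# Illustrative pitot-static readings in pascals.
print(indicated_airspeed(p_total=102_856.0, p_static=101_325.0))  # ~50 m/s
```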
A De Laval nozzle utilizes Bernoulli's principle to create a force by turning pressure energy generated by the combustion of propellants into velocity. This then generates thrust by way of Newton's third law of motion.
The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect.
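A sketch combining the continuity equation with the incompressible Bernoulli equation for a horizontal Venturi meter; the pipe dimensions and pressure drop are invented for illustration:

```python
import math

def venturi_throat_speed(delta_p, rho, d_pipe, d_throat):
    """
    Horizontal Venturi: continuity v1*A1 = v2*A2 and Bernoulli
    p1 + rho*v1^2/2 = p2 + rho*v2^2/2 give
    v2 = sqrt(2*delta_p / (rho * (1 - (A2/A1)^2))).
    """
    area_ratio = (d_throat / d_pipe) ** 2      # A2 / A1 for circular sections
    return math.sqrt(2.0 * delta_p / (rho * (1.0 - area_ratio**2)))

rho_water = 1000.0     # kg/m^3
v_throat = venturi_throat_speed(delta_p=8000.0, rho=rho_water,
                                d_pipe=0.10, d_throat=0.05)
print(f"throat speed ~ {v_throat:.2f} m/s")
```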
The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, which is compatible with Bernoulli's principle. Increased viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice.
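A sketch of Torricelli's law, obtained from Bernoulli's equation between the free surface and the hole; the tank head and the discharge coefficient are example values:

```python
import math

G = 9.81  # m/s^2

def torricelli_speed(h):
    """Ideal efflux speed v = sqrt(2 g h) for fluid depth h above the hole."""
    return math.sqrt(2.0 * G * h)

def drain_rate(h, hole_area, cd=0.62):
    """Volumetric flow Q = Cd * A * sqrt(2 g h); Cd is an assumed discharge coefficient."""
    return cd * hole_area * torricelli_speed(h)

print(torricelli_speed(2.0))              # ~6.26 m/s for a 2 m head
print(drain_rate(2.0, hole_area=1e-4))    # m^3/s through a 1 cm^2 hole
```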
The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper.
During a cricket match, bowlers continually polish one side of the ball. After some time, one side is quite rough and the other is still smooth. Hence, when the ball is bowled and passes through air, the speed on one side of the ball is faster than on the other, and this results in a pressure difference between the sides; this leads to the ball rotating ("swinging") while travelling through the air, giving advantage to the bowlers.
Misconceptions
Airfoil lift
One of the most common erroneous explanations of aerodynamic lift asserts that the air must traverse the upper and lower surfaces of a wing in the same amount of time, implying that since the upper surface presents a longer path the air must be moving over the top of the wing faster than over the bottom. Bernoulli's principle is then cited to conclude that the pressure on top of the wing must be lower than on the bottom.
Equal transit time applies to the flow around a body generating no lift, but there is no physical principle that requires equal transit time in cases of bodies generating lift. In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false. While the equal-time explanation is false, it is not the Bernoulli principle that is false, because this principle is well established; Bernoulli's equation is used correctly in common mathematical treatments of aerodynamic lift.
Common classroom demonstrations
There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure".
One problem with this explanation can be seen by blowing along the bottom of the paper: if the deflection was caused by faster moving air, then the paper should deflect downward; but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field.
As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields.
A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed; in other words, as the air passes over the paper, it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration.
Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure".
See also
Torricelli's law
Coandă effect
Euler equations – for the flow of an inviscid fluid
Hydraulics – applied fluid mechanics for liquids
Navier–Stokes equations – for the flow of a viscous fluid
Teapot effect
Terminology in fluid dynamics
Notes
References
External links
The Flow of Dry Water - The Feynman Lectures on Physics
Science 101 Q: Is It Really Caused by the Bernoulli Effect?
Bernoulli equation calculator
Millersville University – Applications of Euler's equation
NASA – Beginner's guide to aerodynamics
Misinterpretations of Bernoulli's equation – Weltner and Ingelman-Sundberg
Fluid dynamics
Eponymous laws of physics
Equations of fluid dynamics
1738 in science | Bernoulli's principle | [
"Physics",
"Chemistry",
"Engineering"
] | 4,616 | [
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
64,333 | https://en.wikipedia.org/wiki/Reed%27s%20law | Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.
The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either
the number of participants, N, or
the number of possible pair connections, N(N − 1)/2 (which follows Metcalfe's law).
so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system.
Derivation
Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing for each element of A one of two possibilities: whether to include that element, or not.
However, this includes the (one) empty set, and N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which is exponential, like 2^N.
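A small illustrative sketch that counts the proper subgroups directly and compares the result with the closed form 2^N − N − 1:

```python
from itertools import combinations

def proper_subgroup_count(n):
    """Count subsets of an n-person set with at least two members."""
    members = range(n)
    return sum(1 for k in range(2, n + 1) for _ in combinations(members, k))

for n in (3, 5, 10):
    closed_form = 2**n - n - 1
    assert proper_subgroup_count(n) == closed_form
    print(n, closed_form)   # 3 -> 4, 5 -> 26, 10 -> 1013
```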
Quote
From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–4):
"[E]ven Metcalfe's law understates the value created by a group-forming network [GFN] as it grows. Let's say you have a GFN with n members. If you add up all the potential two-person groups, three-person groups, and so on that those members could form, the number of possible groups equals 2n. So the value of a GFN increases exponentially, in proportion to 2n. I call that Reed's Law. And its implications are profound."
Business implications
Reed's Law is often mentioned when explaining the competitive dynamics of internet platforms. Because the law states that a network becomes more valuable when people can easily form subgroups to collaborate, and that this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system.
Criticism
Other analysts of network value functions, including Andrew Odlyzko, have argued that both Reed's Law and Metcalfe's Law overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
See also
Andrew Odlyzko's "Content is Not King"
Beckstrom's law
Coase's penguin
List of eponymous laws
Metcalfe's law
Six Degrees of Kevin Bacon
Sarnoff's law
Social capital
References
External links
That Sneaky Exponential—Beyond Metcalfe's Law to the Power of Community Building
Weapon of Math Destruction: A simple formula explains why the Internet is wreaking havoc on business models.
KK-law for Group Forming Services, XVth International Symposium on Services and Local Access, Edinburgh, March 2004, presents an alternative way to model the effect of social networks.
Computer architecture statements
Eponymous laws of economics
Information theory
Network theory | Reed's law | [
"Mathematics",
"Technology",
"Engineering"
] | 720 | [
"Telecommunications engineering",
"Applied mathematics",
"Graph theory",
"Network theory",
"Computer science",
"Information theory",
"Mathematical relations"
] |
64,474 | https://en.wikipedia.org/wiki/Concatenation | In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end. For example, the concatenation of "snow" and "ball" is "snowball". In certain formalizations of concatenation theory, also called string theory, string concatenation is a primitive notion.
Syntax
In many programming languages, string concatenation is a binary infix operator, and in some it is written without an operator. This is implemented in different ways:
Overloading the plus sign +. Example from C#: "Hello, " + "World" has the value "Hello, World".
Dedicated operator, such as . in PHP, & in Visual Basic and || in SQL. This has the advantage over reusing + that it allows implicit type conversion to string.
string literal concatenation, which means that adjacent strings are concatenated without any operator. Example from C: "Hello, " "World" has the value "Hello, World".
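For comparison, a short sketch showing both styles in Python, which supports the overloaded + operator as well as juxtaposition of adjacent string literals:

```python
# Operator-based concatenation, evaluated at run time for variables.
greeting = "Hello, " + "World"

# String literal concatenation: adjacent literals are joined by the parser.
also_greeting = "Hello, " "World"

assert greeting == also_greeting == "Hello, World"
```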
Implementation
In programming, string concatenation generally occurs at run time, as string values are typically not known until run time. However, in the case of string literals, the values are known at compile time, and thus string concatenation can be done at compile time, either via string literal concatenation or via constant folding, a potential run-time optimization.
Concatenation of sets of strings
In formal language theory and pattern matching (including regular expressions), the concatenation operation on strings is generalised to an operation on sets of strings as follows:
For two sets of strings S1 and S2, the concatenation S1S2 consists of all strings of the form vw where v is a string from S1 and w is a string from S2, or formally S1S2 = { vw : v ∈ S1, w ∈ S2 }. Many authors also use concatenation of a string set and a single string, and vice versa, which are defined similarly by S1w = { vw : v ∈ S1 } and vS2 = { vw : w ∈ S2 }. In these definitions, the string vw is the ordinary concatenation of strings v and w as defined in the introductory section.
For example, if F = {a, b, c, d, e, f, g, h} and R = {1, 2, 3, 4, 5, 6, 7, 8}, then FR denotes the set of all chess board coordinates in algebraic notation, while eR denotes the set of all coordinates of the kings' file.
In this context, sets of strings are often referred to as formal languages. The concatenation operator is usually expressed as simple juxtaposition (as with multiplication).
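A brief sketch of the set-concatenation operation applied to the chess-coordinate example above, using Python sets:

```python
def concat_sets(s1, s2):
    """S1S2 = { vw : v in S1, w in S2 }."""
    return {v + w for v in s1 for w in s2}

F = set("abcdefgh")                 # files
R = {str(n) for n in range(1, 9)}   # ranks

squares = concat_sets(F, R)         # all 64 algebraic coordinates
kings_file = concat_sets({"e"}, R)  # {'e1', ..., 'e8'}

assert len(squares) == 64 and "e4" in squares
assert kings_file == {f"e{n}" for n in range(1, 9)}
```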
Algebraic properties
The strings over an alphabet, with the concatenation operation, form an associative algebraic structure with identity element the null string—a free monoid.
Sets of strings with concatenation and alternation form a semiring, with concatenation (*) distributing over alternation (+); 0 is the empty set and 1 the set consisting of just the null string.
Applications
Audio and telephony
In programming for telephony, concatenation is used to provide dynamic audio feedback to a user. For example, in a "time of day" speaking clock, concatenation is used to give the correct time by playing the appropriate recordings concatenated together. For example: "at the tone, the time will be", "eight", "thirty", "five", "and", "twenty", "five", "seconds".
The recordings themselves exist separately, but playing them one after the other provides a grammatically correct sentence to the listener.
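An illustrative sketch of how such an application might assemble the announcement by concatenating separately stored phrases; the phrase list mirrors the example above:

```python
def build_announcement(phrases):
    """Concatenate separately recorded phrases into one grammatical sentence."""
    return " ".join(phrases)

# Phrase names mirror the speaking-clock example in the text.
recordings = ["at the tone, the time will be",
              "eight", "thirty", "five",
              "and", "twenty", "five", "seconds"]
print(build_announcement(recordings))
```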
This technique is also used in number change announcements, voice mail systems, or most telephony applications that provide dynamic feedback to the caller (e.g. moviefone, tellme, and others).
Programming for any kind of computerised public address system can also employ concatenation for dynamic public announcements (for example, flights in an airport). The system would archive recorded speech of numbers, routes or airlines, destinations, times, etc. and play them back in a specific sequence to produce a grammatically correct sentence that is announced throughout the facility.
Database theory
One of the principles of relational database design is that the fields of data tables should reflect a single characteristic of the table's subject, which means that they should not contain concatenated strings. When concatenation is desired in a report, it should be provided at the time of running the report. For example, to display the physical address of a certain customer, the data might include building number, street name, building sub-unit number, city name, state/province name, postal code, and country name, e.g., "123 Fake St Apt 4, Boulder, CO 80302, USA", which combines seven fields. However, the customers data table should not use one field to store that concatenated string; rather, the concatenation of the seven fields should happen upon running the report. The reason for such principles is that without them, the entry and updating of large volumes of data becomes error-prone and labor-intensive. Separately entering the city, state, ZIP code, and nation allows data-entry validation (such as detecting an invalid state abbreviation). Then those separate items can be used for sorting or indexing the records, such as all with "Boulder" as the city name.
Recreational mathematics
In recreational mathematics, many problems concern the properties of numbers under concatenation of their numerals in some base. Examples include home primes (primes obtained by repeatedly factoring the increasing concatenation of prime factors of a given number), Smarandache–Wellin numbers (the concatenations of the first prime numbers), and the Champernowne and Copeland–Erdős constants (the real numbers formed by the decimal representations of the positive integers and the prime numbers, respectively).
See also
Rope (data structure)
References
Citations
Sources
Formal languages
Operators (programming)
String (computer science) | Concatenation | [
"Mathematics",
"Technology"
] | 1,200 | [
"Sequences and series",
"String (computer science)",
"Mathematical structures",
"Formal languages",
"Mathematical logic",
"Computer science"
] |
64,506 | https://en.wikipedia.org/wiki/Fast%20Ethernet | In computer networking, Fast Ethernet physical layers carry traffic at the nominal rate of . The prior Ethernet speed was . Of the Fast Ethernet physical layers, 100BASE-TX is by far the most common.
Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards.
Nomenclature
The 100 in the media type designation refers to the transmission speed of 100 Mbit/s, while the BASE refers to baseband signaling. The letter following the dash (T or F) refers to the physical medium that carries the signal (twisted pair or fiber, respectively), while the last character (X, 4, etc.) refers to the line code method used. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants.
General design
Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on twisted pair or optical fiber cable in a star wired bus topology, similar to the IEEE standard 802.3i called 10BASE-T, itself an evolution of 10BASE5 (802.3) and 10BASE2 (802.3a). Fast Ethernet devices are generally backward compatible with existing 10BASE-T systems, enabling plug-and-play upgrades from 10BASE-T. Most switches and other networking devices with ports capable of Fast Ethernet can perform autonegotiation, sensing a piece of 10BASE-T equipment and setting the port to 10BASE-T half duplex if the 10BASE-T equipment cannot perform autonegotiation itself. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified and in practice, all modern networks use Ethernet switches and operate in full-duplex mode, even as legacy devices that use half duplex still exist.
A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC is typically linked to the PHY by a four-bit 25 MHz synchronous parallel interface known as a media-independent interface (MII), or by a two-bit 50 MHz variant called reduced media independent interface (RMII). In rare cases, the MII may be an external connection but is usually a connection between ICs in a network adapter or even two sections within a single IC. The specs are written based on the assumption that the interface between MAC and PHY will be an MII but they do not require it. Fast Ethernet or Ethernet hubs may use the MII to connect to multiple PHYs for their different interfaces.
The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to 100 Mbit/s. The information rate actually observed on real networks is less than the theoretical maximum, due to the necessary header and trailer (addressing and error-detection bits) on every Ethernet frame, and the required interpacket gap between transmissions.
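A small sketch estimating the usable information rate once the standard per-frame overheads are included; the overhead byte counts are the usual Ethernet values and the payload sizes are examples:

```python
def effective_throughput_mbps(payload_bytes, line_rate_mbps=100.0):
    """Payload information rate after per-frame Ethernet overhead."""
    preamble_sfd = 8        # preamble + start-of-frame delimiter, bytes
    header_fcs = 14 + 4     # MAC header + frame check sequence, bytes
    interpacket_gap = 12    # minimum gap, byte times
    total = payload_bytes + preamble_sfd + header_fcs + interpacket_gap
    return line_rate_mbps * payload_bytes / total

print(f"{effective_throughput_mbps(1500):.1f} Mbit/s")  # ~97.5 for full-size frames
print(f"{effective_throughput_mbps(46):.1f} Mbit/s")    # ~54.8 for minimum frames
```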
Copper
100BASE-T is any of several Fast Ethernet standards for twisted pair cables, including: 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable), 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct), 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct). The segment length for a 100BASE-T cable is limited to 100 metres (the same limit as 10BASE-T and gigabit Ethernet). All are or were standards under IEEE 802.3 (approved 1995). Almost all 100BASE-T installations are 100BASE-TX.
100BASE-TX
100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of wire inside a Category 5 or above cable. Cable distance between nodes can be up to 100 metres. One pair is used for each direction, providing full-duplex operation at 100 Mbit/s in each direction.
Like 10BASE-T, the active pairs in a standard connection are terminated on pins 1, 2, 3 and 6. Since a typical Category 5 cable contains four pairs and the performance requirements of 100BASE-TX do not exceed the capabilities of even the worst-performing pair, one typical cable can carry two 100BASE-TX links with a simple wiring adaptor on each end. Cabling is conventionally wired to one of ANSI/TIA-568's termination standards, T568A or T568B. 100BASE-TX uses pairs 2 and 3 (orange and green).
The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area network, the devices on the network (computers, printers etc.) are typically connected to a hub or switch, creating a star network. Alternatively, it is possible to connect two devices directly using a crossover cable. With today's equipment, crossover cables are generally not needed as most equipment supports auto-negotiation along with auto MDI-X to select and match speed, duplex and pairing.
With 100BASE-TX hardware, the raw bits, presented 4 bits wide clocked at 25 MHz at the MII, go through 4B5B binary encoding to generate a series of 0 and 1 symbols clocked at a 125 MHz symbol rate. The 4B5B encoding provides DC equalization and spectrum shaping. Just as in the 100BASE-FX case, the bits are then transferred to the physical medium attachment layer using NRZI encoding. However, 100BASE-TX introduces an additional, medium-dependent sublayer, which employs MLT-3 as a final encoding of the data stream before transmission, resulting in a maximum fundamental frequency of 31.25 MHz. The procedure is borrowed from the ANSI X3.263 FDDI specifications, with minor changes.
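As an illustrative sketch, the final MLT-3 stage can be modeled as a cycle through the levels 0, +1, 0, −1 that advances one step for every 1 bit and holds its level for every 0 bit; a run of 1 bits then needs four symbols per full cycle, which is why the fundamental frequency drops to 31.25 MHz at the 125 MHz symbol rate:

```python
def mlt3_encode(bits):
    """Map a bit sequence to MLT-3 line levels (0, +1, 0, -1 cycle)."""
    cycle = [0, 1, 0, -1]
    idx = 0
    levels = []
    for bit in bits:
        if bit == 1:
            idx = (idx + 1) % 4    # a 1 advances to the next level in the cycle
        levels.append(cycle[idx])  # a 0 holds the current level
    return levels

print(mlt3_encode([1, 1, 1, 1, 0, 1, 1, 0]))
# [1, 0, -1, 0, 0, 1, 0, 0]
```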
100BASE-T1
In 100BASE-T1 the data is transmitted over a single copper pair, 3 bits per symbol, each transmitted as code pair using PAM3. It supports full-duplex transmission. The twisted-pair cable is required to support 66 MHz, with a maximum length of 15 m. No specific connector is defined. The standard is intended for automotive applications or when Fast Ethernet is to be integrated into another application. It was developed as Open Alliance BroadR-Reach (OABR) before IEEE standardization.
100BASE-T2
In 100BASE-T2, standardized in IEEE 802.3y, the data is transmitted over two copper pairs, but these pairs are only required to be Category 3 rather than the Category 5 required by 100BASE-TX. Data is transmitted and received on both pairs simultaneously thus allowing full-duplex operation. Transmission uses 4 bits per symbol. The 4-bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear-feedback shift register. This is needed to flatten the bandwidth and emission spectrum of the signal, as well as to match transmission line properties. The mapping of the original bits to the symbol codes is not constant in time and has a fairly large period (appearing as a pseudo-random sequence). The final mapping from symbols to PAM-5 line modulation levels obeys the table on the right. 100BASE-T2 was not widely adopted but the technology developed for it is used in 1000BASE-T.
100BASE-T4
100BASE-T4 was an early implementation of Fast Ethernet. It required four pairs of voice-grade twisted pair, a lower-performing cable compared to the Category 5 cable used by 100BASE-TX. Maximum distance was limited to 100 meters. One pair was reserved for transmit and one for receive, and the remaining two switched direction. The fact that three pairs were used to transmit in each direction made 100BASE-T4 inherently half-duplex. Using three cable pairs allowed it to reach 100 Mbit/s while running at lower carrier frequencies, which allowed it to run on older cabling that many companies had recently installed for 10BASE-T networks.
A very unusual 8B6T code was used to convert 8 data bits into 6 base-3 digits (the signal shaping is possible as there are nearly three times as many 6-digit base-3 numbers as there are 8-digit base-2 numbers). The two resulting 3-digit base-3 symbols were sent in parallel over three pairs using 3-level pulse-amplitude modulation (PAM-3).
100BASE-T4 was not widely adopted but some of the technology developed for it is used in 1000BASE-T. Very few hubs were released with 100BASE-T4 support. Some examples include the 3com 3C250-T4 Superstack II HUB 100, IBM 8225 Fast Ethernet Stackable Hub and Intel LinkBuilder FMS 100 T4. The same applies to network interface controllers. Bridging 100BASE-T4 with 100BASE-TX required additional network equipment.
100BaseVG
Proposed and marketed by Hewlett-Packard, 100BaseVG was an alternative design using category 3 cabling and a token concept instead of CSMA/CD. It was slated for standardization as IEEE 802.12 but it quickly vanished when switched 100BASE-TX became popular. The IEEE standard was later withdrawn.
VG was similar to T4 in that it used more cable pairs combined with a lower carrier frequency to allow it to reach 100 Mbit/s on voice-grade cables. It differed in the way those cables were assigned. Whereas T4 would use the two extra pairs in different directions depending on the direction of data exchange, VG instead used two transmission modes. In one, control, two pairs are used for transmission and reception as in classic Ethernet, while the other two pairs are used for flow control. In the second mode, transmission, all four are used to transfer data in a single direction. The hubs implemented a token passing scheme to choose which of the attached nodes were allowed to communicate at any given time, based on signals sent to it from the nodes using control mode. When one node was selected to become active, it would switch to transfer mode, send or receive a packet, and return to control mode.
This concept was intended to solve two problems. The first was that it eliminated the need for collision detection and thereby reduced contention on busy networks. While any particular node may find itself throttled due to heavy traffic, the network as a whole would not end up losing efficiency due to collisions and the resulting rebroadcasts. Under heavy use, the total throughput was increased compared to the other standards. The other was that the hubs could examine the payload types and schedule the nodes based on their bandwidth requirements. For instance, a node sending a video signal may not require much bandwidth but will require it to be predictable in terms of when it is delivered. A VG hub could schedule access on that node to ensure it received the transmission timeslots it needed while opening up the network at all other times to the other nodes. This style of access was known as demand priority.
Fiber optics
Fiber variants use fiber-optic cable with the listed interface types. Interfaces may be fixed or modular, often as small form-factor pluggable (SFP).
Fast Ethernet SFP ports
Fast Ethernet speed is not available on all SFP ports, but supported by some devices. An SFP port for Gigabit Ethernet should not be assumed to be backwards compatible with Fast Ethernet.
Optical interoperability
To have interoperability there are some criteria that have to be met:
Line encoding
Wavelength
Duplex mode
Media count
Media type and dimension
100BASE-X Ethernet is not backward compatible with 10BASE-F and is not forward compatible with 1000BASE-X.
100BASE-FX
100BASE-FX is a version of Fast Ethernet over optical fiber. The 100BASE-FX physical medium dependent (PMD) sublayer is defined by FDDI's PMD, so 100BASE-FX is not compatible with 10BASE-FL, the 10 Mbit/s version over optical fiber.
100BASE-FX is still used for existing installation of multimode fiber where more speed is not required, like industrial automation plants.
100BASE-LFX
100BASE-LFX is a non-standard term to refer to Fast Ethernet transmission. It is very similar to 100BASE-FX but achieves longer distances up to 4–5 km over a pair of multi-mode fibers through the use of Fabry–Pérot laser transmitter running on 1310 nm wavelength. The signal attenuation per km at 1300 nm is about half the loss of 850 nm.
100BASE-SX
100BASE-SX is a version of Fast Ethernet over optical fiber standardized in TIA/EIA-785-1-2002. It is a lower-cost, shorter-distance alternative to 100BASE-FX. Because of the shorter wavelength used (850 nm) and the shorter distance supported, 100BASE-SX uses less expensive optical components (LEDs instead of lasers).
Because it uses the same wavelength as 10BASE-FL, the 10 Mbit/s version of Ethernet over optical fiber, 100BASE-SX can be backward-compatible with 10BASE-FL. Cost and compatibility make 100BASE-SX an attractive option for those upgrading from 10BASE-FL and those who do not require long distances.
100BASE-LX10
100BASE-LX10 is a version of Fast Ethernet over optical fiber standardized in 802.3ah-2004 clause 58. It has a 10 km reach over a pair of single-mode fibers.
100BASE-BX10
100BASE-BX10 is a version of Fast Ethernet over optical fiber standardized in 802.3ah-2004 clause 58. It uses an optical multiplexer to split TX and RX signals into different wavelengths on the same fiber. It has a 10 km reach over a single strand of single-mode fiber.
100BASE-EX
100BASE-EX is very similar to 100BASE-LX10 but achieves longer distances up to 40 km over a pair of single-mode fibers due to higher quality optics than a LX10, running on 1310 nm wavelength lasers. 100BASE-EX is not a formal standard but industry-accepted term. It is sometimes referred to as 100BASE-LH (long haul), and is easily confused with 100BASE-LX10 or 100BASE-ZX because the use of -LX(10), -LH, -EX, and -ZX is ambiguous between vendors.
100BASE-ZX
100BASE-ZX is a non-standard but multi-vendor term to refer to Fast Ethernet transmission using 1,550 nm wavelength to achieve distances of at least 70 km over single-mode fiber. Some vendors specify distances up to 160 km over single-mode fiber, sometimes called 100BASE-EZX. Ranges beyond 80 km are highly dependent upon the path loss of the fiber in use, specifically the attenuation figure in dB per km, the number and quality of connectors/patch panels and splices located between transceivers.
See also
List of interface bit rates
Notes
References
External links
Common Hardware Variations
Origins and History of Ethernet
IEEE802.3 standards free download
ProCurve Networking 100BASE-FX Technical Brief
Ethernet standards
Computer networking | Fast Ethernet | [
"Technology",
"Engineering"
] | 3,147 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
64,669 | https://en.wikipedia.org/wiki/De%20Morgan%27s%20laws | In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:
The negation of "A and B" is the same as "not A or not B".
The negation of "A or B" is the same as "not A and not B".
or
The complement of the union of two sets is the same as the intersection of their complements
The complement of the intersection of two sets is the same as the union of their complements
or
not (A or B) = (not A) and (not B)
not (A and B) = (not A) or (not B)
where "A or B" is an "inclusive or" meaning at least one of A or B rather than an "exclusive or" that means exactly one of A or B.
Another form of De Morgan's law is the following as seen below.
Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality.
Formal notation
The negation of conjunction rule may be written in sequent notation:
The negation of disjunction rule may be written as:
In rule form: negation of conjunction
and negation of disjunction
and expressed as truth-functional tautologies or theorems of propositional logic:
where and are propositions expressed in some formal system.
The generalized De Morgan's laws provide an equivalence for negating a conjunction or disjunction involving multiple terms. For a set of propositions P_1, P_2, ..., P_n, the generalized De Morgan's laws are as follows:
¬(P_1 ∧ P_2 ∧ ⋯ ∧ P_n) ⟺ (¬P_1 ∨ ¬P_2 ∨ ⋯ ∨ ¬P_n) and ¬(P_1 ∨ P_2 ∨ ⋯ ∨ P_n) ⟺ (¬P_1 ∧ ¬P_2 ∧ ⋯ ∧ ¬P_n)
These laws generalize De Morgan's original laws for negating conjunctions and disjunctions.
Substitution form
De Morgan's laws are normally shown in the compact form above, with the negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as:
This emphasizes the need to invert both the inputs and the output, as well as change the operator when doing a substitution.
Set theory
In set theory, it is often stated as "union and intersection interchange under complementation", which can be formally expressed as:
$\overline{A \cup B} = \overline{A} \cap \overline{B}$ and $\overline{A \cap B} = \overline{A} \cup \overline{B}$
where:
$\overline{A}$ is the negation of A, the overline being written above the terms to be negated,
∩ is the intersection operator (AND),
∪ is the union operator (OR).
Unions and intersections of any number of sets
The generalized form is
where is some, possibly countably or uncountably infinite, indexing set.
In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign".
Boolean algebra
In Boolean algebra, similarly, this law can be formally expressed as:
$\overline{A \land B} = \overline{A} \lor \overline{B}$ and $\overline{A \lor B} = \overline{A} \land \overline{B}$
where:
$\overline{A}$ is the negation of A, the overline being written above the terms to be negated,
∧ is the logical conjunction operator (AND),
∨ is the logical disjunction operator (OR).
which can be generalized to
Engineering
In electrical and computer engineering, De Morgan's laws are commonly written as:
$\overline{A \cdot B} = \overline{A} + \overline{B}$
and
$\overline{A + B} = \overline{A} \cdot \overline{B}$
where:
· is the logical AND,
+ is the logical OR,
the overbar is the logical NOT of what is underneath the overbar.
Text searching
De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words "cats" and "dogs". De Morgan's laws hold that these two searches will return the same set of documents:
Search A: NOT (cats OR dogs)
Search B: (NOT cats) AND (NOT dogs)
The corpus of documents containing "cats" or "dogs" can be represented by four documents:
Document 1: Contains only the word "cats".
Document 2: Contains only "dogs".
Document 3: Contains both "cats" and "dogs".
Document 4: Contains neither "cats" nor "dogs".
To evaluate Search A, clearly the search "(cats OR dogs)" will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search "(NOT cats)" will hit on documents that do not contain "cats", which is Documents 2 and 4. Similarly the search "(NOT dogs)" will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will both return Documents 1, 2, and 4:
Search C: NOT (cats AND dogs),
Search D: (NOT cats) OR (NOT dogs).
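A short sketch checking both pairs of searches on the four-document corpus above with Python sets; the document numbering follows the list above:

```python
corpus = {1: {"cats"}, 2: {"dogs"}, 3: {"cats", "dogs"}, 4: set()}
docs = set(corpus)

cats = {d for d, words in corpus.items() if "cats" in words}
dogs = {d for d, words in corpus.items() if "dogs" in words}

search_a = docs - (cats | dogs)           # NOT (cats OR dogs)
search_b = (docs - cats) & (docs - dogs)  # (NOT cats) AND (NOT dogs)
search_c = docs - (cats & dogs)           # NOT (cats AND dogs)
search_d = (docs - cats) | (docs - dogs)  # (NOT cats) OR (NOT dogs)

assert search_a == search_b == {4}
assert search_c == search_d == {1, 2, 4}
```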
History
The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out. Jean Buridan, in his , also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial. Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments.
Proof for Boolean algebra
De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula.
Negation of a disjunction
In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true", which is written as:
In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as:
If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true".
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim.
Negation of a conjunction
The application of De Morgan's theorem to conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as:
In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of "not A" and "not B" must be true). This may be written directly as,
Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false".
Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim.
Proof for set theory
Here we use to denote the complement of A, as above in . The proof that is completed in 2 steps by proving both and .
Part 1
Let . Then, .
Because , it must be the case that or .
If , then , so .
Similarly, if , then , so .
Thus, ;
that is, .
Part 2
To prove the reverse direction, let , and for contradiction assume .
Under that assumption, it must be the case that ,
so it follows that and , and thus and .
However, that means , in contradiction to the hypothesis that ,
therefore, the assumption must not be the case, meaning that .
Hence, ,
that is, .
Conclusion
If and , then ; this concludes the proof of De Morgan's law.
The other De Morgan's law, , is proven similarly.
Generalising De Morgan duality
In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory.
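A compact sketch of how De Morgan's laws drive conversion to negation normal form, pushing negations inward until they rest only on atoms; the nested-tuple expression format is an ad-hoc choice for illustration:

```python
def to_nnf(expr):
    """Push negations inward using De Morgan's laws and double-negation elimination.

    Expressions are atoms (strings) or tuples: ('not', e), ('and', a, b), ('or', a, b).
    """
    if isinstance(expr, str):
        return expr
    op = expr[0]
    if op in ("and", "or"):
        return (op, to_nnf(expr[1]), to_nnf(expr[2]))
    # op == "not"
    inner = expr[1]
    if isinstance(inner, str):
        return expr                                   # negation already on an atom
    if inner[0] == "not":
        return to_nnf(inner[1])                       # double negation: not not A -> A
    dual = "or" if inner[0] == "and" else "and"       # De Morgan: swap the connective
    return (dual, to_nnf(("not", inner[1])), to_nnf(("not", inner[2])))

print(to_nnf(("not", ("and", "p", ("or", "q", "r")))))
# ('or', ('not', 'p'), ('and', ('not', 'q'), ('not', 'r')))
```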
Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be the operator defined by
Extension to predicate and modal logic
This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:
To relate these quantifier dualities to the De Morgan laws, consider a domain of discourse D (with some small number of entities) to which properties are ascribed universally and existentially, such as
D = {a, b, c}.
Then express universal quantifier equivalently by conjunction of individual statements
and existential quantifier by disjunction of individual statements
But, using De Morgan's laws,
and
verifying the quantifier dualities in the model.
Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators:
In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics.
In intuitionistic logic
Three out of the four implications of de Morgan's laws hold in intuitionistic logic. Specifically, we have
and
The converse of the last implication does not hold in pure intuitionistic logic. That is, the failure of the joint proposition cannot necessarily be resolved to the failure of either of the two conjuncts. For example, from knowing it not to be the case that both Alice and Bob showed up to their date, it does not follow who did not show up. The latter principle is equivalent to the principle of the weak excluded middle ,
This weak form can be used as a foundation for an intermediate logic.
For a refined version of the failing law concerning existential statements, see the lesser limited principle of omniscience , which however is different from .
The validity of the other three De Morgan's laws remains true if negation is replaced by implication for some arbitrary constant predicate C, meaning that the above laws are still true in minimal logic.
Similarly to the above, the quantifier laws:
and
are tautologies even in minimal logic with negation replaced with implying a fixed , while the converse of the last law does not have to be true in general.
Further, one still has
but their inversion implies excluded middle, .
In computer engineering
De Morgan's laws are widely used in computer engineering and digital logic for the purpose of simplifying circuit designs.
In modern programming languages, due to the optimisation of compilers and interpreters, the performance differences between these options are negligible or completely absent.
See also
Conjunction/disjunction duality
Homogeneity (linguistics)
Isomorphism
List of Boolean algebra topics
List of set identities and relations
Positive logic
De Morgan algebra
References
External links
Duality in Logic and Language, Internet Encyclopedia of Philosophy.
Boolean algebra
Duality theories
Rules of inference
Articles containing proofs
Theorems in propositional logic | De Morgan's laws | [
"Mathematics"
] | 2,771 | [
"Boolean algebra",
"Mathematical structures",
"Proof theory",
"Mathematical logic",
"Rules of inference",
"Fields of abstract algebra",
"Theorems in propositional logic",
"Category theory",
"Duality theories",
"Geometry",
"Articles containing proofs",
"Theorems in the foundations of mathematic... |
64,685 | https://en.wikipedia.org/wiki/Post%20correspondence%20problem | The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946. Because it is simpler than the halting problem and the Entscheidungsproblem it is often used in proofs of undecidability.
Definition of the problem
Let A be an alphabet with at least two symbols. The input of the problem consists of two finite lists α_1, ..., α_N and β_1, ..., β_N of words over A. A solution to this problem is a sequence of indices i_1, ..., i_K with K ≥ 1 and 1 ≤ i_k ≤ N for all k, such that
α_{i_1} α_{i_2} ⋯ α_{i_K} = β_{i_1} β_{i_2} ⋯ β_{i_K}.
The decision problem then is to decide whether such a solution exists or not.
Alternative definition
This gives rise to an equivalent alternative definition often found in the literature, according to which any two homomorphisms g, h with a common domain and a common codomain form an instance of the Post correspondence problem, which now asks whether there exists a nonempty word w in the domain such that
g(w) = h(w).
Another definition describes this problem easily as a type of puzzle. We begin with a collection of dominos, each containing two strings, one on each side. An individual domino looks like
and a collection of dominos looks like
.
The task is to make a list of these dominos (repetition permitted) so that the string we get by reading off the symbols on the top is the same as the string of symbols on the bottom. This list is called a match. The Post correspondence problem is to determine whether a collection of dominos has a match.
For example, the following list is a match for this puzzle.
.
For some collections of dominos, finding a match may not be possible. For example, the collection
.
cannot contain a match because every top string is longer than the corresponding bottom string.
Example instances of the problem
Example 1
Consider the following two lists:
α_1 = a, α_2 = ab, α_3 = bba and β_1 = baa, β_2 = aa, β_3 = bb.
A solution to this problem would be the sequence (3, 2, 3, 1), because
α_3 α_2 α_3 α_1 = bba·ab·bba·a = bbaabbbaa = bb·aa·bb·baa = β_3 β_2 β_3 β_1.
Furthermore, since (3, 2, 3, 1) is a solution, so are all of its "repetitions", such as (3, 2, 3, 1, 3, 2, 3, 1), etc.; that is, when a solution exists, there are infinitely many solutions of this repetitive kind.
However, if the two lists had consisted of only α_2, α_3 and β_2, β_3, then there would have been no solution (the last letter of any such α string is not the same as the letter before it, whereas β only constructs pairs of the same letter).
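A quick sketch that verifies the claimed solution for Example 1 by concatenating the selected words from each list:

```python
alpha = {1: "a", 2: "ab", 3: "bba"}
beta = {1: "baa", 2: "aa", 3: "bb"}

def is_solution(indices):
    """Check whether the index sequence spells the same word in both lists."""
    top = "".join(alpha[i] for i in indices)
    bottom = "".join(beta[i] for i in indices)
    return top == bottom

assert is_solution([3, 2, 3, 1])          # bbaabbbaa on both rows
assert is_solution([3, 2, 3, 1] * 2)      # repetitions of a solution are solutions
assert not is_solution([1, 2, 3])
```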
A convenient way to view an instance of a Post correspondence problem is as a collection of blocks of the form
there being an unlimited supply of each type of block. Thus the above example is viewed as
i = 1
i = 2
i = 3
where the solver has an endless supply of each of these three block types. A solution corresponds to some way of laying blocks next to each other so that the string in the top cells corresponds to the string in the bottom cells. Then the solution to the above example corresponds to:
i1 = 3
i2 = 2
i3 = 3
i4 = 1
Example 2
Again using blocks to represent an instance of the problem, the following is an example that has infinitely many solutions in addition to the kind obtained by merely "repeating" a solution.
1
2
3
In this instance, every sequence of the form (1, 2, 2, . . ., 2, 3) is a solution (in addition to all their repetitions):
1
2
2
2
3
Proof sketch of undecidability
The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation of an arbitrary Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot be decidable either. The following discussion is based on Michael Sipser's textbook Introduction to the Theory of Computation.
In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing machine's computation. This means it will list a string describing the initial state, followed by a string describing the next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine consists of three parts:
The current contents of the tape.
The current state of the finite-state machine which operates the tape head.
The current position of the tape head on the tape.
Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down as part of our state. To describe the state of the finite control, we create new symbols, labelled q1 through qk, for each of the finite-state machine's k states. We insert the correct symbol into the string describing the tape's contents at the position of the tape head, thereby indicating both the tape head's position and the current state of the finite control. For the alphabet {0,1}, a typical state might look something like:
101101110q700110.
A simple computation history would then look something like this:
q0101#1q401#11q21#1q810.
We start out with this block, where x is the input string and q0 is the start state:
The top starts out "lagging" the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol a in the tape alphabet, as well as #, we have a "copy" block, which copies it unmodified from one state to the next:
We also have a block for each position transition the machine can make, showing how the tape head moves, how the finite state changes, and what happens to the surrounding symbols. For example, here the tape head is over a 0 in state 4, and then writes a 1 and moves right, changing to state 7:
Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match. To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step will cause a symbol near the tape head to vanish, one at a time, until none remain. If qf is an accepting state, we can represent this with the following transition blocks, where a is a tape alphabet symbol:
There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing machine computation.
The previous example
q0101#1q401#11q21#1q810.
is represented as a corresponding solution to the Post correspondence problem: a sequence of tiles whose concatenated top strings and concatenated bottom strings both spell out this computation history.
Variants
Many variants of PCP have been considered. One reason is that, when one tries to prove undecidability of some new problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an apparently weaker version.
The problem may be phrased in terms of monoid morphisms f, g from the free monoid B∗ to the free monoid A∗ where B is of size n. The problem is to determine whether there is a word w in B+ such that f(w) = g(w).
The condition that the alphabet have at least two symbols is required, since the problem is decidable if the alphabet has only one symbol.
A simple variant is to fix n, the number of tiles. This problem is decidable if n ≤ 2, but remains undecidable for n ≥ 5. It is unknown whether the problem is decidable for 3 ≤ n ≤ 4.
The circular Post correspondence problem asks whether indexes can be found such that and are conjugate words, i.e., they are equal modulo rotation. This variant is undecidable.
One of the most important variants of PCP is the bounded Post correspondence problem, which asks if we can find a match using no more than k tiles, including repeated tiles. A brute force search solves the problem in time O(2^k), but this may be difficult to improve upon, since the problem is NP-complete. Unlike some NP-complete problems like the boolean satisfiability problem, a small variation of the bounded problem was also shown to be complete for RNP, which means that it remains hard even if the inputs are chosen at random (it is hard on average over uniformly distributed inputs).
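A minimal brute-force search for the bounded variant could look like the sketch below (the function name and example instance are hypothetical); it simply tries every index sequence of length at most k, which is exponential in k and makes no attempt at anything smarter:

```python
from itertools import product

def bounded_pcp(pairs, k):
    """Search for a match using at most k tiles (repetition allowed).
    Returns a witness index sequence, or None if no such match exists."""
    for length in range(1, k + 1):
        for seq in product(range(len(pairs)), repeat=length):
            top = "".join(pairs[i][0] for i in seq)
            bottom = "".join(pairs[i][1] for i in seq)
            if top == bottom:
                return seq
    return None

# Hypothetical instance: a match using 4 tiles exists, but none using 2.
pairs = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
print(bounded_pcp(pairs, 4))  # (2, 1, 2, 0)
print(bounded_pcp(pairs, 2))  # None
```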
Another variant of PCP is called the marked Post Correspondence Problem, in which each must begin with a different symbol, and each must also begin with a different symbol. Halava, Hirvensalo, and de Wolf showed that this variation is decidable in exponential time. Moreover, they showed that if this requirement is slightly loosened so that only one of the first two characters need to differ (the so-called 2-marked Post Correspondence Problem), the problem becomes undecidable again.
The Post Embedding Problem is another variant where one looks for indexes such that is a (scattered) subword of . This variant is easily decidable since, when some solutions exist, in particular a length-one solution exists. More interesting is the Regular Post Embedding Problem, a further variant where one looks for solutions that belong to a given regular language (submitted, e.g., under the form of a regular expression on the set ). The Regular Post Embedding Problem is still decidable but, because of the added regular constraint, it has a very high complexity that dominates every multiply recursive function.
The Identity Correspondence Problem (ICP) asks whether a finite set of pairs of words (over a group alphabet) can generate an identity pair by a sequence of concatenations. The problem is undecidable and equivalent to the following Group Problem: is the semigroup generated by a finite set of pairs of words (over a group alphabet) a group?
References
External links
Eitan M. Gurari. An Introduction to the Theory of Computation, Chapter 4, Post's Correspondence Problem. A proof of the undecidability of PCP based on Chomsky type-0 grammars.
Dong, Jing. "The Analysis and Solution of a PCP Instance." 2012 National Conference on Information Technology and Computer Science. The paper describes a heuristic rule for solving some specific PCP instances.
Online PHP Based PCP Solver
PCP AT HOME
PCP - a nice problem
PCP solver in Java
Post Correspondence Problem
Theory of computation
Computability theory
Undecidable problems | Post correspondence problem | [
"Mathematics"
] | 2,292 | [
"Mathematical logic",
"Computational problems",
"Undecidable problems",
"Computability theory",
"Mathematical problems"
] |
16,408,009 | https://en.wikipedia.org/wiki/Cauchy%20momentum%20equation | The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
Main equation
In convective (or Lagrangian) form the Cauchy momentum equation is written as:
where
is the flow velocity vector field, which depends on time and space, (unit: )
is time, (unit: )
is the material derivative of , equal to , (unit: )
is the density at a given point of the continuum (for which the continuity equation holds), (unit: )
is the stress tensor, (unit: )
is a vector containing all of the accelerations caused by body forces (sometimes simply gravitational acceleration), (unit: )
is the divergence of stress tensor. (unit: )
Commonly used SI units are given in parentheses, although the equations are general in nature and other units can be entered into them, or the units can be removed altogether by nondimensionalization.
Note that we use column vectors (in the Cartesian coordinate system) above only for clarity; the equation is written using physical components (which are neither covariant ("row") nor contravariant ("column") components). However, if we choose a non-orthogonal curvilinear coordinate system, then we should calculate and write the equations in covariant ("row vector") or contravariant ("column vector") form.
After an appropriate change of variables, it can also be written in conservation form:
where is the momentum density at a given space-time point, is the flux associated to the momentum density, and contains all of the body forces per unit volume.
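For reference, these two forms are commonly written as follows. This is a sketch in standard notation (u for the flow velocity, ρ for the density, σ for the stress tensor, f for the body-force acceleration, and j, F, s for the conserved variables), introduced here for readability rather than reproduced from the article's original typesetting:

```latex
% Convective (Lagrangian) form
\frac{D\mathbf{u}}{Dt} = \frac{1}{\rho}\,\nabla\cdot\boldsymbol{\sigma} + \mathbf{f}

% Conservation form
\frac{\partial \mathbf{j}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{s},
\qquad
\mathbf{j} = \rho\,\mathbf{u}, \quad
\mathbf{F} = \rho\,\mathbf{u}\otimes\mathbf{u} - \boldsymbol{\sigma}, \quad
\mathbf{s} = \rho\,\mathbf{f}
```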
Differential derivation
Let us start with the generalized momentum conservation principle which can be written as follows: "The change in system momentum is proportional to the resulting force acting on this system". It is expressed by the formula:
where is momentum at time , and is force averaged over . After dividing by and passing to the limit we get (derivative):
Let us analyse each side of the equation above.
Right side
We split the forces into body forces and surface forces
Surface forces act on walls of the cubic fluid element. For each wall, the X component of these forces was marked in the figure with a cubic element (in the form of a product of stress and surface area e.g. with units ).
Adding forces (their X components) acting on each of the cube walls, we get:
After ordering and performing similar reasoning for components (they have not been shown in the figure, but these would be vectors parallel to the Y and Z axes, respectively) we get:
We can then write it in the symbolic operational form:
There are mass forces acting on the inside of the control volume. We can write them using the acceleration field (e.g. gravitational acceleration):
Left side
Let us calculate momentum of the cube:
Because we assume that the tested mass (the cube) is constant in time, we have
Left and Right side comparison
We have
then
then
Divide both sides by , and because we get:
which finishes the derivation.
Integral derivation
Applying Newton's second law (th component) to a control volume in the continuum being modeled gives:
Then, based on the Reynolds transport theorem and using material derivative notation, one can write
where represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero; from this the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Conservation form
The Cauchy momentum equation can also be put in the following form:
simply by defining:
where is the momentum density at the point considered in the continuum (for which the continuity equation holds), is the flux associated to the momentum density, and contains all of the body forces per unit volume. is the dyad of the velocity.
Here and have the same number of dimensions as the flow speed and the body acceleration, while , being a tensor, has .
In the Eulerian forms it is apparent that the assumption of no deviatoric stress brings Cauchy equations to the Euler equations.
Convective acceleration
A significant feature of the Navier–Stokes equations is the presence of convective acceleration: the effect of time-independent acceleration of a flow with respect to space. While individual continuum particles indeed experience time dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Regardless of what kind of continuum is being dealt with, convective acceleration is a nonlinear effect. Convective acceleration is present in most flows (exceptions include one-dimensional incompressible flow), but its dynamic effect is disregarded in creeping flow (also called Stokes flow). Convective acceleration is represented by the nonlinear quantity , which may be interpreted either as or as , with the tensor derivative of the velocity vector . Both interpretations give the same result.
Advection operator vs tensor derivative
The convective acceleration can be thought of as the advection operator acting on the velocity field . This contrasts with the expression in terms of tensor derivative , which is the component-wise derivative of the velocity vector defined by , so that
Lamb form
The vector calculus identity of the cross product of a curl holds:
where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor .
Lamb, in his famous classical book Hydrodynamics (1895), used this identity to change the convective term of the flow velocity into rotational form, i.e. without a tensor derivative:
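A sketch of this rotational form in standard notation (the symbols u, ω and l are introduced here for readability; this reconstructs the standard identity rather than Lamb's own typesetting):

```latex
(\mathbf{u}\cdot\nabla)\mathbf{u}
  = \tfrac{1}{2}\nabla\bigl(\lVert\mathbf{u}\rVert^{2}\bigr)
    + (\nabla\times\mathbf{u})\times\mathbf{u}
  = \tfrac{1}{2}\nabla\bigl(\lVert\mathbf{u}\rVert^{2}\bigr) + \mathbf{l},
\qquad
\mathbf{l} = \boldsymbol{\omega}\times\mathbf{u},
\quad
\boldsymbol{\omega} = \nabla\times\mathbf{u}
```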
where the vector is called the Lamb vector. The Cauchy momentum equation becomes:
Using the identity:
the Cauchy equation becomes:
In fact, in case of an external conservative field, by defining its potential :
In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:
And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears due to a vector calculus identity of the triple scalar product:
If the stress tensor is isotropic, then only the pressure enters: (where is the identity tensor), and the Euler momentum equation in the steady incompressible case becomes:
In the steady incompressible case the mass equation is simply:
that is, the mass conservation for a steady incompressible flow states that the density along a streamline is constant. This leads to a considerable simplification of the Euler momentum equation:
The convenience of defining the total head for an inviscid liquid flow is now apparent:
in fact, the above equation can be simply written as:
That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant.
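In symbols, one common way of writing this statement is the following sketch (per-unit-mass form; conventions for the "head" differ between authors by a factor of g):

```latex
\frac{u^{2}}{2} + \frac{p}{\rho} + g z = \text{constant along a streamline}
\qquad\Longleftrightarrow\qquad
H = z + \frac{p}{\rho g} + \frac{u^{2}}{2g} = \text{constant}
```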
Irrotational flows
The Lamb form is also useful in irrotational flow, where the curl of the velocity (called vorticity) is equal to zero. In that case, the convection term in reduces to
Stresses
The effect of stress in the continuum flow is represented by the and terms; these are gradients of surface forces, analogous to stresses in a solid. Here is the pressure gradient and arises from the isotropic part of the Cauchy stress tensor. This part is given by the normal stresses that occur in almost all situations. The anisotropic part of the stress tensor gives rise to , which usually describes viscous forces; for incompressible flow, this is only a shear effect. Thus, is the deviatoric stress tensor, and the stress tensor is equal to:
where is the identity matrix in the space considered and the shear tensor.
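Written out, this decomposition and its divergence take the standard form below (a sketch, with p the pressure, I the identity tensor and τ the deviatoric stress):

```latex
\boldsymbol{\sigma} = -p\,\mathbf{I} + \boldsymbol{\tau},
\qquad
\nabla\cdot\boldsymbol{\sigma} = -\nabla p + \nabla\cdot\boldsymbol{\tau}
```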
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The divergence of the stress tensor can be written as
The effect of the pressure gradient on the flow is to accelerate the flow in the direction from high pressure to low pressure.
As written in the Cauchy momentum equation, the stress terms and are yet unknown, so this equation alone cannot be used to solve problems. Besides the equations of motion—Newton's second law—a force model is needed relating the stresses to the flow motion. For this reason, assumptions based on natural observations are often applied to specify the stresses in terms of the other flow variables, such as velocity and density.
External forces
The vector field represents body forces per unit mass. Typically, these consist of only gravity acceleration, but may include others, such as electromagnetic forces. In non-inertial coordinate frames, other "inertial accelerations" associated with rotating coordinates may arise.
Often, these forces may be represented as the gradient of some scalar quantity , with in which case they are called conservative forces. Gravity in the direction, for example, is the gradient of . Because pressure from such gravitation arises only as a gradient, we may include it in the pressure term as a body force . The pressure and force terms on the right-hand side of the Navier–Stokes equation become
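In the constant-density case this combination is often written as follows (a sketch; p, ρ, g and z denote pressure, density, gravitational acceleration and height, with gravity acting in the −z direction):

```latex
\mathbf{g} = -\nabla(gz)
\quad\Longrightarrow\quad
-\nabla p + \rho\,\mathbf{g} = -\nabla p - \rho\,\nabla(gz) = -\nabla\bigl(p + \rho g z\bigr)
\qquad (\rho \ \text{constant})
```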
It is also possible to include external influences into the stress term rather than the body force term. This may even include antisymmetric stresses (inputs of angular momentum), in contrast to the usually symmetrical internal contributions to the stress tensor.
Nondimensionalisation
In order to make the equations dimensionless, a characteristic length and a characteristic velocity need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:
Substitution of these inverted relations in the Euler momentum equations yields:
and by dividing by the first coefficient:
Now defining the Froude number:
the Euler number:
and the coefficient of skin-friction, usually referred to as the 'drag coefficient' in the field of aerodynamics:
by passing respectively to the conservative variables, i.e. the momentum density and the force density:
the equations are finally expressed (now omitting the indexes):
Cauchy equations in the Froude limit (corresponding to negligible external field) are named free Cauchy equations:
and can eventually be written as conservation equations. The limit of high Froude numbers (low external field) is thus notable for such equations and is studied with perturbation theory.
Finally in convective form the equations are:
3D explicit convective forms
Cartesian 3D coordinates
For asymmetric stress tensors, equations in general take the following forms:
Cylindrical 3D coordinates
Below, we write the main equation in pressure-tau form assuming that the stress tensor is symmetrical ():
See also
Euler equations (fluid dynamics)
Navier–Stokes equations
Burnett equations
Chapman–Enskog expansion
Notes
References
Continuum mechanics
Eponymous equations of physics
Momentum
Partial differential equations | Cauchy momentum equation | [
"Physics",
"Mathematics"
] | 2,330 | [
"Equations of physics",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Eponymous equations of physics",
"Classical mechanics",
"Momentum",
"Moment (physics)"
] |
16,413,876 | https://en.wikipedia.org/wiki/Biotechnology%20consulting | Biotechnology consulting (or biotech consulting) refers to the practice of assisting organizations involved in research and commercialization of biotechnology in improving their methods and efficiency of production, and approaches to R&D. This assistance is usually provided in the form of specialized technological advice and sharing of expertise. Both start-up and established organizations would hire biotechnology consultants mainly to receive an independent and professional advice from key opinion leaders, individuals with extensive knowledge and experience in a particular area of biotechnology or biological sciences, and, often, to outsource their projects for implementation by well qualified individuals. Large management consulting firms would often be able to provide technological advice as well, depending on the qualifications of their consulting team. With the growth of pharmaceutical companies, biotechnology consulting has recently developed into an industry of its own and separated from the management consulting industry that traditionally also provides technological advice on R&D projects to various industries. This has also been fueled by the impact various conflicts of interests can have on commercialization when biotechnology organizations contract services from academic institutions or government scientists
This is exemplified by the successful emergence of many consulting companies dedicated exclusively to servicing the biotech industry. Occasionally, university professors and Phd students engage in biotechnology consulting, either commercially or free of charge.
A special type of consulting is patent strategy and management consulting, or simply patent consulting, which specifically emphasizes the scope of patent rights versus R&D in industry. It also assists in the successful commercialization of patentable matter. The primary aim of a patent consulting company is to assist small, medium and large corporations in carrying their research projects toward successful patent registration, with minimized danger of infringement and other risks that patent registrations may be subjected to prior to commercialization. One example of a patent consulting firm is The Patent World.
References
Consulting by type
Biotechnology organizations | Biotechnology consulting | [
"Engineering",
"Biology"
] | 353 | [
"Biotechnology organizations"
] |
16,420,547 | https://en.wikipedia.org/wiki/Characteristic%20velocity | Characteristic velocity or , or C-star is a measure of the combustion performance of a rocket engine independent of nozzle performance, and is used to compare different propellants and propulsion systems. c* should not be confused with c, which is the effective exhaust velocity related to the specific impulse by: . Specific impulse and effective exhaust velocity are dependent on the nozzle design unlike the characteristic velocity, explaining why C-star is an important value when comparing different propulsion system efficiencies. c* can be useful when comparing actual combustion performance to theoretical performance in order to determine how completely chemical energy release occurred. This is known as c*-efficiency.
Formula
is the characteristic velocity (m/s, ft/s)
is the chamber pressure (Pa, psi)
is the area of the throat (m2, in2)
is the mass flow rate of the engine (kg/s, slug/s)
is the specific impulse (s)
is the gravitational acceleration at sea-level (m/s2)
is the thrust coefficient
is the effective exhaust velocity (m/s)
is the specific heat ratio for the exhaust gases
is the gas constant per unit weight (J/kg-K)
is the chamber temperature (K)
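With the symbols above (written here with the usual subscripts, introduced only for readability), the relations are commonly given as follows, following the convention in Sutton's Rocket Propulsion Elements:

```latex
c^{*} = \frac{p_{c}\,A_{t}}{\dot{m}} = \frac{I_{sp}\,g_{0}}{C_{F}},
\qquad
c = C_{F}\,c^{*} = I_{sp}\,g_{0},
\qquad
c^{*} = \frac{\sqrt{\gamma R T_{c}}}
             {\gamma\left(\frac{2}{\gamma+1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}}
```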
References
Rocket Propulsion Elements, 7th Edition by George P. Sutton, Oscar Biblarz
Rocket Propulsion Elements, 9th Edition by George P. Sutton, Oscar Biblarz
Rocketry
Rocket propulsion
Aerospace engineering | Characteristic velocity | [
"Astronomy",
"Engineering"
] | 289 | [
"Rocketry",
"Rocketry stubs",
"Astronomy stubs",
"Aerospace engineering"
] |
16,423,973 | https://en.wikipedia.org/wiki/Direct-coupled%20transistor%20logic | Direct-coupled transistor logic (DCTL) is similar to resistor–transistor logic (RTL), but the input transistor bases are connected directly to the collector outputs without any base resistors. Consequently, DCTL gates have fewer components, are more economical, and are simpler to fabricate onto integrated circuits than RTL gates. Unfortunately, DCTL has much smaller signal levels, has more susceptibility to ground noise, and requires matched transistor characteristics. The transistors are also heavily overdriven; this is a good feature in that it reduces the saturation voltage of the output transistors, but it also slows the circuit down due to a high stored charge in the base. Gate fan-out is limited due to "current hogging": if the transistor base–emitter voltages () are not well matched, then the base–emitter junction of one transistor may conduct most of the input drive current at such a low base–emitter voltage that other input transistors fail to turn on.
DCTL is close to the simplest possible digital logic family, using close to fewest possible components per logical element.
A similar logic family, direct-coupled transistor–transistor logic, is faster than ECL.
John T. Wallmark and Sanford M. Marcus described direct-coupled transistor logic using JFETs. It was termed direct-coupled unipolar transistor logic (DCUTL). They published a variety of complex logic functions implemented as integrated circuits using JFETs, including complementary memory circuits.
DCTL in today's life
DCTL was a starting point for later transistor logic families that are more convenient to use. Introduced some 65 years ago, it has since been followed by many updated and different variations. Among the successors still in use today are transistor–transistor logic (TTL) and resistor–transistor logic (RTL). TTL functions similarly to DCTL, except that DCTL has lower signal levels and is more sensitive to ground noise, while TTL depends more on polarity. DCTL itself is not used as much as it was in the past. RTL also depends heavily on polarity, being essentially a bipolar transistor switch. These families are still very important, shaped the history of audio electronics, and were the fundamental stepping stones to higher-quality designs.
Logical functions
A DCTL circuit provides three logical functions: AND gating, OR gating, and signal inversion (NOT gating). These functions are the building blocks from which the rest of the circuit is constructed. An AND gate requires two or more inputs, all of which must be true (1) for the output to be true; if any input is 0, the output is 0. An OR gate also takes two or more inputs, but, unlike the AND gate, only one of the inputs needs to be true for the output to be true. The NOT gate takes a single input and produces the opposite value, so the output is true exactly when the input is not. With these three gates, many other logical functions can be built, making the possibilities effectively endless.
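These three functions are simple enough to state directly; the short sketch below (purely illustrative, not a model of the transistor-level circuit) prints their truth tables:

```python
def AND(a, b):  # output is 1 only when every input is 1
    return a & b

def OR(a, b):   # output is 1 when at least one input is 1
    return a | b

def NOT(a):     # output is the opposite of the single input
    return 1 - a

for a in (0, 1):
    print(f"NOT {a} = {NOT(a)}")
    for b in (0, 1):
        print(f"{a} AND {b} = {AND(a, b)}    {a} OR {b} = {OR(a, b)}")
```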
Other functions
A DCLT is known for doing three functions:
Inverters
Series gating
Parallel gating
Each of these functions makes the output voltage supply low, so it does not have a negative impact on the other circuits in the machine.
Inverters are also known as NOT gates, which can be connected by collector resistors. For the next DCTL stage to turn on, there must be enough saturation voltage VCE(SAT) passed on from the previous circuit. If the VCE(SAT) is too low, the next gate will not open up. If only a certain number of circuits are to be open, then the VCE(SAT) needs to be smaller than the next transistor's base–emitter turn-on voltage VBE(ON); it depends on the desired function.
Series gating is a little different. If even one of the transistors is off, the output voltage at D ends up being the supply voltage (VCC). How the next stage responds to the voltage at D depends entirely on the VBE(ON) of the next transistor. If all the transistors are on, then D is pulled close to ground, which can cause complications; the next transistor has to be completely off for the device to work without problems.
Parallel gating uses three transistors with individual inputs, rather than the single input of the other functions. If the VBE(ON) is exceeded at an input, current flows through the load resistor, causing the output voltage to be low.
Disadvantages of using DCTL
Current hogging
Noise problem
One of the main disadvantages of using DCTL is current hogging. Current hogging occurs when two or more input circuits operate in parallel: one of the circuits tends to do all the work and take up most of the base–emitter voltage (VBE), causing it to overheat and possibly break down. Because no two transistors have exactly the same base–emitter voltage, this tends to happen. For this reason, inventors and engineers look for transistors with a small output voltage, which is something DCTL is known for, but the phenomenon can still happen.
The noise problem is related to voltage noise. It is a serious problem because the circuits are highly sensitive to noise, since they operate at high speed and low voltage. With several transistors, a noise pulse of the right polarity can also cause unwanted transistors to turn on. Noise picked up by the connecting leads can likewise cause problems that stop the device from working.
Advantages of using DCTL
Simple circuit
Does not require much power to work
Does not take up too much space
Help limit voltage output
With these advantages, many impressive devices have been created. Because it does not take up much space and does not use much power, DCTL is very convenient to use. It can also limit the voltage output that other transistors may create, and therefore leads to fewer issues in the machines that use it.
References
Digital electronics
Logic families | Direct-coupled transistor logic | [
"Engineering"
] | 1,308 | [
"Electronic engineering",
"Digital electronics"
] |
9,646,648 | https://en.wikipedia.org/wiki/Chlorine%20bombings%20in%20Iraq | Chlorine bombings in Iraq began as early as October 2004, when insurgents in Al Anbar province started using chlorine gas in conjunction with conventional vehicle-borne explosive devices.
The inaugural chlorine attacks in Iraq were described as poorly executed, probably because much of the chemical agent was rendered nontoxic by the heat of the accompanying explosives. Subsequent, more refined, attacks resulted in hundreds of injuries, but have proven not to be a viable means of inflicting massive loss of life. Their primary impact has therefore been to cause widespread panic, with large numbers of civilians suffering non life-threatening, but nonetheless highly traumatic, injuries.
Chlorine was used as a poison gas in World War I, but was delivered by artillery shell, unlike the modern stationary or car bombs. Still, its function as a weapon in both instances is similar. Low level exposure results in burning sensations to the eyes, nose and throat, usually accompanied by dizziness, nausea and vomiting. Higher levels of exposure can cause fatal lung damage; but because the gas is heavier than air and will not dissipate until well after an explosion, it is generally considered ineffective as an improvised chemical weapon.
Western media linking chlorine attacks to 'al Qaeda'
In February 2007, a U.S. military spokesman said that ‘al Qaeda propaganda material’ had been found at a factory for chlorine chemical weapons in Karma, east of Fallujah, which led press agency Reuters to the conclusion that the “chlorine bomb factory was al Qaeda's”.
Attacks
October 21, 2006: A car bomb carrying 12 120 mm mortar shells and two 100-pound chlorine tanks detonated, wounding three Iraqi policemen and a civilian in Ramadi.
January 28, 2007: A suicide bomber drove a dump truck carrying explosives and a chlorine tank into an emergency response unit compound in Ramadi. 16 people were killed by the explosives, but none by the chlorine.
February 19, 2007: A suicide bombing in Ramadi involving chlorine killed two Iraqi security forces and wounded 16 other people.
February 20, 2007: A bomb blew up a tanker carrying chlorine north of Baghdad, killing nine and emitting fumes that made 148 others ill, including 42 women and 52 children.
February 21, 2007: A pickup truck carrying chlorine gas cylinders exploded in Baghdad, killing at least five people and hospitalizing over 50.
March 16, 2007: Three separate suicide attacks on this day used chlorine. The first attack occurred at a checkpoint northeast of Ramadi when a truck bomb wounded one US service member and one Iraqi civilian. A second truck bomb detonated in Falluja, killing two policemen and leaving a hundred Iraqis showing signs of chlorine exposure. Forty minutes later, yet another chlorine-laden truck bomb exploded at the entrance to a housing estate south of Falluja, this time injuring 250 and according to some reports killing six.
March 28, 2007: Suicide bombers detonated a pair of truck bombs, one containing chlorine, as part of a sustained attack aimed at the Fallujah Government Center. The initial bombings along with a subsequent gun battle left 14 American forces and 57 Iraqi forces wounded.
April 6, 2007: A chlorine-laden suicide truck bomb detonated at a police checkpoint in Ramadi, leaving 27 dead. Thirty people were hospitalized with wounds from the explosion, while many more suffered breathing difficulties attributed to the chlorine gas.
April 25, 2007: A chlorine truck bomb detonated at a military checkpoint on the western outskirts of Baghdad, killing one Iraqi and wounding two others.
April 30, 2007: A tanker laden with chlorine exploded near a restaurant west of Ramadi, killing six people and wounding 10.
May 15, 2007: A chlorine bomb exploded in an open-air market in the village of Abu Sayda in Diyala province, killing 32 people and injuring 50.
May 20, 2007: A suicide truck bomber exploded his vehicle Sunday near an Iraqi police checkpoint outside Ramadi, Zangora district west of Ramadi, killing two police officers and wounding 11 others.
June 3, 2007: A car bomb exploded outside a U.S. military base in Diyala, unleashing a noxious cloud of chlorine gas that sickened at least 62 soldiers but caused no serious injuries.
See also
2007 Iraq cholera outbreak
Iraqi insurgency (2003–2011)
References
External links
Chlorine gas attacks hint at new enemy strategy, Associated Press
Concern over Iraqi chemical bombs, BBC News
U.S.: Iraq bomb factory raid nets deadly chlorine supply, CNN
War crimes in the Iraq War
Chlorine
Baghdad in the Iraq War
Fallujah in the Iraq War
Ramadi in the Iraq War
Chemical weapons attacks
Al-Qaeda activities in Iraq
Improvised explosive device bombings in Baghdad
Terrorist incidents in Baghdad in the 2000s
Terrorist incidents in Iraq in the 2000s
Mass murder in the 2000s
Chemical terrorism | Chlorine bombings in Iraq | [
"Chemistry"
] | 1,003 | [
"Chemical terrorism",
"Chemical weapons attacks",
"Chemical weapons"
] |
9,650,153 | https://en.wikipedia.org/wiki/One%20Watt%20Initiative | The One Watt Initiative is an energy-saving initiative by the International Energy Agency (IEA) to reduce standby power-use by any appliance to no more than one watt in 2010, and 0.5 watts in 2013, which has given rise to regulations in many countries and regions.
Standby power
Standby power, informally called vampire or phantom power, refers to the electricity consumed by many appliances when they are switched off or in standby mode. The typical standby power per appliance is low (typically from less than 1 to 25 W), but, when multiplied by the billions of appliances in houses and in commercial buildings, standby losses represent a significant fraction of total world electricity use. According to Alan Meier, a staff scientist at the Lawrence Berkeley National Laboratory, standby power before the One Watt Initiative proposals were implemented as regulations accounted for as much as 10% of household power consumption. A study in France found that standby power accounted for 7% of total residential consumption, and other studies put the proportion of consumption due to standby power at 13%.
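The scale of the problem is essentially an exercise in arithmetic. A rough, purely illustrative calculation follows; the device count, per-device wattage and total household consumption below are assumptions for the example, not figures from the cited studies:

```python
# Hypothetical household: 10 devices left in standby, averaging 4 W each.
devices = 10
watts_each = 4.0
hours_per_year = 24 * 365          # 8760 h

standby_kwh = devices * watts_each * hours_per_year / 1000
print(f"Standby use: {standby_kwh:.0f} kWh per year")                 # ~350 kWh

# Compared with an assumed total household consumption of 3500 kWh/year:
total_kwh = 3500
print(f"Share of total consumption: {standby_kwh / total_kwh:.0%}")   # ~10%
```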
The IEA estimated in 2007 that standby produced 1% of the world's carbon dioxide (CO2) emissions. To put the figure into context, total air travel contributes less than 3% of global CO2 emissions.
Standby power can be reduced by technological means, reducing power used without affecting functionality, and by changing users' operating procedures.
Policy
The One Watt Initiative was launched by the IEA in 1999 to ensure through international cooperation that by 2010 all new appliances sold in the world use only one watt in standby mode. This would reduce CO2 emissions by 50 million tons in the OECD countries alone by 2010; the equivalent to removing 18 million cars from the roads.
In 2001, US President George W. Bush issued Executive Order 13221, which states that every government agency, "when it purchases commercially available, off-the-shelf products that use external standby power devices, or that contain an internal standby power function, shall purchase products that use no more than one watt in their standby power consuming mode."
By 2005, South Korea and Australia had introduced the one watt benchmark in all new electrical devices, and according to the IEA other countries, notably Japan and China, had undertaken "strong measures" to reduce standby power use.
In July 2007, California's 2005 appliance standards came into effect, limiting external power supply standby power to 0.5 watts.
On 6 January 2010, the European Commission's EC Regulation 1275/2008 came into force regulating requirements for standby and "off mode" electric power consumption of electrical and electronic household and office equipment. The regulations mandate that from 6 January 2010 "off mode" and standby power shall not exceed 1 W, "standby-plus" power (providing information or status display in addition to possible reactivation function) shall not exceed 2 W (these figures are halved on 6 January 2013). Equipment must, where appropriate, provide off mode and/or standby mode when the equipment is connected to the mains power source.
See also
Carbon footprint
Energy conservation
Energy-Efficient Ethernet
Energy policy
Low-carbon economy
Standby power
Voltage optimisation
References
External links
Things that go blip in the night, Standby power and how to limit it, International Energy Agency/Organisation for Economic Co-operation and Development, Paris, 2001
International Energy Agency
Standby Power Home Page, Lawrence Berkeley National Laboratory California
Electric power
Energy conservation
International Energy Agency | One Watt Initiative | [
"Physics",
"Engineering"
] | 718 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
9,655,788 | https://en.wikipedia.org/wiki/Transuranic%20waste | Transuranic waste (TRU) is stated by U.S. regulations, and independent of state or origin, to be waste which has been contaminated with alpha emitting transuranic radionuclides possessing half-lives greater than 20 years and in concentrations greater than 100 nCi/g (3.7 MBq/kg).
Elements having atomic numbers greater than that of uranium are called transuranic. Elements within TRU are typically man-made and are known to contain americium-241 and several isotopes of plutonium. Because of the elements' longer half-lives, TRU is disposed of more cautiously than low level waste and intermediate level waste. In the U.S. it is a byproduct of weapons production, nuclear research and power production, and consists of protective gear, tools, residue, debris and other items contaminated with small amounts of radioactive elements (mainly plutonium).
Under U.S. law, TRU is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation field measured on the waste container's surface. CH TRU has a surface dose rate not greater than 2 mSv per hour (200 mrem/h), whereas RH TRU has rates of 2 mSv/h or higher. CH TRU has neither the high radioactivity of high level waste, nor its high heat generation. In contrast, RH TRU can be highly radioactive, with surface dose rates up to 10 Sv/h (1000 rem/h).
The United States currently permanently disposes of TRU generated from defense nuclear activities at the Waste Isolation Pilot Plant, a deep geologic repository.
Other countries do not include this category, favoring variations of High, Medium/Intermediate, and Low Level waste.
References
External links
Final Environmental Assessment for Actinide Chemistry and Repository Science Laboratory - Citing a DOE TRU Definition
US Department of Energy's page on the Waste Isolation Pilot Plant (WIPP)
Radioactive waste | Transuranic waste | [
"Physics",
"Chemistry",
"Technology"
] | 419 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
7,440,465 | https://en.wikipedia.org/wiki/Agitated%20Nutsche%20Filter | Agitated Nutsche filter (ANF) is a filtration technique used in applications such as dye, paint, and pharmaceutical production and waste water treatment. Safety requirements and environmental concerns due to solvent evaporation led to the development of this type of filter wherein filtration under vacuum or pressure can be carried out in closed vessels and solids can be discharged straightaway into a dryer.
Filter features
A typical unit consists of a dished vessel with a perforated plate. The entire vessel can be kept at the desired temperature by using a limpet jacket, jacketed bottom dish and stirrer (blade and shaft) through which heat transfer media can flow. The vessel can be made completely leak-proof for vacuum or pressure service. It is used for multiple processes: solid–liquid separation, agitating/washing, resuspending/mixing, extraction, crystallizing and drying can all be performed within a closed system.
Nutsche filter disc
The filter disc is the bottom porous plate of the nutsche filter. The filter disc retains the solids and lets the liquid/ gas passing through. It is the main filtration component of the nutsche filter.
Types of the filter disc:
Perforated support plate with filter mesh (metallic or non-metallic)
Welded multi-layer mesh
Sintered wire mesh
Agitator
A multipurpose agitator is the unique feature of this system. The agitator performs a number of operations through movement in axes both parallel and perpendicular to the shaft.
Important points
Slurry contents can be kept liquidized using heat and agitation until most of the liquid is filtered through.
When filtration is complete, the cake develops cracks causing upsets in the vacuum operation. This hinders removal of mother liquor. The agitator can be used to maintain a uniform cake.
The cake can be washed after filtration by re-slurrying the cake.
After washing, the mother liquor can be refiltered. The cake can then be discharged by lowering the agitator and rotating it in such a manner that it brings all the cake towards the discharge port.
Agitator filters are suitable for filtration of liquids with a high solid content. The liquid is separated mechanically using a permeable layer/ filter medium under vacuum or pressure.
A special height-adjustable agitator design improves the degree of filtration effectiveness and enables the mechanical discharge of the solid. An even filter cake forms on the horizontal base of the filter, which ensures the best possible recovery of the solid.
Power pack
A hydraulic power pack or hydraulic power unit is a unit attached to the ANF's agitator system, discharge valve and bottom removal (for cleaning). It consists of an oil tank on which a pump is provided for circulating high-pressure oil through a control valve system and to hydraulic cylinders. These cylinders are provided for vertical movement of the agitator, for discharging the product and, sometimes, for detaching the bottom to clean the filter before changing the product. The operating pressure of the oil varies from 2 kg/cm² to 80 kg/cm² (200 kPa to 8 MPa).
Materials of construction
Agitated Nutsche filters can be fabricated in materials like Hastelloy C-276, C-22, stainless steel, mild steel, and mild steel with rubber lining as per service requirements. Recently, agitated Nutsche filters have been fabricated out of polypropylene fibre-reinforced plastic (PPFRP). Also, Nutsche filters made from Borosilicate glass 3.3 find use in applications where visibility of process are important along with chemical inertness.
Advantages
Vacuum or pressure filtration possible.
Inert gas atmosphere can be maintained.
Minimal contamination of the cake.
Very high solvent recovery.
Considerable saving in manpower.
Solvents are in closed systems, so no toxic vapors are let off in the atmosphere.
Personal safety is maintained and heat transfer surfaces can be provided to maintain filtration temperature.
Commercial uses
The Agitated Nutsche Filter Dryer (ANFD) is specifically engineered to meet the stringent demands of the pharmaceutical and fine chemical industries for efficient solids washing, separation, and drying under challenging conditions. This versatile filter-dryer system allows for both filtration and drying processes to be completed within the same vessel, significantly improving process efficiency.
ANFD systems are particularly suited for liquids with a high solid content, where the liquid phase is mechanically separated through a permeable filter medium under vacuum or pressure. The height-adjustable agitator optimizes filtration, enabling uniform filter cake formation on the horizontal base of the filter, ensuring superior solid recovery. The system also supports the mechanical discharge of solids, making it highly efficient for production.
References
Filters | Agitated Nutsche Filter | [
"Chemistry",
"Engineering"
] | 959 | [
"Chemical equipment",
"Filtration",
"Filters"
] |
7,441,771 | https://en.wikipedia.org/wiki/Galloway%20Forest%20Park | Galloway Forest Park is a forest park operated by Forestry and Land Scotland, principally covering woodland in the historic counties of Kirkcudbrightshire and Wigtownshire in the administrative area of Dumfries and Galloway. It is claimed to be the largest forest in the UK. The park was granted Dark Sky Park status ("Galloway Forest Dark Sky Park") in November 2009, being the first area in the UK to be so designated.
The park, established in 1947, covers and receives over 800,000 visitors per year. The three visitor centres at Glen Trool, Kirroughtree, and Clatteringshaws receive around 150,000 each year. Much of the Galloway Hills lie within the boundaries of the park and there is good but rough hillwalking and also some rock climbing and ice-climbing within the park. Within or near the boundaries of the park are several well developed mountain bike tracks, forming part of the 7stanes project.
As well as catering for recreation, the park includes economically valuable woodland, producing 500,000 tons of timber per year.
Galloway Forest Park and the people who visit it and work in it were the subject of a six-part BBC One documentary series aired in early 2018 entitled "The Forest".
Dark sky
In November 2009 the International Dark-Sky Association conferred Dark Sky Park status on the Galloway Forest Park, the first area in the UK to be so designated.
The Scottish Dark Sky Observatory, near Dalmellington, is located within the northern edge of the Galloway Forest Dark Sky Park. The observatory was partly funded by the Scottish Government and opened in 2012. It suffered a devastating fire during the early hours of 23 June 2021, resulting in complete destruction of the observatory. The fire is currently being treated as suspicious.
Alexander Murray
The park is also home to the ruins of the birthplace of Alexander Murray, the son of a shepherd and farm labourer. Murray was self-taught on multiple languages, and eventually went on to become professor of Oriental languages at University of Edinburgh. A short distance away, high on a hillside, is Murray's Monument, which was erected in his memory in 1835.
Typhoon crash
On 18 March 1944, 22-year-old Canadian pilot Kenneth Mitchell crashed his Hawker Typhoon aircraft in the forest. The impact killed him instantly. Mitchell was in training in preparation for his squadron's role fighting the German V-1 flying bombs in the Second World War. On 18 March 2009, 65 years to the day since the crash, a commemorative plaque was installed on a mortared cairn at the crash site, where pieces of the aircraft still remain. Mitchell was buried in Ayr Cemetery, Ayr.
See also
Loch Macaterick
References
External links
Recreation at Galloway Forest Park at the Forestry and Land Scotland website
'Activity Tourism' from the Countryside Recreation Network
Information on Hill Walking in the Galloway Hills
Rock and Ice climbing in the Galloway Hills
7 Stanes
7 Stanes - Galloway Forest Park
Forests and woodlands of Scotland
Country parks in Scotland
Dark-sky preserves in the United Kingdom
Parks in Dumfries and Galloway
Forest parks of Scotland | Galloway Forest Park | [
"Astronomy"
] | 628 | [
"Dark-sky preserves in the United Kingdom",
"Dark-sky preserves"
] |
11,125,044 | https://en.wikipedia.org/wiki/Direct%20methods%20%28crystallography%29 | In crystallography, direct methods are a family of methods for estimating the phases of the Fourier transform of the scattering density from the corresponding magnitudes. The methods generally exploit constraints or statistical correlations between the phases of different Fourier components that result from the fact that the scattering density must be a positive real number.
In two dimensions, it is relatively easy to solve the phase problem directly, but not so in three dimensions. The key step was taken by Hauptman and Karle, who developed a practical method to employ the Sayre equation for which they were awarded the 1985 Nobel prize in Chemistry. The Nobel Prize citation was "for their outstanding achievements in the development of direct methods for the determination of crystal structures."
At present, direct methods are the preferred method for phasing crystals of small molecules having up to 1000 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins.
Several software packages implement direct methods.
See also
Direct methods (electron microscopy)
Phase problem
X-ray crystallography
References
Crystallography | Direct methods (crystallography) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 219 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
11,127,278 | https://en.wikipedia.org/wiki/Pair-instability%20supernova | A pair-instability supernova is a type of supernova predicted to occur when pair production, the production of free electrons and positrons in the collision between atomic nuclei and energetic gamma rays, temporarily reduces the internal radiation pressure supporting a supermassive star's core against gravitational collapse. This pressure drop leads to a partial collapse, which in turn causes greatly accelerated burning in a runaway thermonuclear explosion, resulting in the star being blown completely apart without leaving a stellar remnant behind.
Pair-instability supernovae can only happen in stars with a mass range from around 130 to 250 solar masses and low to moderate metallicity (low abundance of elements other than hydrogen and helium – a situation common in Population III stars).
Physics
Photon emission
Photons given off by a body in thermal equilibrium have a black-body spectrum with an energy density proportional to the fourth power of the temperature, as described by the Stefan–Boltzmann law. Wien's law states that the wavelength of maximum emission from a black body is inversely proportional to its temperature. Equivalently, the frequency, and the energy, of the peak emission is directly proportional to the temperature.
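In symbols, the two laws referred to are commonly written as follows (standard textbook forms; σ is the Stefan–Boltzmann constant and c the speed of light):

```latex
u = a\,T^{4}, \quad a = \frac{4\sigma}{c}
\qquad\text{(Stefan--Boltzmann law, radiation energy density)}

\lambda_{\max} = \frac{b}{T}, \quad b \approx 2.898\times10^{-3}\ \mathrm{m\,K}
\qquad\text{(Wien's displacement law)}
```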
Photon pressure in stars
In very massive, hot stars with interior temperatures above a certain threshold, photons produced in the stellar core are primarily in the form of very high-energy gamma rays. The pressure from these gamma rays fleeing outward from the core helps to hold up the upper layers of the star against the inward pull of gravity. If the level of gamma rays (the energy density) is reduced, then the outer layers of the star will begin to collapse inwards.
Gamma rays with sufficiently high energy can interact with nuclei, electrons, or one another. One of those interactions is to form pairs of particles, such as electron-positron pairs, and these pairs can also meet and annihilate each other to create gamma rays again, all in accordance with Albert Einstein's mass-energy equivalence equation
At the very high density of a large stellar core, pair production and annihilation occur rapidly. Gamma rays, electrons, and positrons are overall held in thermal equilibrium, ensuring the star's core remains stable. By random fluctuation, the sudden heating and compression of the core can generate gamma rays energetic enough to be converted into an avalanche of electron-positron pairs. This reduces the pressure. When the collapse stops, the positrons find electrons and the pressure from gamma rays is driven up, again. The population of positrons provides a brief reservoir of new gamma rays as the expanding supernova's core pressure drops.
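The energy threshold for pair creation follows directly from mass–energy equivalence: a gamma ray can only be converted into an electron–positron pair if it carries at least the rest-mass energy of the pair (standard values, stated here for reference):

```latex
E = mc^{2}
\quad\Longrightarrow\quad
E_{\gamma} \ge 2\,m_{e}c^{2} \approx 1.022\ \mathrm{MeV}
```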
Pair-instability
As temperatures and gamma ray energies increase, more and more gamma ray energy is absorbed in creating electron–positron pairs. This reduction in gamma ray energy density reduces the radiation pressure that resists gravitational collapse and supports the outer layers of the star. The star contracts, compressing and heating the core, thereby increasing the rate of energy production. This increases the energy of the gamma rays that are produced, making them more likely to interact, and so increases the rate at which energy is absorbed in further pair production. As a result, the stellar core loses its support in a runaway process, in which gamma rays are created at an increasing rate; but more and more of the gamma rays are absorbed to produce electron–positron pairs, and the annihilation of the electron–positron pairs is insufficient to halt further contraction of the core. Finally, the thermal runaway ignites detonation fusion of oxygen and heavier elements. When the temperature reaches the level when electrons and positrons carry the same energy fraction as gamma-rays, pair production cannot increase any further, it is balanced by annihilation. Contraction no longer accelerates, but the core now produces much more energy than prior to collapse, and this results in a supernova: the outer layers of the star are blown away by sudden large increase of power production in the core. Calculations suggest that so much of the outer layers are lost that the very hot core itself is no longer under sufficient pressure to keep it intact, and it is completely disrupted too.
Stellar susceptibility
For a star to undergo pair-instability supernova, the increased creation of positron/electron pairs by gamma ray collisions must reduce outward pressure enough for inward gravitational pressure to overwhelm it. High rotational speed and/or metallicity can prevent this. Stars with these characteristics still contract as their outward pressure drops, but unlike their slower or less metal-rich cousins, these stars continue to exert enough outward pressure to prevent gravitational collapse.
Stars formed by collision mergers having a metallicity Z between 0.02 and 0.001 may end their lives as pair-instability supernovae if their mass is in the appropriate range.
Very large high-metallicity stars are probably unstable due to the Eddington limit, and would tend to shed mass during the formation process.
Stellar behavior
Several sources describe the stellar behavior for large stars in pair-instability conditions.
Below 100 solar masses
Gamma rays produced by stars of fewer than 100 or so solar masses are not energetic enough to produce electron-positron pairs. Some of these stars will undergo supernovae of a different type at the end of their lives, but the causative mechanisms do not involve pair-instability.
100 to 130 solar masses
These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core-overpressure required for supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that repulses the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted.
130 to 250 solar masses
For very high-mass stars, with mass at least 130 and up to perhaps roughly 250 solar masses, a true pair-instability supernova can occur. In these stars, the first time that conditions support pair production instability, the situation runs out of control. The collapse proceeds to efficiently compress the star's core; the overpressure is sufficient to allow runaway nuclear fusion to burn it in several seconds, creating a thermonuclear explosion. With more thermal energy released than the star's gravitational binding energy, it is completely disrupted; no black hole or other remnant is left behind. This is predicted to contribute to a "mass gap" in the mass distribution of stellar black holes. (This "upper mass gap" is to be distinguished from a suspected "lower mass gap" in the range of a few solar masses.)
In addition to the immediate energy release, a large fraction of the star's core is transformed to nickel-56, a radioactive isotope which decays with a half-life of 6.1 days into cobalt-56. Cobalt-56 has a half-life of 77 days and then further decays to the stable isotope iron-56 (see Supernova nucleosynthesis). For the hypernova SN 2006gy, studies indicate that perhaps 40 solar masses of the original star were released as Ni-56, almost the entire mass of the star's core regions. Collision between the exploding star core and gas it ejected earlier, and radioactive decay, release most of the visible light.
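The two-step decay chain just described can be evaluated explicitly. The sketch below is illustrative only: the half-lives are the values quoted in this section, the initial amount is normalised to 1, and the function name is hypothetical:

```python
import math

T_NI = 6.1   # half-life of Ni-56 in days (as quoted above)
T_CO = 77.0  # half-life of Co-56 in days (as quoted above)

def decay_chain(n_ni0, t):
    """Bateman solution for Ni-56 -> Co-56 -> Fe-56 (stable).
    Returns (Ni, Co, Fe) as fractions of the initial Ni-56 nuclei."""
    l1 = math.log(2) / T_NI   # decay constant of Ni-56
    l2 = math.log(2) / T_CO   # decay constant of Co-56
    ni = n_ni0 * math.exp(-l1 * t)
    co = n_ni0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    fe = n_ni0 - ni - co
    return ni, co, fe

# After a few months most of the nickel has passed through cobalt into iron,
# while the remaining Co-56 still powers the declining light curve.
for t in (0, 30, 100, 300):
    ni, co, fe = decay_chain(1.0, t)
    print(f"day {t:3d}: Ni={ni:.3f}  Co={co:.3f}  Fe={fe:.3f}")
```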
250 solar masses or more
A different reaction mechanism, photodisintegration, follows the initial pair-instability collapse in stars of at least 250 solar masses. This endothermic (energy-absorbing) reaction absorbs the excess energy from the earlier stages before the runaway fusion can cause a hypernova explosion; the star then collapses completely into a black hole.
Appearance
Luminosity
Pair-instability supernovae are popularly thought to be highly luminous. This is only the case for the most massive progenitors since the luminosity depends strongly on the ejected mass of radioactive ⁵⁶Ni. They can have peak luminosities of over 10³⁷ W, brighter than type Ia supernovae, but at lower masses peak luminosities are less than 10³⁵ W, comparable to or less than typical type II supernovae.
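For comparison, these powers can be expressed in solar luminosities (taking L☉ ≈ 3.83 × 10²⁶ W, a standard value not given in the article); the short sketch below does the conversion for the two figures quoted above.

```python
# Convert the quoted peak luminosities to solar luminosities.
L_SUN_W = 3.828e26          # solar luminosity in watts

for label, power_w in (("most massive progenitors", 1e37),
                       ("lower-mass progenitors", 1e35)):
    print(f"{label}: {power_w:.0e} W ~ {power_w / L_SUN_W:.1e} L_sun")
```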
Spectrum
The spectra of pair-instability supernovae depend on the nature of the progenitor star. Thus they can appear as type II or type Ib/c supernova spectra. Progenitors with a significant remaining hydrogen envelope will produce a type II supernova, those with no hydrogen but significant helium will produce a type Ib, and those with no hydrogen and virtually no helium will produce a type Ic.
Light curves
In contrast to the spectra, the light curves are quite different from the common types of supernova. The light curves are highly extended, with peak luminosity occurring months after onset. This is due to the extreme amounts of ⁵⁶Ni expelled, and the optically dense ejecta, as the star is entirely disrupted.
Remnant
Pair-instability supernovae completely destroy the progenitor star and do not leave behind a neutron star or black hole. The entire mass of the star is ejected, so a nebular remnant is produced and many solar masses of heavy elements are ejected into interstellar space.
Pair-instability supernovae candidates
Some supernovae candidates for classification as pair-instability supernovae include:
SN 2006gy
SN 2007bi
SN 2213-1745
SN 1000+0216
SN 2010mb
OGLE14-073
SN 2016aps
SN 2016iet
SN 2018ibb
See also
Pair production
Pulsational pair-instability supernova
Thermal runaway
Type Ia supernova, "thermonuclear supernova"
Intermediate-mass black hole
References
External links
List of possible pair-instability supernovae at The Open Supernova Catalog.
Supernovae
Hypernovae
| Pair-instability supernova | [
"Chemistry",
"Astronomy"
] | 2,077 | [
"Supernovae",
"Astronomical events",
"Hypernovae",
"Explosions"
] |
11,127,518 | https://en.wikipedia.org/wiki/Chow%27s%20lemma | Chow's lemma, named after Wei-Liang Chow, is one of the foundational results in algebraic geometry. It roughly says that a proper morphism is fairly close to being a projective morphism. More precisely, a version of it states the following:
If X is a scheme that is proper over a noetherian base S, then there exists a projective S-scheme X′ and a surjective S-morphism f: X′ → X that induces an isomorphism f⁻¹(U) ≅ U for some dense open U ⊆ X.
Proof
The proof here is a standard one.
Reduction to the case of irreducible
We can first reduce to the case where is irreducible. To start, is noetherian since it is of finite type over a noetherian base. Therefore it has finitely many irreducible components , and we claim that for each there is an irreducible proper -scheme so that has set-theoretic image and is an isomorphism on the open dense subset of . To see this, define to be the scheme-theoretic image of the open immersion
Since is set-theoretically noetherian for each , the map is quasi-compact and we may compute this scheme-theoretic image affine-locally on , immediately proving the two claims. If we can produce for each a projective -scheme as in the statement of the theorem, then we can take to be the disjoint union and to be the composition : this map is projective, and an isomorphism over a dense open set of , while is a projective -scheme since it is a finite union of projective -schemes. Since each is proper over , we've completed the reduction to the case irreducible.
can be covered by finitely many quasi-projective -schemes
Next, we will show that can be covered by a finite number of open subsets so that each is quasi-projective over . To do this, we may by quasi-compactness first cover by finitely many affine opens , and then cover the preimage of each in by finitely many affine opens each with a closed immersion in to since is of finite type and therefore quasi-compact. Composing this map with the open immersions and , we see that each is a closed subscheme of an open subscheme of . As is noetherian, every closed subscheme of an open subscheme is also an open subscheme of a closed subscheme, and therefore each is quasi-projective over .
Construction of and
Now suppose is a finite open cover of by quasi-projective -schemes, with an open immersion in to a projective -scheme. Set , which is nonempty as is irreducible. The restrictions of the to define a morphism
so that , where is the canonical injection and is the projection. Letting denote the canonical open immersion, we define , which we claim is an immersion. To see this, note that this morphism can be factored as the graph morphism (which is a closed immersion as is separated) followed by the open immersion ; as is noetherian, we can apply the same logic as before to see that we can swap the order of the open and closed immersions.
Now let be the scheme-theoretic image of , and factor as
where is an open immersion and is a closed immersion. Let and be the canonical projections.
Set
We will show that and satisfy the conclusion of the theorem.
Verification of the claimed properties of and
To show is surjective, we first note that it is proper and therefore closed. As its image contains the dense open set , we see that must be surjective. It is also straightforward to see that induces an isomorphism on : we may just combine the facts that and is an isomorphism on to its image, as factors as the composition of a closed immersion followed by an open immersion . It remains to show that is projective over .
We will do this by showing that is an immersion. We define the following four families of open subschemes:
As the cover , the cover , and we wish to show that the also cover . We will do this by showing that for all . It suffices to show that is equal to as a map of topological spaces. Replacing by its reduction, which has the same underlying topological space, we have that the two morphisms are both extensions of the underlying map of topological space , so by the reduced-to-separated lemma they must be equal as is topologically dense in . Therefore for all and the claim is proven.
The upshot is that the cover , and we can check that is an immersion by checking that is an immersion for all . For this, consider the morphism
Since is separated, the graph morphism is a closed immersion and the graph is a closed subscheme of ; if we show that factors through this graph (where we consider via our observation that is an isomorphism over from earlier), then the map from must also factor through this graph by construction of the scheme-theoretic image. Since the restriction of to is an isomorphism onto , the restriction of to will be an immersion into , and our claim will be proven. Let be the canonical injection ; we have to show that there is a morphism so that . By the definition of the fiber product, it suffices to prove that , or by identifying and , that . But and , so the desired conclusion follows from the definition of and is an immersion. Since is proper, any -morphism out of is closed, and thus is a closed immersion, so is projective.
Additional statements
In the statement of Chow's lemma, if X is reduced, irreducible, or integral, we can assume that the same holds for X′. If both X and X′ are irreducible, then f is a birational morphism.
References
Bibliography
Theorems in algebraic geometry
Zhou, Weiliang | Chow's lemma | [
"Mathematics"
] | 1,201 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
11,127,840 | https://en.wikipedia.org/wiki/Clonostachys%20rosea%20f.%20rosea | Clonostachys rosea f. rosea, also known as Gliocladium roseum, is a species of fungus in the family Bionectriaceae. It colonizes living plants as an endophyte, digests material in soil as a saprophyte and is also known as a parasite of other fungi and of nematodes. It produces a wide range of volatile organic compounds which are toxic to organisms including other fungi, bacteria, and insects, and is of interest as a biological pest control agent.
Biological control
Clonostachys rosea protects plants against Botrytis cinerea ("grey mold") by suppressing spore production. Its hyphae have been found to coil around, penetrate, and grow inside the hyphae and conidia of B. cinerea.
Nematodes are infected by C. rosea when the fungus' conidia attach to their cuticle and germinate, going on to produce germ tubes which penetrate the host's body and kill it.
Biofuels
In 2008 an isolate of Clonostachys rosea (NRRL 50072) was identified as producing a series of volatile compounds that are similar to some existing fuels, including diesel. However, the taxonomy of this isolate was later revised to Ascocoryne sarcoides.
See also
Entomopathogenic fungus
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Fungi described in 1999
Bionectriaceae
Anaerobic digestion
Forma taxa | Clonostachys rosea f. rosea | [
"Chemistry",
"Engineering"
] | 321 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
11,128,198 | https://en.wikipedia.org/wiki/Nearly%20neutral%20theory%20of%20molecular%20evolution | The nearly neutral theory of molecular evolution is a modification of the neutral theory of molecular evolution that accounts for the fact that not all mutations are either so deleterious such that they can be ignored, or else neutral. Slightly deleterious mutations are reliably purged only when their selection coefficient are greater than one divided by the effective population size. In larger populations, a higher proportion of mutations exceed this threshold for which genetic drift cannot overpower selection, leading to fewer fixation events and so slower molecular evolution.
The nearly neutral theory was proposed by Tomoko Ohta in 1973. The population-size-dependent threshold for purging mutations has been called the "drift barrier" by Michael Lynch, and used to explain differences in genomic architecture among species.
Origins
According to the neutral theory of molecular evolution, the rate at which molecular changes accumulate between species should be equal to the rate of neutral mutations and hence relatively constant across species. However, this is a per-generation rate. Since larger organisms have longer generation times, the neutral theory predicts that their rate of molecular evolution should be slower. However, molecular evolutionists found that rates of protein evolution were fairly independent of generation time.
Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that if most amino acid substitutions are slightly deleterious, this would increase the effectively neutral mutation rate in small populations, which could offset the effect of long generation times. However, because noncoding DNA substitutions tend to be more nearly neutral regardless of population size, their rate of evolution is correctly predicted to depend on population size (and hence generation time), unlike the rate of non-synonymous changes.
In this case, the faster rate of neutral evolution in proteins expected in small populations (due to a more lenient threshold for purging deleterious mutations) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations). In 1973, Ohta published a short letter in Nature suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral.
Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because the mathematically simpler neutral theory was used for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift have once again become an important focus of research.
Theory
The rate of substitution, k, is

k = Nνu,

where ν is the mutation rate per generation, g is the generation time, and N is the effective population size. The last term, u, is the probability that a new mutation will become fixed. Early models assumed that is constant between species, and that increases with . Kimura's equation for the probability of fixation in a haploid population gives:
u = (1 − exp(−2s)) / (1 − exp(−2Ns)),

where s is the selection coefficient of a mutation. When s = 0 (completely neutral), u = 1/N, and when 2Ns ≪ −1 (extremely deleterious), u decreases almost exponentially with N. Mutations with |s| ≤ 1/N are called nearly neutral mutations. These mutations can fix in small-N populations through genetic drift. In large-N populations, these mutations are purged by selection. If nearly neutral mutations are common, then the proportion for which |s| ≤ 1/N is dependent on N.
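A quick numerical illustration of this threshold behaviour: the sketch below evaluates the haploid fixation probability u = (1 − exp(−2s))/(1 − exp(−2Ns)) given above for a slightly deleterious mutation across a range of effective population sizes, showing how fixation collapses once |Ns| exceeds roughly 1. The particular values of s and N are arbitrary choices for illustration.

```python
import math

def fixation_probability(s, n):
    """Kimura's fixation probability for a new mutation with selection
    coefficient s in a haploid population of effective size n."""
    if abs(s) < 1e-12:          # neutral limit: u -> 1/N
        return 1.0 / n
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * n * s))

s = -1e-4                       # slightly deleterious mutation (arbitrary example value)
for n in (10**3, 10**4, 10**5, 10**6):
    u = fixation_probability(s, n)
    neutral = 1.0 / n
    print(f"N = {n:>8}: N*s = {n*s:8.1f}  u = {u:.3e}  (u / neutral = {u/neutral:.3g})")
```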
The effect of nearly neutral mutations can depend on fluctuations in . Early work used a “shift model” in which can vary between generations but the mean fitness of the population is reset to zero after fixation. This basically assumes the distribution of is constant (in this sense, the argument in the previous paragraphs can be regarded as based on the “shift model”). This assumption can lead to indefinite improvement or deterioration of protein function. Alternatively, the later “fixed model” fixes the distribution of mutations’ effect on protein function, but allows the mean fitness of population to evolve. This allows the distribution of to change with the mean fitness of population.
The “fixed model” provides a slightly different explanation for the rate of protein evolution. In large populations, advantageous mutations are quickly picked up by selection, increasing the mean fitness of the population. In response, the mutation rate of nearly neutral mutations is reduced because these mutations are restricted to the tail of the distribution of selection coefficients.
The “fixed model” expands the nearly neutral theory. Tachida classified evolution under the “fixed model” based on the product of and the variance in the distribution of : a large product corresponds to adaptive evolution, an intermediate product corresponds to nearly neutral evolution, and a small product corresponds to almost neutral evolution. According to this classification, slightly advantageous mutations can contribute to nearly neutral evolution.
The "drift barrier" theory
Michael Lynch has proposed that variation in the ability to purge slightly deleterious mutations (i.e. variation in ) can explain variation in genomic architecture among species, e.g. the size of the genome, or the mutation rate. Specifically, larger populations will have lower mutation rates, more streamlined genomic architectures, and generally more finely tuned adaptations. However, if robustness to the consequences of each possible error in processes such as transcription and translation substantially reduces the cost of making such errors, larger populations might evolve lower rates of global proofreading, and hence have higher rates of error. This may explain why Escherichia coli has higher rates of transcription error than Saccharomyces cerevisiae. This is supported by the fact that transcriptional error rates in E. coli depend on protein abundance (which is responsible for modulating the locus-specific strength of selection), but do so only for high-error-rate C to U deamination errors in S. cerevisiae.
See also
History of molecular evolution
References
External links
The Nearly Neutral Theory of Molecular Evolution - Perspectives on Molecular Evolution
Molecular evolution
Population genetics
Neutral theory | Nearly neutral theory of molecular evolution | [
"Chemistry",
"Biology"
] | 1,306 | [
"Evolutionary processes",
"Molecular evolution",
"Neutral theory",
"Molecular biology",
"Non-Darwinian evolution",
"Biology theories"
] |
11,130,631 | https://en.wikipedia.org/wiki/Avidyne%20Entegra | Avidyne Entegra is an integrated aircraft instrumentation system ("glass cockpit"), produced by Avidyne Corporation, consisting of a primary flight display (PFD), and multi-function display (MFD). Cirrus became the first customer of the Entegra system and began offering it on the SR20 and SR22 aircraft in 2003 as the first integrated flight deck for light general aviation (GA). The original Entegra system was designed to use third-party components such as a GPS from Garmin and an autopilot system from S-TEC Corporation.
One of the advantages of these glass flight deck systems is upgradeability. Avidyne has demonstrated this with a continuous stream of hardware and software upgrades, including:
2004: Added CMax Electronic Charts and first to certify XM datalink for light GA.
2005: Added Primary Engine Instrumentation on PFD.
2006: Introduced Release 6, which added Flight Director, V-Speed & Heading on ADI, additional datalink weather products on the MFD, and support for the USB memory-stick data loader.
2007: Introduced Release 7, which added support for WAAS/LPV Approach guidance among other things.
2008: Introduced Release 8, which expanded weather product for Canadian, Mexico and Caribbean (METARS, TAFs, Color Lightning).
2009: Release 9, a hardware and software upgrade that was certified in April 2009.
Avidyne has also introduced the DFC90 digital, attitude-based autopilot for Entegra installations, which replaces the S-TEC 55X rate-based autopilot and adds features such as a "straight & level" button, envelope protection, and IAS climb. To install the DFC90 autopilot (a slide-in replacement for the 55X), the PFD has to be upgraded to the WAAS standard.
With the introduction of the IFD navigator product range (IFD440/540/550), Entegra 8.x was no longer dependent on Garmin navigators. The IFD440 COM/GPS/WAAS navcoms are direct slide-in replacements for the GNS430W navigators. Original Entegra systems with the non-WAAS GNS430 navigators need a PFD hardware and software upgrade before they can use 430W or IFD440 navigators, which are capable of GPS/WAAS 3D approaches such as LNAV/VNAV or LPV.
Navdata and Approach Charts on the MFD can be updated via the USB port on the MFD (which is not suitable for charging).
System Redundancy
Entegra Release 9 system was designed with a fully redundant dual-databus architecture that eliminates traditional "Reversionary Modes."
A typical Entegra Release 9 installation features two large-format IFD5000 Integrated Flight Displays (IFD), which are fully interchangeable for use as PFD or MFD. Since each IFD5000 is fully capable of performing the functions of the other, no unfamiliar or limited reversionary modes are required. In the event of a display failure, the remaining IFD5000 continues to operate as either display format with no loss of functionality.
Some competing glass flight deck systems have limited redundancy, lose critical functionality such as datalink weather, traffic, or even autopilot, and their failure modes force the pilot to learn composite display symbology and "reversionary modes."
GA Glass history
Avidyne was first to certify big glass for light GA with the 2003 launch of Entegra in Cirrus aircraft. This is considered a "first generation" big-glass system that integrates the six 3-inch instruments (6-pack) into a more usable package, along with an exceptionally reliable Air Data and Heading Reference System (ADAHRS) that replaces the "spinning mass" attitude and directional gyros. Entegra Release 8 still relies on a 'federated' radio stack (dual G430s) for GPS/NAV/COM capability, as well as audio and transponder.
Entegra R9 was meant to replace the original Entegra system in Cirrus aircraft in 2007, but Cirrus went with the new G1000/Perspective system instead. For a short time Cirrus aircraft could be ordered with either Avidyne or Garmin avionics; today the Garmin G1000, which became the general aviation market leader in glass cockpits, is the only option. Avidyne still supports R8 (Entegra) and R9 systems and gave existing Entegra customers an upgrade path with the introduction of the DFC90 digital autopilot and the IFD4/5xx series of GPS navigators.
Use
Avidyne Entegra systems are found in aircraft from such companies as:
Cirrus Aircraft
Columbia/Lancair Aircraft
Piper Aircraft
Spectrum Aeronautical
Extra Aircraft
Competition
The Avidyne Entegra competes with the Garmin G1000 and Chelton FlightLogic EFIS glass cockpits. However, there are significant differences with regard to the features, degree of integration, intuitive aspects of the design, and overall product utility. Note that the Chelton system is not typically found in airplanes that include the less expensive G1000 or Avidyne systems. Other competitors are Aspen and Dynon.
External links
Avidyne Corporation
Avidyne Entegra
Release9.com
Plastic Pilot
References
Avionics
Aircraft instruments
Glass cockpit | Avidyne Entegra | [
"Technology",
"Engineering"
] | 1,135 | [
"Glass cockpit",
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
11,131,291 | https://en.wikipedia.org/wiki/Extra%20element%20theorem | The Extra Element Theorem (EET) is an analytic technique developed by R. D. Middlebrook for simplifying the process of deriving driving point and transfer functions for linear electronic circuits. Much like Thévenin's theorem, the extra element theorem breaks down one complicated problem into several simpler ones.
Driving point and transfer functions can generally be found using Kirchhoff's circuit laws. However, several complicated equations may result that offer little insight into the circuit's behavior. Using the extra element theorem, a circuit element (such as a resistor) can be removed from a circuit, and the desired driving point or transfer function is found. By removing the element that most complicates the circuit (such as an element that creates feedback), the desired function can be easier to obtain. Next, two correction factors must be found and combined with the previously derived function to find the exact expression.
The general form of the extra element theorem is called the N-extra element theorem and allows multiple circuit elements to be removed at once.
General formulation
The (single) extra element theorem expresses any transfer function as a product of the transfer function with that element removed and a correction factor. The correction factor term consists of the impedance of the extra element and two driving point impedances seen by the extra element: The double null injection driving point impedance and the single injection driving point impedance. Because an extra element can be removed in general by either short-circuiting or open-circuiting the element, there are two equivalent forms of the EET:
H(s) = H∞(s) (1 + Zn(s)/Z(s)) / (1 + Zd(s)/Z(s)),

or,

H(s) = H0(s) (1 + Z(s)/Zn(s)) / (1 + Z(s)/Zd(s)),

where the Laplace-domain transfer functions and impedances in the above expressions are defined as follows: H(s) is the transfer function with the extra element present. H∞(s) is the transfer function with the extra element open-circuited. H0(s) is the transfer function with the extra element short-circuited. Z(s) is the impedance of the extra element. Zd(s) is the single-injection driving point impedance "seen" by the extra element. Zn(s) is the double-null-injection driving point impedance "seen" by the extra element.
The extra element theorem incidentally proves that any electric circuit transfer function can be expressed as no more than a bilinear function of any particular circuit element.
Driving point impedances
Single Injection Driving Point Impedance
Zd(s) is found by making the input to the system's transfer function zero (short-circuiting a voltage source or open-circuiting a current source) and determining the impedance across the terminals to which the extra element will be connected, with the extra element absent. This impedance is the same as the Thévenin equivalent impedance.
Double Null Injection Driving Point Impedance
Zn(s) is found by replacing the extra element with a second test signal source (either a current source or a voltage source, as appropriate). Then, Zn(s) is defined as the ratio of the voltage across the terminals of this second test source to the current leaving its positive terminal when the output of the system's transfer function is nulled for any value of the primary input to the system's transfer function.
In practice, Zn(s) can be found by working backward from the facts that the output of the transfer function is made zero and that the primary input to the transfer function is unknown. Conventional circuit analysis techniques are then used to express both the voltage across the extra element test source's terminals, v, and the current leaving the extra element test source's positive terminal, i, and to calculate Zn(s) = v/i. Although the computation of Zn(s) is an unfamiliar process for many engineers, its expressions are often much simpler than those for Zd(s), because nulling the transfer function's output often forces other voltages and currents in the circuit to zero, which may allow certain components to be excluded from the analysis.
Special case with transfer function as a self-impedance
As a special case, the EET can be used to find the input impedance of a network with the addition of an element designated as "extra". In this case, Zd is the impedance seen by the extra element with the input test current source signal made zero, or equivalently with the input open-circuited. Likewise, since the transfer function output signal can be considered to be the voltage at the input terminals, Zn is found when the input voltage is zero, i.e. the input terminals are short-circuited. Thus, for this particular application, the EET can be written as:

Zin = Zin∞ (1 + Zn/Z) / (1 + Zd/Z)

where
Z is the impedance chosen as the extra element
Zin∞ is the input impedance with Z removed (or made infinite)
Zn is the impedance seen by the extra element Z with the input shorted (or made zero)
Zd is the impedance seen by the extra element Z with the input open (or made infinite)
Computing these three terms may seem like extra effort, but they are often easier to compute than the overall input impedance.
Example
Consider the problem of finding for the circuit in Figure 1 using the EET (note all component values are unity for simplicity). If the capacitor (gray shading) is denoted the extra element then
Removing this capacitor from the circuit,
Calculating the impedance seen by the capacitor with the input shorted,
Calculating the impedance seen by the capacitor with the input open,
Therefore, using the EET,
This problem was solved by calculating three simple driving point impedances by inspection.
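The input-impedance form of the theorem can also be verified symbolically. The SymPy sketch below uses a hypothetical network (not the article's Figure 1): a series resistor R1 feeding a node with R2 to ground, with a capacitor C across R2 taken as the extra element. The component names, the topology, and the helper parallel() are assumptions made for the example; the three driving point quantities Zin∞, Zn, and Zd are written down by inspection as described above, and the EET product is checked against the directly computed input impedance.

```python
import sympy as sp

s, R1, R2, C = sp.symbols('s R1 R2 C', positive=True)

def parallel(a, b):
    """Impedance of two elements in parallel."""
    return a * b / (a + b)

# Hypothetical example network: input -> R1 -> node; R2 from node to ground;
# extra element Z = 1/(sC) connected from the node to ground (across R2).
Z = 1 / (s * C)                       # impedance of the extra element

# Direct computation of the input impedance with the capacitor in place.
Z_in_direct = R1 + parallel(R2, Z)

# The three quantities required by the input-impedance form of the EET:
Z_in_inf = R1 + R2                    # input impedance with the extra element removed (open)
Z_n = parallel(R1, R2)                # seen by the extra element with the input shorted
Z_d = R2                              # seen by the extra element with the input open

Z_in_eet = Z_in_inf * (1 + Z_n / Z) / (1 + Z_d / Z)

# The difference simplifies to zero, confirming the theorem for this network.
print(sp.simplify(Z_in_direct - Z_in_eet))   # -> 0
```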
Feedback amplifiers
The EET is also useful for analyzing single and multi-loop feedback amplifiers. In this case, the EET can take the form of the asymptotic gain model.
See also
Asymptotic gain model
Blackman's theorem
Return ratio
Signal-flow graph
Further reading
Christophe Basso, Linear Circuit Transfer Functions: An Introduction to Fast Analytical Techniques, 1st edition, Wiley–IEEE Press, 2016, ISBN 978-1119236375.
References
External links
Examples of applying the EET
Derivation and examples
Fast Analytical Techniques at Work in Small-Signal Modeling
Circuit theorems | Extra element theorem | [
"Physics"
] | 1,179 | [
"Equations of physics",
"Circuit theorems",
"Physics theorems"
] |