The Poynting–Robertson effect, also known as Poynting–Robertson drag, named after John Henry Poynting and Howard P. Robertson, is a process by which solar radiation causes a dust grain orbiting a star to lose angular momentum relative to its orbit around the star. The drag arises from the component of radiation pressure tangential to the grain's motion.
This causes dust that is small enough to be affected by this drag, but too large to be blown away from the star by radiation pressure, to spiral slowly into the star. In the Solar System , this affects dust grains from about 1 μm to 1 mm in diameter. Larger dust is likely to collide with another object long before such drag can have an effect.
Poynting initially gave a description of the effect in 1903 based on the luminiferous aether theory, which was superseded by the theories of relativity in 1905–1915. In 1937 Robertson described the effect in terms of general relativity .
Robertson considered dust motion in a beam of radiation emanating from a point source. A. W. Guess later considered the problem for a spherical source of radiation and found that for particles far from the source the resultant forces are in agreement with those concluded by Poynting. [ 1 ]
The effect can be understood in two ways, depending on the reference frame chosen.
From the perspective of the grain of dust circling a star (panel (a) of the figure), the star's radiation appears to be coming from a slightly forward direction ( aberration of light ). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. The angle of aberration is extremely small since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that.
From the perspective of the star (panel (b) of the figure), the dust grain absorbs sunlight entirely in a radial direction, thus the grain's angular momentum is not affected by it. But the re-emission of photons, which is isotropic in the frame of the grain (a), is no longer isotropic in the frame of the star (b). This anisotropic emission causes the photons to carry away angular momentum from the dust grain.
The Poynting–Robertson drag acts in the opposite direction to the dust grain's orbital motion, leading to a drop in the grain's angular momentum. While the dust grain thus spirals slowly into the star, its orbital speed increases continuously.
The Poynting–Robertson force is equal to
F_{\mathrm{PR}} = \frac{v}{c^{2}}\,W = \frac{r^{2}}{4c^{2}}\sqrt{\frac{GM_{\odot}}{R^{5}}}\,L_{\odot}
where v is the grain's orbital velocity, c is the speed of light, W is the power of the incoming radiation, r the grain's radius, G is the universal gravitational constant, M ☉ the Sun's mass, L ☉ is the solar luminosity, and R the grain's orbital radius.
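The following is a rough numerical sketch of this formula (not from the source); the grain size, the assumption of a perfectly absorbing spherical grain on a circular Keplerian orbit, and the constants used are illustrative inputs only.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_sun = 1.989e30     # solar mass, kg
L_sun = 3.828e26     # solar luminosity, W
AU = 1.496e11        # astronomical unit, m

def pr_drag_force(r_grain, R_orbit):
    """Poynting-Robertson drag force F = v W / c^2 on a perfectly absorbing
    spherical grain of radius r_grain (m) on a circular orbit of radius R_orbit (m)."""
    W = L_sun / (4 * math.pi * R_orbit**2) * math.pi * r_grain**2  # intercepted radiative power
    v = math.sqrt(G * M_sun / R_orbit)                             # circular Keplerian speed
    return v * W / c**2

# Illustrative example: a 10 micrometre grain at 1 AU
print(f"F_PR ~ {pr_drag_force(10e-6, AU):.2e} N")
```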
The Poynting–Robertson effect is more pronounced for smaller objects. Gravitational force varies with mass, which is ∝ r³ (where r is the radius of the dust), while the power it receives and radiates varies with surface area (∝ r²). So for large objects the effect is negligible.
The effect is also stronger closer to the Sun. Gravity varies as R^−2 (where R is the radius of the orbit), whereas the Poynting–Robertson force varies as R^−2.5, so the effect also gets relatively stronger as the object approaches the Sun. This tends to reduce the eccentricity of the object's orbit in addition to dragging it in.
In addition, as the size of the particle increases, the surface temperature is no longer approximately constant, and the radiation pressure is no longer isotropic in the particle's reference frame. If the particle rotates slowly, the radiation pressure may contribute to the change in angular momentum, either positively or negatively.
Radiation pressure affects the effective force of gravity on the particle: it is felt more strongly by smaller particles, and blows very small particles away from the Sun. It is characterized by the dimensionless dust parameter β, the ratio of the force due to radiation pressure to the force of gravity on the particle:
\beta = \frac{F_{\mathrm{r}}}{F_{\mathrm{g}}} = \frac{3L_{\odot}\,Q_{\text{PR}}}{16\pi\,GM_{\odot}\,c\,\rho\,s}
where Q_PR is the Mie scattering coefficient, ρ is the density, and s is the size (the radius) of the dust grain. [ 2 ]
Particles with β ≥ 0.5 have radiation pressure at least half as strong as gravity and will pass out of the Solar System on hyperbolic orbits if their initial velocities were Keplerian. [ 3 ] For rocky dust particles, this corresponds to a diameter of less than 1 μm. [ 4 ]
Particles with 0.1 < β < 0.5 may spiral inwards or outwards, depending on their size and initial velocity vector; they tend to stay in eccentric orbits.
Particles with β ≈ 0.1 take around 10,000 years to spiral into the Sun from a circular orbit at 1 AU. In this regime, inspiraling time and particle diameter are both roughly ∝ 1/β. [ 5 ]
If the initial grain velocity was not Keplerian, then a circular or any other confined orbit is possible for β < 1.
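The following is a rough numerical sketch of these regimes (not from the source): it evaluates β from the expression given above for an assumed grain density of 3000 kg/m³ and Q_PR ≈ 1, finds the blow-out radius at which β = 0.5, and scales the quoted ~10,000-year inspiral time at β ≈ 0.1 using t ∝ 1/β.

```python
import math

G, c = 6.674e-11, 2.998e8            # SI units
M_sun, L_sun = 1.989e30, 3.828e26    # kg, W

def beta(s, rho=3000.0, q_pr=1.0):
    """Radiation-pressure to gravity ratio for a spherical grain of radius s (m),
    density rho (kg m^-3) and radiation-pressure efficiency q_pr (assumed ~1)."""
    return 3.0 * L_sun * q_pr / (16.0 * math.pi * G * M_sun * c * rho * s)

def inspiral_time_years(b, t_ref=1.0e4, beta_ref=0.1):
    """Order-of-magnitude inspiral time from a circular 1 AU orbit, scaled from
    the ~10,000 yr quoted for beta ~ 0.1 using t proportional to 1/beta."""
    return t_ref * beta_ref / b

# Blow-out radius: grain radius at which beta = 0.5 (assumed rho and q_pr)
s_blowout = 3.0 * L_sun / (16.0 * math.pi * G * M_sun * c * 3000.0 * 0.5)
print(f"blow-out diameter ~ {2 * s_blowout * 1e6:.2f} um")
for s in (1e-6, 1e-5, 1e-4):                 # grain radii in metres
    b = beta(s)
    print(f"s = {s*1e6:6.1f} um:  beta = {b:.3f},  inspiral time ~ {inspiral_time_years(b):,.0f} yr")
```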
It has been theorized that the slowing of the rotation of the Sun's outer layers may be caused by a similar effect. [ 6 ] [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Poynting–Robertson_effect |
The pozzolanic activity is a measure of the degree of reaction over time, or the reaction rate, between a pozzolan and Ca²⁺ or calcium hydroxide (Ca(OH)₂) in the presence of water. The rate of the pozzolanic reaction depends on the intrinsic characteristics of the pozzolan, such as the specific surface area, the chemical composition and the active phase content.
Physical surface adsorption is not considered as being part of the pozzolanic activity, because no irreversible molecular bonds are formed in the process. [ 1 ]
The pozzolanic reaction is the chemical reaction that occurs in portland cement upon the addition of pozzolans. It is the main reaction involved in the Roman concrete invented in Ancient Rome and used to build, for example, the Pantheon . The pozzolanic reaction converts a silica-rich precursor with no cementing properties, to a calcium silicate, with good cementing properties.
In chemical terms, the pozzolanic reaction occurs between calcium hydroxide, also known as portlandite (Ca(OH)₂), and silicic acid (written as H₄SiO₄, or Si(OH)₄, in the geochemical notation):
Ca(OH)₂ + H₄SiO₄ → CaH₂SiO₄ · 2 H₂O
or summarized in abbreviated cement chemist notation:
CH + SH → C-S-H
The pozzolanic reaction can also be written in the ancient industrial silicate notation as:
or even directly:
Both notations still coexist in the literature, depending on the research field considered. However, the more recent geochemical notation in which the Si atom is tetracoordinated by four hydroxyl groups ( Si(OH) 4 , also commonly noted H 4 SiO 4 ) is more correct than the ancient industrial silicate notation for which silicic acid ( H 2 SiO 3 ) was represented in the same way as carbonic acid ( H 2 CO 3 ) whose geometrical configuration is trigonal planar. When only considering mass balance, they are equivalent and both are used.
The product CaH₂SiO₄ · 2 H₂O is a calcium silicate hydrate, also abbreviated as C-S-H in cement chemist notation; the hyphenation denotes the variable stoichiometry. The atomic (or molar) ratio Ca/Si (CaO/SiO₂, or C/S) and the number of water molecules can vary, so the above-mentioned stoichiometry may differ.
Many pozzolans may also contain aluminate , or Al(OH) 4 − , that will react with calcium hydroxide and water to form calcium aluminate hydrates such as C 4 AH 13 , C 3 AH 6 or hydrogarnet , or in combination with silica C 2 ASH 8 or strätlingite ( cement chemist notation ). In the presence of anionic groups such as sulfate, carbonate or chloride, AFm phases and AFt or ettringite phases can form.
The pozzolanic reaction is a long-term reaction, which involves dissolved silicic acid, water and CaO or Ca(OH)₂ (or other pozzolans) and forms a strong cementing matrix. This process is often irreversible. A sufficient amount of free calcium ions and a high pH of 12 or above are needed to initiate and maintain the pozzolanic reaction. [ 2 ] This is because at a pH of around 12 the solubility of silicon and aluminium ions is high enough to support the pozzolanic reaction.
Prolonged grinding [ clarification needed ] results in an increased pozzolanic activity by creating a larger specific surface area available for reaction. Moreover, grinding also creates crystallographic defects at and below the particle surface. The dissolution rate of the strained or partially disconnected silicate moieties is strongly enhanced. Even materials which are commonly not regarded as behaving as pozzolans, such as quartz, can become reactive once ground below a certain critical particle diameter. [ 3 ]
The overall chemical composition of a pozzolan is considered one of the parameters governing the long-term performance (e.g. compressive strength) of the blended cement binder; ASTM C618, for example, prescribes that a pozzolan should contain SiO₂ + Al₂O₃ + Fe₂O₃ ≥ 70 wt.%. In the case of a (quasi) single-phase material such as blast-furnace slag, the overall chemical composition can be considered a meaningful parameter; for multi-phase materials, only a correlation between the pozzolanic activity and the chemistry of the active phases can be sought. [ 4 ]
Many pozzolans consist of a heterogeneous mixture of phases of differing pozzolanic activity. The content of reactive phases is therefore an important property determining the overall reactivity. In general, the pozzolanic activity of phases thermodynamically stable at ambient conditions is low when compared, on an equal specific surface area basis, to less thermodynamically stable phase assemblages. Volcanic ash deposits containing large amounts of volcanic glass or zeolites are more reactive than quartz sands or detrital clay minerals. In this respect, the thermodynamic driving force behind the pozzolanic reaction serves as a rough indicator of the potential reactivity of an (alumino-)silicate material. Similarly, materials showing structural disorder, such as glasses, show higher pozzolanic activities than crystalline, ordered compounds. [ 5 ]
The rate of the pozzolanic reaction can also be controlled by external factors such as the mix proportions, the amount of water or space available for the formation and growth of hydration products and the temperature of reaction. Therefore, typical blended cement mix design properties such as the replacement ratio of pozzolan for Portland cement, the water to binder ratio and the curing conditions strongly affect the reactivity of the added pozzolan.
Mechanical evaluation of the pozzolanic activity is based upon a comparison of the compressive strength of mortar bars containing pozzolans as a partial replacement for Portland cement to reference mortar bars containing only Portland cement as binder. The mortar bars are prepared, cast, cured and tested following a detailed set of prescriptions. Compressive strength testing is carried out at fixed moments, typically 3, 7, and 28 days after mortar preparation. A material is considered pozzolanically active when it contributes to the compressive strength, taking into account the effect of dilution. Most national and international technical standards or norms include variations of this methodology.
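As a simple illustration of this comparison (the term "strength activity index", the 75 % acceptance threshold and the strength values below are assumptions chosen for illustration, not values taken from the text):

```python
def strength_activity_index(test_strength_mpa, control_strength_mpa):
    """Compressive strength of the mortar with partial pozzolan replacement,
    expressed as a percentage of the plain Portland cement control mortar
    tested at the same age."""
    return 100.0 * test_strength_mpa / control_strength_mpa

# Hypothetical 28-day strengths (MPa) for a control mortar and a 20 % replacement mix
control, blended = 42.0, 34.5
sai = strength_activity_index(blended, control)
threshold = 75.0   # assumed acceptance threshold, for illustration only
verdict = "pozzolanically active" if sai >= threshold else "not pozzolanically active"
print(f"Strength activity index: {sai:.1f} % -> {verdict} (per the assumed threshold)")
```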
A pozzolanic material is by definition capable of binding calcium hydroxide in the presence of water. Therefore, the chemical measurement of this pozzolanic activity represents a way of evaluating pozzolanic materials. This can be done by directly measuring the amount of calcium hydroxide a pozzolan consumes over time. At high water to binder ratios (suspended solutions), this can be measured by titrimetry or by spectroscopic techniques. At lower water to binder ratios (pastes), thermal analysis or X-ray powder diffraction techniques are commonly used to determine remaining calcium hydroxide contents. Other direct methods have been developed that aim to directly measure the degree of reaction of the pozzolan itself. Here, selective dissolutions, X-ray powder diffraction or scanning electron microscopy image analysis methods have been used.
Indirect methods comprise, on the one hand, methods that investigate which material properties are responsible for the pozzolan's reactivity with portlandite. Material properties of interest are the (re)active silica and alumina content, the specific surface area and/or the reactive mineral and amorphous phases of the pozzolanic material. Other methods indirectly determine the extent of the pozzolanic activity by measuring an indicative physical property of the reacting system. Measurements of the electrical conductivity, chemical shrinkage of the pastes or the heat evolution by heat flow calorimetry fall into the latter category. | https://en.wikipedia.org/wiki/Pozzolanic_reaction |
In general relativity , the pp-wave spacetimes , or pp-waves for short, are an important family of exact solutions of Einstein's field equation . The term pp stands for plane-fronted waves with parallel propagation , and was introduced in 1962 by Jürgen Ehlers and Wolfgang Kundt .
The pp-wave solutions model radiation moving at the speed of light. This radiation may consist of electromagnetic radiation, gravitational radiation, or massless radiation of some other kind (such as neutrinos),
or any combination of these, so long as the radiation is all moving in the same direction.
A special type of pp-wave spacetime, the plane wave spacetimes, provides the most general analogue in general relativity of the plane waves familiar to students of electromagnetism.
In particular, in general relativity, we must take into account the gravitational effects of the energy density of the electromagnetic field itself. When we do this, purely electromagnetic plane waves provide the direct generalization of ordinary plane wave solutions in Maxwell's theory .
Furthermore, in general relativity, disturbances in the gravitational field itself can propagate, at the speed of light, as "wrinkles" in the curvature of spacetime. Such gravitational radiation is the gravitational field analogue of electromagnetic radiation.
In general relativity, the gravitational analogue of electromagnetic plane waves are precisely the vacuum solutions among the plane wave spacetimes.
They are called gravitational plane waves .
There are physically important examples of pp-wave spacetimes which are not plane wave spacetimes.
In particular, the physical experience of an observer who whizzes by a gravitating object (such as a star or a black hole) at nearly the speed of light can be modelled by an impulsive pp-wave spacetime called the Aichelburg–Sexl ultraboost .
The gravitational field of a beam of light is modelled, in general relativity, by a certain axi-symmetric pp-wave.
An example of a pp-wave arising when gravity is in the presence of matter is the gravitational field surrounding a neutral Weyl fermion: the system consists of a gravitational field that is a pp-wave, no electrodynamic radiation, and a massless spinor exhibiting axial symmetry. In the Weyl–Lewis–Papapetrou spacetime, there exists a complete set of exact solutions for both gravity and matter. [ 1 ]
Pp-waves were introduced by Hans Brinkmann in 1925 and have been rediscovered many times since, most notably by Albert Einstein and Nathan Rosen in 1937. Research on them continues.
A pp-wave spacetime is any Lorentzian manifold whose metric tensor can be described, with respect to Brinkmann coordinates, in the form
ds^{2} = H(u,x,y)\,du^{2} + 2\,du\,dv + dx^{2} + dy^{2}
where H is any smooth function. This was the original definition of Brinkmann, and it has the virtue of being easy to understand.
The definition which is now standard in the literature is more sophisticated.
It makes no reference to any coordinate chart, so it is a coordinate-free definition.
It states that any Lorentzian manifold which admits a covariantly constant null vector field k is called a pp-wave spacetime. That is, the covariant derivative of k must vanish identically:
\nabla k = 0
This definition was introduced by Ehlers and Kundt in 1962. To relate Brinkmann's definition to this one, take k = ∂_v, the coordinate vector field orthogonal to the hypersurfaces v = v₀. In the index-gymnastics notation for tensor equations, the condition on k can be written k_{a;b} = 0.
Neither of these definitions makes any mention of any field equation; in fact, they are entirely independent of physics. The vacuum Einstein equations are very simple for pp-waves, and in fact linear: the metric ds² = H(u,x,y) du² + 2 du dv + dx² + dy² obeys these equations if and only if H_xx + H_yy = 0. But the definition of a pp-wave spacetime does not impose this equation, so it is entirely mathematical and belongs to the study of pseudo-Riemannian geometry. In the next section we turn to physical interpretations of pp-wave spacetimes.
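A short symbolic check of this vacuum condition can be made with SymPy; this is a sketch (not from the source), and the two quadratic profile functions below are arbitrary illustrative choices.

```python
import sympy as sp

u, x, y = sp.symbols('u x y', real=True)

def is_vacuum_profile(H):
    """Vacuum condition for the Brinkmann metric
    ds^2 = H du^2 + 2 du dv + dx^2 + dy^2, namely H_xx + H_yy = 0."""
    return sp.simplify(sp.diff(H, x, 2) + sp.diff(H, y, 2)) == 0

# Two illustrative harmonic profiles (the "+" and "x" quadratic polarizations)
H1 = sp.Function('a')(u) * (x**2 - y**2)
H2 = sp.Function('b')(u) * 2 * x * y

print(is_vacuum_profile(H1), is_vacuum_profile(H2))   # True True
# Linearity of the vacuum condition: a superposition is again a vacuum profile
print(is_vacuum_profile(H1 + H2))                     # True
```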
Ehlers and Kundt gave several more coordinate-free characterizations, including:
It is a purely mathematical fact that the characteristic polynomial of the Einstein tensor of any pp-wave spacetime vanishes identically. Equivalently, we can find a Newman–Penrose complex null tetrad such that the Ricci-NP scalars Φ i j {\displaystyle \Phi _{ij}} (describing any matter or nongravitational fields which may be present in a spacetime) and the Weyl-NP scalars Ψ i {\displaystyle \Psi _{i}} (describing any gravitational field which may be present) each have only one nonvanishing component.
Specifically, with respect to the NP tetrad
the only nonvanishing component of the Ricci spinor is
and the only nonvanishing component of the Weyl spinor is
This means that any pp-wave spacetime can be interpreted, in the context of general relativity,
as a null dust solution . Also, the Weyl tensor always has Petrov type N as may be verified by using the Bel criteria .
In other words, pp-waves model various kinds of classical and massless radiation traveling at the local speed of light . This radiation can be gravitational, electromagnetic, Weyl fermions, or some hypothetical kind of massless radiation other than these three, or any combination of these. All this radiation is traveling in the same direction, and the null vector k = ∂ v {\displaystyle k=\partial _{v}} plays the role of a wave vector .
Unfortunately, the terminology concerning pp-waves, while fairly standard, is highly confusing and tends to promote misunderstanding.
In any pp-wave spacetime, the covariantly constant vector field k {\displaystyle k} always has identically vanishing optical scalars . Therefore, pp-waves belong to the Kundt class (the class of Lorentzian manifolds admitting a null congruence with vanishing optical scalars).
Going in the other direction, pp-waves include several important special cases.
From the form of the Ricci spinor given in the preceding section, it is immediately apparent that a pp-wave spacetime (written in the Brinkmann chart) is a vacuum solution if and only if H is a harmonic function (with respect to the spatial coordinates x, y). Physically, these represent purely gravitational radiation propagating along the null rays ∂_v.
Ehlers and Kundt and Sippel and Gönner have classified vacuum pp-wave spacetimes by their autometry group , or group of self-isometries . This is always a Lie group , and as usual it is easier to classify the underlying Lie algebras of Killing vector fields . It turns out that the most general pp-wave spacetime has only one Killing vector field, the null geodesic congruence k = ∂ v {\displaystyle k=\partial _{v}} . However, for various special forms of H {\displaystyle H} , there are additional Killing vector fields.
The most important class of particularly symmetric pp-waves are the plane wave spacetimes , which were first studied by Baldwin and Jeffery.
A plane wave is a pp-wave in which H is quadratic in the transverse coordinates, and can hence be transformed to the simple form
H(u,x,y) = a(u)\,(x^{2}-y^{2}) + 2b(u)\,xy + c(u)\,(x^{2}+y^{2})
Here, a , b , c {\displaystyle a,b,c} are arbitrary smooth functions of u {\displaystyle u} .
Physically speaking, a , b {\displaystyle a,b} describe the wave profiles of the two linearly independent polarization modes of gravitational radiation which may be present, while c {\displaystyle c} describes the wave profile of any nongravitational radiation.
If c = 0 {\displaystyle c=0} , we have the vacuum plane waves, which are often called plane gravitational waves .
Equivalently, a plane-wave is a pp-wave with at least a five-dimensional Lie algebra of Killing vector fields X {\displaystyle X} , including X = ∂ v {\displaystyle X=\partial _{v}} and four more which have the form
where
Intuitively, the distinction is that the wavefronts of plane waves are truly planar; all points on a given two-dimensional wavefront are equivalent. This is not quite true for more general pp-waves.
Plane waves are important for many reasons; to mention just one, they are essential for the beautiful topic of colliding plane waves .
A more general subclass consists of the axisymmetric pp-waves , which in general have a two-dimensional Abelian Lie algebra of Killing vector fields.
These are also called SG2 plane waves , because they are the second type in the symmetry classification of Sippel and Gönner.
A limiting case of certain axisymmetric pp-waves yields the Aichelburg/Sexl ultraboost modeling an ultrarelativistic encounter with an isolated spherically symmetric object.
(See also the article on plane wave spacetimes for a discussion of physically important special cases of plane waves.)
J. D. Steele has introduced the notion of generalised pp-wave spacetimes .
These are nonflat Lorentzian spacetimes which admit a self-dual covariantly constant null bivector field.
The name is potentially misleading, since as Steele points out, these are nominally a special case of nonflat pp-waves in the sense defined above. They are only a generalization in the sense that although the Brinkmann metric form is preserved, they are not necessarily the vacuum solutions studied by Ehlers and Kundt, Sippel and Gönner, etc.
Another important special class of pp-waves are the sandwich waves . These have vanishing curvature except on some range u 1 < u < u 2 {\displaystyle u_{1}<u<u_{2}} , and represent a gravitational wave moving through a Minkowski spacetime background.
Since they constitute a very simple and natural class of Lorentzian manifolds, defined in terms of a null congruence, it is not very surprising that they are also important in other relativistic classical field theories of gravitation . In particular, pp-waves are exact solutions in the Brans–Dicke theory ,
various higher curvature theories and Kaluza–Klein theories , and certain gravitation theories of J. W. Moffat .
Indeed, B. O. J. Tupper has shown that the common vacuum solutions in general relativity and in the Brans/Dicke theory are precisely the vacuum pp-waves (but the Brans/Dicke theory admits further wavelike solutions). Hans-Jürgen Schmidt has reformulated the theory of (four-dimensional) pp-waves in terms of a two-dimensional metric-dilaton theory of gravity.
Pp-waves also play an important role in the search for quantum gravity , because as Gary Gibbons has pointed out, all loop term quantum corrections vanish identically for any pp-wave spacetime. This means that studying tree-level quantizations of pp-wave spacetimes offers a glimpse into the yet unknown world of quantum gravity.
It is natural to generalize pp-waves to higher dimensions, where they enjoy similar properties to those we have discussed. C. M. Hull has shown that such higher-dimensional pp-waves are essential building blocks for eleven-dimensional supergravity .
PP-waves enjoy numerous striking properties. Some of their more abstract mathematical properties have already been mentioned. In this section a few additional properties are presented.
Consider an inertial observer in Minkowski spacetime who encounters a sandwich plane wave. Such an observer will experience some interesting optical effects. If he looks into the oncoming wavefronts at distant galaxies which have already encountered the wave, he will see their images undistorted. This must be the case, since he cannot know the wave is coming until it reaches his location, for it is traveling at the speed of light. However, this can be confirmed by direct computation of the optical scalars of the null congruence ∂ v {\displaystyle \partial _{v}} . Now suppose that after the wave passes, our observer turns about face and looks through the departing wavefronts at distant galaxies which the wave has not yet reached. Now he sees their optical images sheared and magnified (or demagnified) in a time-dependent manner. If the wave happens to be a polarized gravitational plane wave , he will see circular images alternately squeezed horizontally while expanded vertically, and squeezed vertically while expanded horizontally. This directly exhibits the characteristic effect of a gravitational wave in general relativity on light.
The effect of a passing polarized gravitational plane wave on the relative positions of a cloud of (initially static) test particles will be qualitatively very similar. We might mention here that in general, the motion of test particles in pp-wave spacetimes can exhibit chaos .
The fact that Einstein's field equation is nonlinear is well known. This implies that if you have two exact solutions, there is almost never any way to linearly superimpose them. PP waves provide a rare exception to this rule:
if you have two PP waves sharing the same covariantly constant null vector (the same geodesic null congruence, i.e. the same wave vector field), with metric functions H₁ and H₂ respectively, then H₁ + H₂ gives a third exact solution.
Roger Penrose has observed that near a null geodesic, every Lorentzian spacetime looks like a plane wave . To show this, he used techniques imported from algebraic geometry to "blow up" the spacetime so that the given null geodesic becomes the covariantly constant null geodesic congruence of a plane wave. This construction is called a Penrose limit .
Penrose also pointed out that in a pp-wave spacetime, all the polynomial scalar invariants of the Riemann tensor vanish identically, yet the curvature is almost never zero. This is because in four dimensions all pp-waves belong to the class of VSI spacetimes. This statement does not hold in higher dimensions, since there are higher-dimensional pp-waves of algebraic type II with non-vanishing polynomial scalar invariants. If you view the Riemann tensor as a second rank tensor acting on bivectors, the vanishing of invariants is analogous to the fact that a nonzero null vector has vanishing squared length.
Penrose was also the first to understand the strange nature of causality in pp-sandwich wave spacetimes. He showed that some or all of the null geodesics emitted at a given event will be refocused at a later event (or string of events). The details depend upon whether the wave is purely gravitational, purely electromagnetic, or neither.
Every pp-wave admits many different Brinkmann charts. These are related by coordinate transformations , which in this context may be considered to be gauge transformations . In the case of plane waves, these gauge transformations allow us to always regard two colliding plane waves to have parallel wavefronts , and thus the waves can be said to collide head-on .
This is an exact result in fully nonlinear general relativity which is analogous to a similar result concerning electromagnetic plane waves as treated in special relativity .
There are many noteworthy explicit examples of pp-waves.
("Explicit" means that the metric functions can be written down in terms of elementary functions or perhaps well-known special functions such as Mathieu functions .)
Explicit examples of axisymmetric pp-waves include
Explicit examples of plane wave spacetimes include
| https://en.wikipedia.org/wiki/Pp-wave_spacetime |
PDB structures: 1XB7, 3B1M, 3CS8, 3D24, 3U9Q, 3V9T, 3V9V, 4QJR, 4QK4. Gene and protein identifiers (human / mouse): Entrez 10891 / 19017; Ensembl ENSG00000109819 / ENSMUSG00000029167; UniProt Q9UBK2 / O70343; RefSeq (mRNA) NM_013261, NM_001330751, NM_001330752, NM_001330753 / NM_008904; RefSeq (protein) NP_001341755, NP_001341756, NP_001341757 / NP_001389920.
Peroxisome proliferator-activated receptor gamma coactivator 1-alpha ( PGC-1α ) is a protein that in humans is encoded by the PPARGC1A gene . [ 4 ] PPARGC1A is also known as human accelerated region 20 ( HAR20 ). It may, therefore, have played a key role in differentiating humans from apes. [ 5 ]
PGC-1α is the master regulator of mitochondrial biogenesis . [ 6 ] [ 7 ] [ 8 ] PGC-1α is also the primary regulator of liver gluconeogenesis , inducing increased gene expression for gluconeogenesis. [ 9 ]
PGC-1α is encoded by a gene that contains two promoters and gives rise to four alternative splice variants. PGC-1α is a transcriptional coactivator that regulates the genes involved in energy metabolism. It is the master regulator of mitochondrial biogenesis. [ 6 ] [ 7 ] [ 8 ] This protein interacts with the nuclear receptor PPAR-γ, which permits the interaction of this protein with multiple transcription factors. This protein can interact with, and regulate the activity of, cAMP response element-binding protein (CREB) and nuclear respiratory factors (NRFs) [ citation needed ]. PGC-1α provides a direct link between external physiological stimuli and the regulation of mitochondrial biogenesis, and is a major factor causing slow-twitch rather than fast-twitch muscle fiber types. [ 10 ]
Endurance exercise has been shown to activate the PGC-1α gene in human skeletal muscle. [ 11 ] Exercise-induced PGC-1α in skeletal muscle increases autophagy [ 12 ] [ 13 ] and unfolded protein response . [ 14 ]
PGC-1α protein may also be involved in controlling blood pressure , regulating cellular cholesterol homeostasis , and the development of obesity . [ 15 ]
PGC-1α is thought to be a master integrator of external signals. It is known to be activated by a host of factors, including:
PGC-1α has been shown to exert positive feedback circuits on some of its upstream regulators:
Akt and calcineurin are both activators of NF-kappa-B (p65). [ 23 ] [ 24 ] Through their activation, PGC-1α seems to activate NF-kappa-B. Increased activity of NF-kappa-B in muscle has recently been demonstrated following induction of PGC-1α. [ 25 ] The finding seems to be controversial. Other groups found that PGC-1s inhibit NF-kappa-B activity. [ 26 ] The effect was demonstrated for PGC-1 alpha and beta.
PGC-1α has also been shown to drive NAD biosynthesis to play a large role in renal protection in acute kidney injury . [ 27 ]
PPARGC1A has been implicated as a potential therapy for Parkinson's disease conferring protective effects on mitochondrial metabolism. [ 28 ]
Moreover, brain-specific isoforms of PGC-1alpha have recently been identified which are likely to play a role in other neurodegenerative disorders such as Huntington's disease and amyotrophic lateral sclerosis . [ 29 ] [ 30 ]
Massage therapy appears to increase the amount of PGC-1α, which leads to the production of new mitochondria. [ 31 ] [ 32 ] [ 33 ]
PGC-1α and PGC-1β have furthermore been implicated in the polarization of macrophages to the anti-inflammatory M2 phenotype by interaction with PPAR-γ, [ 34 ] with upstream activation of STAT6. [ 35 ] An independent study confirmed the effect of PGC-1 on the polarisation of macrophages towards M2 via STAT6/PPAR-γ and furthermore demonstrated that PGC-1 inhibits proinflammatory cytokine production. [ 36 ]
PGC-1α has been recently proposed to be responsible for β-aminoisobutyric acid secretion by exercising muscles. [ 37 ] The effect of β-aminoisobutyric acid in white fat includes the activation of thermogenic genes that prompt the browning of white adipose tissue and the consequent increase of background metabolism. Hence, the β-aminoisobutyric acid could act as a messenger molecule of PGC-1α and explain the effects of PGC-1α increase in other tissues such as white fat.
PGC-1α increases BNP expression by coactivating Estrogen-related receptor alpha (ERRα) and / or AP1. Subsequently, BNP induces a chemokine cocktail in muscle fibers and activates macrophages in a local paracrine manner, which can then contribute to enhancing the repair and regeneration potential of trained muscles.
Most studies reporting effects of PGC-1α on physiological functions have used mouse models in which the PGC-1α gene is either knocked out or overexpressed from conception. However, some of the proposed effects of PGC-1α have been questioned by studies using inducible knockout technology to remove the PGC-1α gene only in adult mice. For example, two independent studies have shown that adult expression of PGC-1α is not required for improved mitochondrial function after exercise training. [ 38 ] [ 39 ] This suggests that some of the reported effects of PGC-1α are likely to occur only in the developmental stage.
In the metabolic disorder of combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, there is a massively increased expression of PGC-1α, which is consistent with upregulated beta oxidation . [ 40 ]
PPARGC1A has been shown to interact with:
ERRα and PGC-1α are coactivators of both glucokinase (GK) and SIRT3 , binding to an ERRE element in the GK and SIRT3 promoters. [ citation needed ]
This article incorporates text from the United States National Library of Medicine , which is in the public domain . | https://en.wikipedia.org/wiki/Pparg_coactivator_1_alpha |
Praseodymium(III) oxide , praseodymium oxide or praseodymia is the chemical compound composed of praseodymium and oxygen with the formula Pr 2 O 3 . It forms light green hexagonal crystals. [ 1 ] Praseodymium(III) oxide crystallizes in the manganese(III) oxide or bixbyite structure. [ 2 ]
Praseodymium(III) oxide can be used as a dielectric in combination with silicon . [ 2 ] Praseodymium-doped glass, called didymium glass , turns yellow and is used in welding goggles because it blocks infrared radiation. Praseodymium(III) oxide is also used to color glass and ceramics yellow. [ 3 ] For coloring ceramics, also the very dark brown mixed-valence compound praseodymium(III,IV) oxide, Pr 6 O 11 , is used. | https://en.wikipedia.org/wiki/Pr2O3 |
Praseodymium tetraboride is a binary inorganic compound of praseodymium and boron with the chemical formula PrB 4 .
Praseodymium tetraboride can be prepared by directly reacting the elements at 2350 °C:
Pr + 4 B → PrB₄
Praseodymium tetraboride forms crystals of tetragonal system, space group P4/mbm , cell parameters a = 0.7242 nm, c = 0.4119 nm, Z = 4, structure like thorium tetraboride . [ 1 ] [ 2 ]
The compound is formed by a peritectic reaction and melts at 2350 °C. [ 1 ]
At a temperature of 19.5 K, the compound undergoes a transition to an antiferromagnetic state, and at a temperature of 15.9 K, to a ferromagnetic state. [ 3 ] | https://en.wikipedia.org/wiki/PrB4 |
Praseodymium hexaboride is a binary inorganic compound of praseodymium and boron with the formula PrB 6 . It forms black crystals that are insoluble in water.
Praseodymium hexaboride can be prepared from the reaction of stoichiometric quantities of praseodymium and boron:
Pr + 6 B → PrB₆
Praseodymium hexaboride forms black crystals of the cubic crystal system , with space group Pm 3 m , cell parameters a = 0.4129 nm, Z = 1, and structure isotypical with calcium hexaboride . [ 2 ] The compound melts congruently at 2610 °C. [ 3 ] At temperatures below 7 K, a magnetic transition to an antiferromagnetic state occurs in the compound. [ 4 ] [ 5 ] [ 6 ] It does not dissolve in water.
Praseodymium hexaboride is used as a component of alloys for cathodes of high-power electronic devices. | https://en.wikipedia.org/wiki/PrB6 |
Praseodymium(III) chloride is the inorganic compound with the formula Pr Cl 3 . Like other lanthanide trichlorides , it exists both in the anhydrous and hydrated forms. It is a blue-green solid that rapidly absorbs water on exposure to moist air to form a light green hepta hydrate .
Praseodymium(III) chloride is prepared by treating praseodymium metal with hydrogen chloride: [ 1 ] [ 2 ]
2 Pr + 6 HCl → 2 PrCl₃ + 3 H₂
It is usually purified by vacuum sublimation. [ 3 ]
Hydrated salts of praseodymium(III) chloride can be prepared by treatment of either praseodymium metal or praseodymium(III) carbonate with hydrochloric acid:
2 Pr + 6 HCl + 14 H₂O → 2 PrCl₃·7H₂O + 3 H₂
Pr₂(CO₃)₃ + 6 HCl + 11 H₂O → 2 PrCl₃·7H₂O + 3 CO₂
PrCl₃·7H₂O is a hygroscopic substance that will not crystallize from the mother liquor unless it is left to dry in a desiccator. Anhydrous PrCl₃ can be made by thermal dehydration of the hydrate at 400 °C in the presence of ammonium chloride, the so-called ammonium chloride route. [ 3 ] [ 4 ] [ 5 ] Alternatively, the hydrate can be dehydrated using thionyl chloride. [ 3 ] [ 6 ]
Praseodymium(III) chloride is Lewis acidic , classified as "hard" according to the HSAB concept . Rapid heating of the hydrate may cause small amounts of hydrolysis . [ 3 ] PrCl 3 forms a stable Lewis acid-base complex K 2 PrCl 5 by reaction with potassium chloride ; this compound shows interesting optical and magnetic properties. [ 1 ]
Aqueous solutions of praseodymium(III) chloride can be used to prepare insoluble praseodymium(III) compounds. For example, praseodymium(III) phosphate and praseodymium(III) fluoride can be prepared by reaction with potassium phosphate and sodium fluoride, respectively:
PrCl₃ + K₃PO₄ → PrPO₄ + 3 KCl
PrCl₃ + 3 NaF → PrF₃ + 3 NaCl
When heated with alkali metal chlorides, it forms a series of ternary (compounds containing three different elements) materials with the formulae MPr 2 Cl 7 , M 3 PrCl 6 , M 2 PrCl 5 , and M 3 Pr 2 Cl 9 where M = K, Rb, Cs. [ 7 ] | https://en.wikipedia.org/wiki/PrCl3 |
Praseodymium(III) fluoride is an inorganic compound with the formula PrF 3 , being the most stable fluoride of praseodymium .
The reaction between praseodymium(III) nitrate and sodium fluoride yields praseodymium(III) fluoride as a green crystalline solid: [ 3 ]
Pr(NO₃)₃ + 3 NaF → PrF₃ + 3 NaNO₃
There are also literature reports on the reaction between chlorine trifluoride and various oxides of praseodymium (Pr₂O₃, Pr₆O₁₁ and PrO₂), in which praseodymium(III) fluoride is the only product. The reaction between bromine trifluoride and praseodymium oxide left in the air for a period of time also produces praseodymium(III) fluoride, but the reaction is incomplete; the reaction between praseodymium(III) oxalate hydrate and bromine trifluoride yields praseodymium(III) fluoride, with carbon also produced in this reaction. [ 4 ] Praseodymium(III) fluoride can also be obtained by reacting praseodymium oxide and sulfur hexafluoride at 584 °C. [ 5 ]
Praseodymium(III) fluoride forms pale green crystals of trigonal system [ 6 ] (or hexagonal system [ 7 ] ), space group P 3c1, [ 6 ] (or P 6/mcm [ 7 ] ), cell parameters a = 0.7078 nm, c = 0.7239 nm, Z = 6, structure like cerium(III) fluoride (CeF 3 ).
Praseodymium(III) fluoride is a green, odourless, hygroscopic solid that is insoluble in water. [ 8 ]
Praseodymium(III) fluoride is used as a doping material for laser crystals. [ 9 ] | https://en.wikipedia.org/wiki/PrF3 |
Praseodymium diiodide is a chemical compound with the empirical formula of PrI 2 , consisting of praseodymium and iodine . It is an electride , with the ionic formula of Pr 3+ (I − ) 2 e − , [ 2 ] and therefore not a true praseodymium(II) compound.
Praseodymium diiodide can be obtained by reacting praseodymium(III) iodide with metallic praseodymium at 800 °C to 900 °C in an inert atmosphere: [ 3 ]
2 PrI₃ + Pr → 3 PrI₂
It can also be obtained by reacting praseodymium with mercury(II) iodide, where praseodymium displaces mercury: [ 3 ]
Pr + HgI₂ → PrI₂ + Hg
Praseodymium diiodide was first obtained by John D. Corbett in 1961. [ 4 ]
Praseodymium diiodide is an opaque, bronze-coloured solid with a metallic lustre that is soluble in water. [ 3 ] The lustre and very high conductivity can be explained by the formulation {Pr III ,2I − ,e − }, with one electron per metal centre delocalised in a conduction band. [ 2 ]
The compound is extremely hygroscopic , and can only be stored and handled under carefully dried inert gas or under a high vacuum. [ citation needed ] In air it converts into hydrates by absorbing moisture, but these are unstable and more or less rapidly transform into oxide iodides with the evolution of hydrogen: [ citation needed ]
With water , these processes take place much faster. [ 3 ]
Praseodymium diiodide has five crystal structures, namely the MoSi₂ structure, the hexagonal MoS₂ structure, the trigonal MoS₂ structure, the cadmium chloride structure and the spinel structure. [ 5 ] Praseodymium diiodide with the cadmium chloride structure belongs to the trigonal crystal system, with the space group R-3m (No. 166) and lattice parameters a = 426.5 pm and c = 2247.1 pm; the spinel form of praseodymium diiodide, however, is cubic, [ 6 ] with space group F-43m (No. 216) and lattice parameter a = 1239.9 pm. [ 7 ] | https://en.wikipedia.org/wiki/PrI2 |
Praseodymium(III) phosphate is an inorganic compound with the chemical formula PrPO 4 .
Praseodymium(III) phosphate hemihydrate can be obtained by reacting praseodymium chloride and phosphoric acid: [ 2 ]
2 PrCl₃ + 2 H₃PO₄ + H₂O → 2 PrPO₄·0.5H₂O + 6 HCl
It can also be produced by reacting silicon pyrophosphate (SiP 2 O 7 ) and praseodymium(III,IV) oxide (Pr 6 O 11 ) at 1200 °C. [ 3 ]
Praseodymium(III) phosphate forms light green crystals in the monoclinic crystal system , with space group P2 1 /n and cell parameters a = 0.676 nm, b = 0.695 nm, c = 0.641 nm, β = 103.25°, Z = 4. [ 4 ] [ 5 ]
It forms a crystal hydrate of the composition PrPO 4 · n H 2 O, where n < 0.5, with light green crystals of hexagonal crystal system , space group P6 2 22 , and cell parameters a = 0.700 nm, c = 0.643 nm, Z = 3. [ 6 ] [ 7 ]
Praseodymium(III) phosphate reacts with sodium fluoride to obtain Na 2 PrF 2 (PO 4 ). [ 8 ] | https://en.wikipedia.org/wiki/PrPO4 |
pr is a command on various operating systems that is used to paginate or columnate computer files for printing. It can also be used to compare two files side by side, as an alternative to diff .
It is a required program in a POSIX -compliant environment and has been implemented by GNU as part of the GNU Core Utilities . The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. [ 1 ] It is also available in the OS-9 shell. [ 2 ] The pr command has also been ported to the IBM i operating system. [ 3 ]
| https://en.wikipedia.org/wiki/Pr_(Unix) |
Prabhakar function is a certain special function in mathematics introduced by the Indian mathematician Tilak Raj Prabhakar in a paper published in 1971. [ 1 ] The function is a three-parameter generalization of the well known two-parameter Mittag-Leffler function in mathematics. The function was originally introduced to solve certain classes of integral equations . Later the function was found to have applications in the theory of fractional calculus and also in certain areas of physics. [ 2 ]
The one-parameter and two-parameter Mittag-Leffler functions are defined first. Then the definition of the three-parameter Mittag-Leffler function, the Prabhakar function, is presented. In the following definitions, Γ(z) is the well-known gamma function defined by
\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\,dt, \qquad \operatorname{Re}(z) > 0.
In the following it will be assumed that α, β and γ are all complex numbers.
The one-parameter Mittag-Leffler function is defined as [ 3 ]
E_{\alpha}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + 1)}.
The two-parameter Mittag-Leffler function is defined as [ 4 ]
E_{\alpha,\beta}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + \beta)}.
The three-parameter Mittag-Leffler function (Prabhakar function) is defined by [ 1 ] [ 5 ] [ 6 ]
E_{\alpha,\beta}^{\gamma}(z) = \sum_{n=0}^{\infty} \frac{(\gamma)_{n}}{n!\,\Gamma(\alpha n + \beta)}\, z^{n},
where (γ)ₙ = γ(γ+1)⋯(γ+n−1) is the Pochhammer symbol (rising factorial).
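A direct numerical sketch of this series follows (an illustration, not a reference implementation); the simple truncation used here is adequate only for moderate |z|.

```python
import math
from scipy.special import gamma, poch   # poch(g, n) is the Pochhammer symbol (g)_n

def prabhakar_E(z, alpha, beta, gam, max_terms=100, tol=1e-14):
    """Three-parameter Mittag-Leffler (Prabhakar) function E^gam_{alpha,beta}(z)
    evaluated by truncating the defining series; suitable only for moderate |z|."""
    total = 0.0
    for n in range(max_terms):
        term = poch(gam, n) * z**n / (math.factorial(n) * gamma(alpha * n + beta))
        total += term
        if abs(term) < tol:
            break
    return total

# Sanity check: E^1_{1,1}(z) reduces to exp(z)
print(prabhakar_E(0.7, 1.0, 1.0, 1.0), math.exp(0.7))
```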
The following special cases immediately follow from the definition: [ 2 ]
E_{\alpha,\beta}^{1}(z) = E_{\alpha,\beta}(z), \qquad E_{\alpha,1}^{1}(z) = E_{\alpha}(z), \qquad E_{1,1}^{1}(z) = e^{z}.
The following reduction formula can be used to lower the value of the third parameter γ. [ 2 ]
The Prabhakar function is related to the Fox–Wright function by the following relation:
E_{\alpha,\beta}^{\gamma}(z) = \frac{1}{\Gamma(\gamma)}\; {}_{1}\Psi_{1}\!\left[{(\gamma,1) \atop (\beta,\alpha)}\,;\, z\right].
The derivative of the Prabhakar function is given by
\frac{d}{dz} E_{\alpha,\beta}^{\gamma}(z) = \gamma\, E_{\alpha,\beta+\alpha}^{\gamma+1}(z).
There is a general expression for higher order derivatives. Let m be a positive integer. The m-th derivative of the Prabhakar function is given by
\frac{d^{m}}{dz^{m}} E_{\alpha,\beta}^{\gamma}(z) = (\gamma)_{m}\, E_{\alpha,\beta+m\alpha}^{\gamma+m}(z).
The following result is useful in applications.
The following result involving Prabhakar function is known.
The following result involving Laplace transforms plays an important role in both physical applications and numerical computations of the Prabhakar function:
\int_{0}^{\infty} e^{-st}\, t^{\beta-1} E_{\alpha,\beta}^{\gamma}(\lambda t^{\alpha})\, dt = \frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}-\lambda)^{\gamma}}, \qquad \operatorname{Re}(s) > 0,\ |\lambda s^{-\alpha}| < 1.
The following function is known as the Prabhakar kernel in the literature: [ 2 ]
e_{\alpha,\beta}^{\gamma}(\lambda;t) = t^{\beta-1} E_{\alpha,\beta}^{\gamma}(\lambda t^{\alpha}), \qquad t > 0.
Given any function f(t), the convolution of the Prabhakar kernel and f(t) is called the Prabhakar fractional integral:
\left(\mathbf{E}_{\alpha,\beta,\lambda}^{\gamma}\,f\right)(t) = \int_{0}^{t} (t-\tau)^{\beta-1} E_{\alpha,\beta}^{\gamma}\!\big(\lambda(t-\tau)^{\alpha}\big)\, f(\tau)\, d\tau.
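A quadrature-based sketch of this integral is shown below; it is an illustration with arbitrary parameter values, not a production implementation.

```python
import math
from scipy.integrate import quad
from scipy.special import gamma, poch

def prabhakar_E(z, alpha, beta, gam, max_terms=100, tol=1e-14):
    """Truncated series for the Prabhakar function E^gam_{alpha,beta}(z) (moderate |z| only)."""
    total = 0.0
    for n in range(max_terms):
        term = poch(gam, n) * z**n / (math.factorial(n) * gamma(alpha * n + beta))
        total += term
        if abs(term) < tol:
            break
    return total

def prabhakar_integral(f, t, alpha, beta, gam, lam):
    """Prabhakar fractional integral of f at t: the convolution of f with the
    Prabhakar kernel (t - tau)^(beta - 1) * E^gam_{alpha,beta}(lam * (t - tau)^alpha)."""
    integrand = lambda tau: ((t - tau)**(beta - 1)
                             * prabhakar_E(lam * (t - tau)**alpha, alpha, beta, gam)
                             * f(tau))
    value, _ = quad(integrand, 0.0, t)
    return value

# Illustrative example with arbitrary parameters: f = 1, alpha = 0.8, beta = 1.2, gam = 0.9, lam = -0.5
print(prabhakar_integral(lambda tau: 1.0, 1.0, 0.8, 1.2, 0.9, -0.5))
```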
Properties of the Prabhakar fractional integral have been extensively studied in the literature. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Prabhakar_function |
Practical Action (previously known as the Intermediate Technology Development Group , ITDG ) is a development charity registered in the United Kingdom [ 1 ] which works directly in four regions of the developing world – Latin America , East Africa , Southern Africa and South Asia , with particular concentration on Peru , Bolivia , Kenya , Sudan , Zimbabwe , Bangladesh and Nepal .
In these countries, Practical Action works with poor communities to develop appropriate technologies in renewable energy , food production , agro-processing , water , sanitation , small enterprise development, building and shelter, climate change adaptation and disaster risk reduction .
In 1965, economist and philosopher E. F. Schumacher had an article published in The Observer , [ 2 ] pointing out the limitations of aid based on the transfer of large-scale technologies to developing countries which did not have the resources to accommodate them. He argued that there should be a shift in emphasis towards intermediate technologies based on the needs and skills possessed by the people of developing countries.
Schumacher and a few of his associates, including George McRobie, Julia Porter, [ 3 ] Alfred Latham-Koenig and Professor Mansur Hoda , decided to create an "advisory centre" to promote the use of efficient labour-intensive techniques, and in 1966 the Intermediate Technology Development Group (ITDG) was born. [ 4 ] [ 5 ]
From its origins as a technical enquiry service, ITDG began to take a greater direct involvement in local projects. Following initial successes in farming, it developed working groups on energy, building materials and rural health, and soon grew to become an international organisation. The group now has seven regional offices, working on over 100 projects around the world, with a head office in the UK.
In July 2005, ITDG changed its working name to Practical Action , and since 2008 this has been its legal name. [ 6 ] The organisation produces a triannual magazine entitled 'Small World'. | https://en.wikipedia.org/wiki/Practical_Action |
In philosophy , practical reason is the use of reason to decide how to act . It contrasts with theoretical reason, often called speculative reason , the use of reason to decide what to believe . For example, agents use practical reason to decide whether to build a telescope , but theoretical reason to decide which of two theories of light and optics is the best.
Practical reason is understood by most philosophers as determining a plan of action. Thomistic ethics defines the first principle of practical reason as "good is to be done and pursued, and evil is to be avoided." [ 1 ] For Kant, practical reason has a law-abiding quality because the categorical imperative is understood as binding one to one's duty rather than to subjective preferences. Utilitarians tend to see reason as an instrument for the satisfaction of wants and needs.
In classical philosophical terms, it is very important to distinguish three domains of human activity: theoretical reason, which investigates the truth of contingent events as well as necessary truths; practical reason, which determines whether a prospective course of action is worth pursuing; and productive or technical reason, which attempts to find the best means for a given end. Aristotle viewed philosophical activity as the highest activity of the human being and gave pride of place to metaphysics or wisdom. Since Descartes, practical judgment and reasoning have been treated with less respect because of the demand for greater certainty and an infallible method to justify beliefs.
Practical reasoning is basically goal-directed reasoning from an agent's goal, and from some action selected as a means to carry out the goal, to the agent's reasoned decision to carry out the action. The agent can be a person or a technical device, such as a robot or a software device for multi-agent communications. It is a type of reasoning used all the time in everyday life and in all kinds of technology where autonomous reasoning is required. Argumentation theorists have identified two kinds of practical reasoning: instrumental practical reasoning, which does not explicitly take values into account, [ 2 ] and value-based practical reasoning. [ 3 ] The following argumentation scheme for instrumental practical reasoning is given in Walton, Reed & Macagno (2008). The pronoun I represents an autonomous agent.
Major premise: I have a goal G.
Minor premise: Carrying out this action A is a means to realize G.
Conclusion: Therefore, I ought (practically speaking) to carry out this action A.
Critical questions
CQ1: What other goals do I have that should be considered that might conflict with G?
CQ2: What alternative actions to my bringing about A that would also bring about G should be considered?
CQ3: Among bringing about A and these alternative actions, which is arguably the most efficient?
CQ4: What grounds are there for arguing that it is practically possible for me to bring about A?
CQ5: What consequences of my bringing about A should also be taken into account?
It can be seen from CQ5 that argumentation from consequences is closely related to the scheme for practical reasoning.
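Because the same scheme is also used by software agents, a minimal illustrative encoding follows; the class design, field names and the paraphrased critical questions are the editor's assumptions, not the exact wording of Walton, Reed & Macagno.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InstrumentalPracticalReasoning:
    """Goal-directed (instrumental) practical reasoning: from a goal and a
    selected means to a presumptive conclusion to act, open to critical questions."""
    goal: str
    action: str
    critical_questions: List[str] = field(default_factory=lambda: [
        "CQ1: Do I have other goals that conflict with this goal?",
        "CQ2: Are there alternative actions that would also realize the goal?",
        "CQ3: Is this action the most efficient among the alternatives?",
        "CQ4: Is it practically possible for me to carry out the action?",
        "CQ5: Does the action have consequences that should be taken into account?",
    ])

    def conclusion(self) -> str:
        return f"Therefore, I should carry out '{self.action}' to achieve '{self.goal}'."

arg = InstrumentalPracticalReasoning(goal="stay healthy", action="take regular exercise")
print(arg.conclusion())
print(*arg.critical_questions, sep="\n")
```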
It has often been disputed in philosophy whether practical reasoning is purely instrumental or whether it needs to be based on values. Argument from values is combined with practical reasoning in the type of argumentation called value-based practical reasoning. [ 3 ] [ 4 ] [ 5 ] The following argumentation scheme for value-based practical reasoning is given in Atkinson, Bench-Capon & McBurney (2005 , pp. 2–3).
Practical reasoning is used in arguments, but also in explanations used to draw conclusions about an agent's goals, motives or intentions, based on reports of what the agent said or did.
Practical reasoning is centrally important in artificial intelligence, and also vitally important in many other fields such as law, medicine and engineering. It has been known as a distinctive type of argumentation as far back as Aristotle. [ citation needed ] | https://en.wikipedia.org/wiki/Practical_reason |
The practical syllogism is an instance of practical reasoning which takes the form of a syllogism , where the conclusion of the syllogism is an action. [ 1 ]
Aristotle discusses the notion of the practical syllogism within his treatise on ethics, his Nicomachean Ethics. A syllogism is a three-proposition argument consisting of a major premise stating some universal truth, a minor premise stating some particular truth, and a conclusion derived from these two premises. [ 2 ] The practical syllogism is a form of practical reasoning in syllogistic form, the conclusion of which is an action. An example might be that the major premise "food cures hunger" and the minor premise "I am hungry" lead to the practical conclusion of my eating food. Note that the conclusion here is not a third proposition, like I will eat, or the occurrence of an utterance like "I will eat," but is simply the act of eating. For this reason, practical syllogisms are only called syllogisms analogically. Since they do not consist of at least three propositions, they are not syllogisms properly speaking.
The theoretical reason gives no commands. The practical reason operates in the form of a practical syllogism, whose conclusion is epitactic or imperative .
Aristotle describes this syllogism as follows: All deliberate action is resolvable into a major and minor premise, from which the given action logically issues. The major premise is a general conception or moral maxim; the minor premise is a particular instance: and the conclusion is an action involved in subsuming the particular instance under the general conception or law. The conclusion is not an abstraction, as in the case of a theoretical syllogism, but consists in an action and is jussive , e.g.
Major premise: All men should take exercise;
Minor premise: I am a man;
Conclusion: I should take exercise;
or,
Major premise: Good students take notes;
Minor premise: I want to be a good student;
Conclusion: I should take notes.
Our English phrase 'acting on principle' is, as Sir Alexander Grant pointed out, [ citation needed ] the equivalent of Aristotle's practical syllogism. The practical syllogism operates in the sphere of conduct, of choice and the variable, not in the sphere of necessary truth as is the case with the speculative reason, whose aim is demonstrable truth, whereas the aim of the practical reason is the good, the prudent, the desirable. The content of the conclusion as knowledge is the essential matter for the former; the content of the conclusion as motive is the essential matter for the latter. The main business of the former is with the understanding, of the latter, with the will; the principle of 'sufficient reason' is related to the understanding as the principle of 'final cause' or motive is related to the will. In the practical syllogism obligation is vested in the conclusion, and the particular or minor premise is more cogent than the major, i.e. it is not the general law, but the application of the general law to a particular person, that stimulates to action.
The virtue characteristic of the practical reason is prudence or practical insight. "Prudence is neither a science nor an art; it cannot be a science because the sphere of action is that which is variable; it cannot be an art, for production is generically different from action;" and although Aristotle rejects the Socratic doctrine that virtue is knowledge (the sphere of moral life is pleasure and pain, rather than knowledge), he goes on to say that the "presence of the single virtue of prudence implies the presence of all the moral virtues. Prudence, however, is not itself the whole of moral virtue: "moral virtue makes us desire the end, while prudence makes us adopt the right means to the end." Although men act on general principles and laws, they do not perform general acts; all acts are particular; and so Aristotle, in describing the practical reason and its characteristic moral quality of prudence, further differentiates it from the theoretic reason by saying it is concerned immediately with particulars. [ 3 ] | https://en.wikipedia.org/wiki/Practical_syllogism |
Pradeep Mathur (born 1955) is an Indian organometallic and cluster chemist and the founder director of the Indian Institute of Technology, Indore. [ 1 ] [ 2 ] He is a former professor of the Indian Institute of Technology, Mumbai [ 3 ] and is known for his studies on mixed metal cluster compounds. [ 4 ] He is an elected fellow of the Indian Academy of Sciences. [ 5 ] The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2000, for his contributions to chemical sciences. [ 6 ] He has also been honoured by the award of an honorary Doctor of Science degree by the University of Keele in the U.K.
Pradeep Mathur was born on 17 August 1955 in Teheran to Damyanti and Amrit Dayal. Mathur and his older brother Deepak Mathur, a renowned physicist at TIFR (married to Helen Mathur), were both brought up and educated in London whilst their father Amrit Dayal worked as a senior diplomatic official at the Indian High Commission in London and Accra. Mathur continued to live in England until he moved to Yale. He gained an honours degree at the University of North London in 1976 and secured a PhD from Keele University in 1981 before moving on to Yale University as a post-doctoral researcher. [ 7 ] Mathur chose to move to India and joined the Indian Institute of Technology, Mumbai in 1984 as a member of the faculty of chemistry, where he held several positions before becoming a professor [ 8 ] and the head of the National Single Crystal X-ray Diffraction Facility. [ 3 ] When the Indian Institute of Technology, Indore was established in 2009, Mathur was appointed as its founder director. At the end of his first five-year term, his contract was extended for a second term and he continues to hold the position, [ 1 ] simultaneously serving as a professor in the department of chemistry. [ 9 ] He has been a visiting professor at the University of Cambridge, the University of Freiburg and the University of Karlsruhe, and has been associated with a number of scientific journals, viz. Organometallics, Journal of Organometallic Chemistry and Journal of Cluster Science, as a member of their editorial boards. [ 1 ]
Mathur is married to Vinita and the couple have two daughters, Nehika and Saloni.
Mathur's researches were focused on the organometallic chemistry of mixed metal cluster compounds and he has developed synthetic strategies for introducing chalcogen bridges. [ 10 ] At IIT Mumbai, he handled projects related to the investigation of unusual metal mediated transformations and the interactions between the metal atoms and unsaturated organic species. [ 11 ] He has published his researches by way of chapters contributed to books authored by others [ 12 ] and over 180 peer-reviewed articles; [ 13 ] ResearchGate and Google Scholar , two online repositories, have listed several of them. [ 14 ] [ 15 ] He has also guided 22 doctoral scholars in their studies. [ 1 ]
Mathur was a Fulbright scholar in 1995 [ 16 ] and the Indian Academy of Sciences elected him as a fellow in 1996. [ 5 ] The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize , one of the highest Indian science awards, in 2000. [ 17 ] He has also been honoured with an honorary D.Sc. degree by the University of Keele in the U.K. | https://en.wikipedia.org/wiki/Pradeep_Mathur_(scientist) |
The pragmatic theory of information is derived from Charles Sanders Peirce 's general theory of signs and inquiry . Peirce explored a number of ideas about information throughout his career. One set of ideas is about the "laws of information" having to do with the logical properties of information . Another set of ideas about "time and thought" have to do with the dynamic properties of inquiry . All of these ideas contribute to the pragmatic theory of inquiry. Peirce set forth many of these ideas very early in his career, periodically returning to them on scattered occasions until the end, and they appear to be implicit in much of his later work on the logic of science and the theory of signs, but he never developed their implications to the fullest extent. The 20th century thinker Ernst Ulrich and his wife Christine von Weizsäcker reviewed the pragmatics of information; [ 1 ] their work is reviewed by Gennert. [ 2 ]
The pragmatic information content is the information content received by a recipient; it is focused on the recipient and defined in contrast to Claude Shannon 's information definition, which focuses on the message. The pragmatic information measures the information received, not the information contained in the message. Pragmatic information theory requires not only a model of the sender and how it encodes information, but also a model of the receiver and how it acts on the information received. The determination of pragmatic information content is a precondition for the determination of the value of information .
Claude Shannon and Warren Weaver completed the viewpoint on information encoding in the seminal paper by Shannon A Mathematical Theory of Communication , [ 3 ] with two additional viewpoints (B and C): [ 4 ]
Pragmatics of communication is the observable effect a communication act (here receiving a message) has on the actions of the recipient. The pragmatic information content of a message may be different for different recipients, or different messages may have the same content for a given recipient. Weizsäcker used the concept of novelty and irrelevance to separate information which is pragmatically useful or not. [ 1 ] Algebraically, the pragmatic information content must satisfy three rules:
More recently, Weinberger formulated a quantitative theory of pragmatic information. In contrast to standard information theory that says nothing about the semantic content of information, Weinberger's theory attempts to measure the amount of information actually used in making a decision. Included in Weinberger 's paper is a demonstration that his version of pragmatic information increases over the course of time in a simple model of evolution known as the quasispecies model . This is demonstrably not true for the standard measure of information. [ 5 ]
The acquisition of the information and the use of it in decision making can be separated. The use of acquired information to make a decision is, in the general case, an optimization in an uncertain situation (which is included in Weinberger's theory). For deterministic rule-based decisions, the agent can be formalized as an algebra with a set of operations and the state changes when these operations are executed (no optimization applied). The pragmatic information such an agent picks up from a message is the transformation of the tokens in the message into operations the recipient is capable of.
Frank used agent-based models and the theory of autonomous agents with cognitive abilities (see multi-agent system ) to operationalize measuring pragmatic information content. The transformation between the received message and the executed message is defined by the agent's rules; the pragmatic information content is the information in the transformed message, measured by the methods given by Shannon. The general case can be split in (deterministic) actions to change the information the agent already has and the optimal decision using this information. Measuring the pragmatic information content is relevant to assess the value of information received by an agent and influences the agent's willingness to pay for information - not measured by Shannon communication information content, but by the received pragmatic information content.
The rules for the transformation of a received message to the pragmatic content drop information already available;
(1) information already known is ignored and
(2) elaborated messages can be reduced to the agent's actions, reducing the information content when the receiver understands and can execute actions more powerful than the encoding calls for.
The transformation achieves the three rules mentioned above (EQ, SAME, DIFF). [ 6 ]
The action of the agent can be taken as just "updating its knowledge store" and the actual decision by the agent, optimizing the result, is modeled separately, as, for example, done in Weinberger's approach.
A familiar application may clarify the approach: Different car navigation systems produce different instructions, but if they manage to guide you to the same location, their pragmatic information content must be the same, despite different information content when measured with Shannon's measure (SAME). A novice in the area may need all instructions received - the pragmatic information content and the (minimally encoded) information content of the message are the same. An experienced driver will ignore all "follow the road" and "go straight" instructions, thus the pragmatic information content is lower; for a driver with knowledge of the area, large parts of the instructions may be subsumed by simple instructions "drive to X"; typically, only the last part ("the last mile") of the instructions is meaningful - the pragmatic information content is smaller, because much knowledge is already available (DIFF). Messages with more or less verbiage have for this user the same pragmatic content (SAME). [ 6 ] | https://en.wikipedia.org/wiki/Pragmatic_theory_of_information |
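A toy sketch, purely illustrative and with invented instruction lists and recipient knowledge sets, shows how the same route message yields different pragmatic information content for different recipients: instructions the recipient already knows or can carry out on their own are dropped before the content is measured.

```python
# Toy model: pragmatic content = the instructions that remain after dropping
# everything the recipient already knows how to do (a crude stand-in for the
# EQ/SAME/DIFF rules; counts of remaining steps stand in for Shannon's measure).
route = ["turn left onto Main St", "follow the road", "go straight",
         "turn right onto Oak Ave", "park at number 12"]

recipients = {
    "novice": set(),                                            # needs every instruction
    "experienced driver": {"follow the road", "go straight"},   # ignores trivial steps
    "local driver": {"turn left onto Main St", "follow the road",
                     "go straight", "turn right onto Oak Ave"}, # only the last mile
}

for name, known in recipients.items():
    pragmatic = [step for step in route if step not in known]
    print(f"{name:18s}: {len(pragmatic)} of {len(route)} steps are pragmatic content")
```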
A prairie remnant commonly refers to grassland areas in the Western and Midwestern United States and Canada that remain to some extent undisturbed by European settlement . Prairie remnants range in levels of degradation, but nearly all contain at least some semblance of the pre-Columbian local plant assemblage of a particular region. Prairie remnants have become increasingly threatened due to the threats of agricultural, urban and suburban development, pollution, fire suppression, and the incursion of invasive species. [ 1 ] [ 2 ]
Prairie remnants offer valuable varieties of rare species, thus providing excellent opportunities for restoration ecology projects. Many restoration projects are simply recreations of prairie habitats, but restoring prairie remnants preserves more complete ecological structures that were naturally created after the end of the last ice age. Remnants can also be platforms for additional surrounding ecological restoration activities. [ 2 ]
It has been estimated that 99% of tallgrass prairie habitats in North America have been destroyed mainly due to conversion to agriculture. [ 3 ] Tallgrass prairies are generally composed of a mixture of native grasses, sedges, and forbs but are usually dominated by grasses.
The shortgrass prairie is an ecosystem located in the Great Plains of North America. The prairie includes lands to the west as far as the eastern foothills of the Rocky Mountains and extends east as far as Nebraska and north into Saskatchewan . The prairie stretches through parts of Alberta, Wyoming, Montana, North Dakota, South Dakota, and Kansas and passes south through the high plains of Colorado, Oklahoma, Texas, and New Mexico.
This article about geography terminology is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prairie_remnant |
Prairie restoration is a conservation effort to restore prairie lands that were destroyed due to industrial, agricultural , commercial, or residential development. [ 1 ] The primary aim is to return areas and ecosystems to their previous state before their depletion. [ 2 ]
In the United States , after the Black Hawk War had subsided in the mid-1830s, settlers from northern Europe and the northeast of the US made a home for themselves. [ 3 ] They plowed up the tallgrasses and wild flowers in the area. By 1849 most species of prairie grass had disappeared to make room for crops (i.e.: soybeans, corn, etc.). [ 3 ] [ 4 ] Restored prairies and the grasses that survived the 1800s plowing represent only a fragment of the abundant verdure that once covered the midsection of North America from western Ohio to the Rockies and from southern Canada to Texas. [ 3 ] As an example, the U.S. state of Illinois alone once held over 35,000 square miles (91,000 km 2 ) of prairie land and now just 3 square miles (7.8 km 2 ) of that original prairie land is left. The over-farming of this land, as well as periods of drought and its exposure to the elements (no longer bound together by the tall grasses), was responsible for the Dust Bowl of the 1930s. [ 5 ]
Issues of erosion and waning biodiversity have arisen in areas once populated by prairie grass species. [ 6 ] So, in restoration efforts in Europe, when restoring previous crop land with prairie grasses, the most frequently used techniques involve: spontaneous succession, sowing seed mixtures, transfer of plant material, and topsoil removal and transfer. [ 6 ] When maintaining these tall grasses, prescribed fire is a popular method. [ 7 ] It encourages taller and stronger regrowth as well as the recycling of nutrients in the soil. [ 8 ]
Although not fully able to restore the full diversity of an area, restoration efforts aid the thriving of the natural ecosystems . This is further improved by the specific reintroduction of key organisms from the native plants microbiome . [ 9 ] Prairie soil also effectively stores carbon. As carbon sinks, they work as a vital regulator of carbon in the atmosphere through carbon sequestration (withdrawal), and the carbon benefits the sustenance of diverse species in the prairie ecosystem. [ 10 ]
Erosion occurs when surface pressures wear away the material of the Earth's crust. [ 11 ] Particularly with land previously dominated by prairie grasses, the loss of the tallgrasses' extensive fibrous root system left the soil exposed and unbound. [ 5 ] Ecologically , prairie restoration aids in conservation of earth's topsoil , which is often exposed to erosion from wind and rain (worsened by climate change's heavier and more frequent rain) when prairies are plowed under to make way for new commerce. [ 12 ] Conversely, much more of the prairie lands have become the fertile fields on which cereal crops of corn , barley and wheat are grown. [ 13 ] Continued erosion reduces the long term productivity of the soil. [ 14 ]
Prairie restoration reintroduces this root system that once again binds the soil, strengthening it against water erosion through adequate water filtration. [ 11 ]
Prairie soil is also useful for carbon sequestration. [ 15 ] Carbon dioxide is a heat-trapping gas, and 40% of it is produced by humans and remains in the atmosphere, thus worsening the effects of global warming. [ 16 ] Prairie grass collects this carbon from the atmosphere through photosynthesis and stores it in its soil. [ 17 ] [ 18 ] When left undisturbed, the prairie soil acts as a carbon sink, meaning it absorbs more carbon from the atmosphere than it releases. [ 16 ] [ 17 ]
Many prairie plants are also highly resistant to drought , temperature extremes, disease, and native insect pests. [ 19 ] They are frequently used for xeriscaping projects in arid regions of the American West. [ 20 ] On a larger scale, communities and corporations are creating areas of restored prairies which in turn will store organic carbon in the soil and help maintain the biodiversity of the 3000 plus species that count on the grasslands for food and shelter. [ 21 ] Research in Walnut Creek Restoration (Iowa) on the contribution of recently converted land (from row crop to prairie grass), shows the improvement in ground water quality over the span of 10 years. [ 22 ] By changing the type of plant and quality, the issue of groundwater contamination (of unwanted chemicals, as a result of climate change and an issue of water security) can be alleviated. [ 23 ]
A restoration project of prairie lands can be carried out on a large or small scale. [ 24 ] Backyard prairie restoration can enrich soil , combat erosion, and absorb water in excessive rainfalls . [ 25 ] An example of a backyard prairie restoration is known as a micro-prairie. [ 26 ] Micro-prairies are mini prairie habitats that typically consist of less than one acre, usually isolated and surrounded by developed or urban land. [ 26 ] These small-scale prairie habitats offer various benefits, particularly in developed or urban areas where natural prairies may have been lost or fragmented. [ 26 ] This miniature ecosystem can provide habitat for a diversity of native plant and animal species specifically adapted to prairie environments, thus helping to sustain local biodiversity. [ 26 ] Prairie flowers are attractive to native butterflies and other pollinators . [ 27 ] These pollinators have evolved to rely on specific types of plants for their nectar and pollen needs. [ 28 ] Micro-prairies can attract native pollinators in several ways. First, they can provide a diverse array of native plants that are adapted to the local environment as food sources for native pollinators. [ 27 ] By including a variety of native plants in a micro-prairie restoration project, it is possible to create an attractive and beneficial habitat for these insects. Second, micro-prairies can offer specific nesting sites for native pollinators. [ 21 ] Many species of bees and other pollinators require specific types of nesting sites, such as hollow plant stems or burrows. [ 21 ] Features such as bee boxes or native grasses provide suitable nesting sites for breeding and survival. [ 21 ] Finally, micro-prairies can serve as a refuge from habitat loss and pesticide use. Pollinators are highly susceptible to these threats, and by restoring small-scale prairie habitats in developed or urban areas, it is possible to create secure environments for critical insects. [ 21 ]
Additionally, micro-prairie plants contribute to carbon sequestration, which can improve water quality by absorbing and filtering pollutants, and transforming soil compositions. [ 26 ] The ability to sequester carbon is due to the deep root system of prairie grasses, which can store large amounts of carbon in the soil. [ 29 ] Prairie grasses also have a high rate of biomass production, which allows them to capture and store carbon at a fast rate. [ 29 ] Research has shown that prairie plants are also adapted to nutrient-poor soils, promote nutrient cycling, and contribute to soil organic matter, which are essential for maintaining soil fertility and structure. [ 30 ] Prairie plants' leaves have a large surface area that can trap airborne pollutants such as dust, pollen, and particulate matter. [ 31 ] The diverse community of microorganisms in prairie soils can break down and metabolize pollutants into less harmful compounds. [ 32 ] Prairie plants can absorb pollutants such as heavy metals and excess nutrients from water and soil that might enter an ecosystem. [ 31 ]
In general micro-prairies have been found to have a positive impact on local ecosystems and biodiversity. However, some studies have identified potential negative effects of micro-prairies under certain circumstances. For example, studies show that when non-native plant species are introduced into a micro-prairie, they can outcompete native plants and reduce biodiversity. [ 33 ] Secondly, if not properly maintained, backyard prairies can overgrow and create a fire risk. [ 33 ] Implementing a safe and regular mowing or burning schedule is a recommended management practice to avoid fire risk and excessive plant growth. [ 33 ] Lastly, standing water in a micro prairie can provide a breeding habitat for mosquitoes. [ 33 ] Proper design and maintenance of micro-prairies can prevent stagnant water from accumulating and attracting mosquitoes. [ 33 ]
In urban areas, permaculture is well-suited for reconstructing micro-prairies due to its complementary approach to system design and management. [ 26 ] Permaculture is a form of ecological engineering inspired by natural ecosystems which utilizes sustainable architecture and horticulture. [ 26 ] Utilizing permaculture principles makes it possible to create sustainable micro-prairie systems that benefit both the environment and society in urban contexts. For example, the permaculture system emphasizes diversity in plant and animal species, which sustains a healthy ecosystem. [ 34 ] Through observing and learning from natural ecosystems, permaculture practitioners apply designs that mimic natural patterns. [ 34 ] Companion planting is another principle in permaculture, where different plants are grown together to benefit each other. [ 34 ] Furthermore, micro-prairies serve as a valuable tool for education and outreach. Micro-prairies allow people to learn about prairie ecosystems and the importance of preserving and restoring native habitats responsibly.
Some prominent tallgrass prairie grasses include big bluestem , indiangrass , and switchgrass . [ 35 ] Midgrass and shortgrass species include little bluestem , side oats grama , and buffalograss . [ 36 ] Many of the diverse prairie forbs (herbaceous, non- graminoid flowering plants) are structurally specialized to resist grazers such as American bison . [ 36 ] Some have hairy leaves that may help deter the cold and prevent excessive evaporation . [ 37 ] Many of the forbs contain secondary compounds that were discovered by indigenous peoples and are still used widely today. [ 38 ]
Early prairie restoration efforts tended to focus largely on a few dominant species, typically grasses, with little attention to seed source. [ 39 ] With experience, later restorers have realized the importance of obtaining a broad mix of species and using local ecotype seed. [ 39 ]
In Europe, when restoring previous crop land with prairie grasses, the most frequently used techniques involve: spontaneous succession, sowing seed mixtures, transfer of plant material, topsoil removal and transfer. [ 40 ] Spontaneous succession is an effective technique when quick results are not expected and where there is high availability of propagules . [ 40 ] Sowing mixtures can be low or high diversity, referring to the variety of seeds. Low diversity mixtures are great for restoring large areas in a short amount of time. [ 40 ] High diversity mixtures (because of their cost and success rate) are used for smaller areas. [ 40 ] A mixture of large low diversity areas and small high diversity areas are good rich source patches for the spontaneous colonization of neighboring areas. [ 40 ] This allows for the possibility of continued natural restoration. [ 40 ]
Fire is a big component of the success of grasslands, large or small, as a grassland is a fire-dependent ecosystem. [ 41 ] Controlled burns , with a permit, are recommended every 4–8 years (after two growth seasons) to burn away dead plants, prevent certain other plants (such as trees) from encroaching, and release and recycle nutrients into the ground to encourage new growth. [ 8 ] [ 7 ] A much more wildlife-habitat-friendly alternative to burning every 4–8 years is to burn 1/4 to 1/8 of a tract every year. [ 42 ] [ 43 ] This will leave wildlife a home every year and still accomplish the task of burning. The Native Americans may also have used the burns to control pests such as ticks . [ 44 ] These prescribed burns motivate grasses to grow taller, produce more seed, and flower more abundantly. [ 7 ] If controlled burns are not possible, rotational mowing is recommended as a substitute. [ 45 ]
One of the newer methods available is holistic management , which uses livestock as a substitute for the keystone species such as bison . [ 46 ] Some sites have bison which supports the conservation of the species . This allows the rotational mowing to be done by animals which in turn mimics nature more closely. [ 47 ] Holistic management also can use fire as a tool, but in a more limited way and in combination with the mowing done by animals. [ 46 ] [ 47 ] [ 48 ] In parts of Central Asia, grazing is a human factor that greatly affects the progression of grasses. [ 49 ]
In 1990, in South Africa, de Lange and Boucher reported the use of smoke to promote seed germination among prairie grasses. [ 50 ] It was shown to help break dormancy of certain seeds. Since then this technique has been promoted throughout South Africa, parts of Australia and North America. [ 50 ]
Some popular prairie restoration projects have been completed and maintained by conservation departments, such as Midewin National Tallgrass Prairie , located in Wilmington, Illinois . [ 51 ] This restoration project is administered by the U.S. Department of Agriculture , Forest Service and the Illinois Department of Natural Resources . [ 51 ] It sits on part of the Joliet Army Ammunition Plant , specifically on an area once contaminated from TNT manufacturing . Since 1997, the project has opened some 15,000 acres (61 km 2 ) of restored prairie to the public. [ 51 ]
Another large restoration project finds its home on the ample grounds of Fermilab , a U.S. governmental particle accelerator laboratory located in Batavia, Illinois . [ 52 ] Fermilab's 6,800 acres (28 km 2 ) sit atop fertile farmland and the prairie restoration project consists of approximately 1,000 acres (4.0 km 2 ) of that. [ 53 ] This project began in 1975 and continues today with the help of Fermilab employees and many community teachers, botanists and volunteers. [ 53 ] | https://en.wikipedia.org/wiki/Prairie_restoration |
Pranav Sharma (प्रणव शर्मा) is an astronomer and science historian known for his work on the history of the Indian Space Program. He has curated Space Museum at the B. M. Birla Science Centre (Hyderabad, India). [ 1 ] Sharma was in charge of the history of the Indo-French scientific partnership project supported by the Embassy of France in India. [ 2 ] He is a national award-winning science communicator [ 3 ] and has extensively worked in the popularization of astronomy education in India. [ 4 ] [ 5 ]
He also served as the Policy and Diplomacy Advisor to United Nations International Computation Centre [ 6 ] and Member Secretary (Policy, Transdisciplinary Disruptive Science, and Communications) for G20-Science20. [ 7 ]
Sharma is the Co-Lead on the History of Data-Driven Astronomy Project , Adjunct Researcher at Raman Research Institute , Scientific Advisor to Arc Ventures, Science Diplomacy Consultant to Indian National Science Academy , and Visiting Faculty at The Druk Gyalpo's Institute, [ 8 ] Bhutan. He is an Associate Member of Astronomical Society of India . [ 9 ]
He has co-authored the book Essential Astrophysics: Interstellar Medium to Stellar Remnants, CRC Press, 2019. [ 10 ]
This article about an Indian scientist is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pranav_Sharma |
In fluid mechanics the Prandtl condition was suggested by the German physicist Ludwig Prandtl to identify possible boundary layer separation points of incompressible fluid flows. [ 1 ]
In the case of a normal shock , the flow is assumed to be in a steady state and the thickness of the shock is very small. It is further assumed that there is no friction or heat loss at the shock (heat transfer is negligible because it occurs across a relatively small surface). It is customary in this field to denote x as the upstream and y as the downstream condition.
Since the mass flow rate from the two sides of the shock are constant, the mass balance becomes,
ρ x . U x = ρ y . U y {\displaystyle \rho _{x}.U_{x}=\rho _{y}.U_{y}}
As there is no external force applied, momentum is conserved, which gives rise to the equation
P x − P y = ρ y . U y 2 − ρ x . U x 2 {\displaystyle P_{x}-P_{y}=\rho _{y}.{U_{y}}^{2}-\rho _{x}.{U_{x}}^{2}}
Because heat flow is negligible, the process can be treated as adiabatic. So the energy equation will be
C p . T x + U x 2 2 = C p . T y + U y 2 2 {\displaystyle C_{p}.T_{x}+{\frac {{U_{x}}^{2}}{2}}=C_{p}.T_{y}+{\frac {{U_{y}}^{2}}{2}}}
From the equation of state for perfect gas, P = ρ R T {\displaystyle P=\rho RT}
As the temperature on the two sides of the shock wave is discontinuous, the speed of sound is different in these adjoining media. So it is convenient to define a star Mach number that is independent of the specific Mach number. From the star condition, the speed of sound at the critical condition is a convenient reference velocity. The speed of sound at that temperature is,
c ∗ = k R T ∗ {\displaystyle c^{*}={\sqrt {kRT^{*}}}}
An additional Mach number, which is independent of the specific Mach number, is
M ∗ = U c ∗ = c M c ∗ {\displaystyle M^{*}={\frac {U}{c^{*}}}={\frac {cM}{c^{*}}}}
Since energy remains constant across the shock,
c 2 k − 1 + U 2 2 = c ∗ 2 k − 1 + c ∗ 2 2 = ( k + 1 ) c ∗ 2 2 ( k − 1 ) {\displaystyle {\frac {c^{2}}{k-1}}+{\frac {U^{2}}{2}}={\frac {{c^{*}}^{2}}{k-1}}+{\frac {{c^{*}}^{2}}{2}}={\frac {(k+1){c^{*}}^{2}}{2(k-1)}}}
Dividing the momentum equation by the mass equation, we get
c 1 2 k U 1 + U 1 = c 2 2 k U 2 + U 2 {\displaystyle {\frac {{c_{1}}^{2}}{kU_{1}}}+U_{1}={\frac {{c_{2}}^{2}}{kU_{2}}}+U_{2}}
From the above equations,
1 k U 1 [ ( k + 1 ) c ∗ 2 2 − ( k − 1 ) U 1 2 2 ] + U 1 = 1 k U 2 [ ( k + 1 ) c ∗ 2 2 − ( k − 1 ) U 2 2 2 ] + U 2 {\displaystyle {\frac {1}{kU_{1}}}\left[{\frac {(k+1){c^{*}}^{2}}{2}}-{\frac {(k-1){U_{1}}^{2}}{2}}\right]+U_{1}={\frac {1}{kU_{2}}}\left[{\frac {(k+1){c^{*}}^{2}}{2}}-{\frac {(k-1){U_{2}}^{2}}{2}}\right]+U_{2}}
which gives rise to
U 1 . U 2 = c ∗ 2 {\displaystyle U_{1}.U_{2}={c^{*}}^{2}}
This is called the Prandtl condition for a normal shock. | https://en.wikipedia.org/wiki/Prandtl_condition |
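As an illustrative numerical check (not part of the original derivation), the sketch below evaluates the relation for air: the downstream velocity of a Mach 2 normal shock is computed from the standard density-ratio relation and U 1 ·U 2 is compared with c*² obtained from the energy equation; γ = 1.4, R = 287 J/(kg·K) and an upstream temperature of 300 K are assumptions chosen for the example.

```python
import math

gamma, R = 1.4, 287.0      # heat capacity ratio and gas constant for air (assumed)
T1, M1 = 300.0, 2.0        # upstream static temperature [K] and Mach number (assumed)

c1 = math.sqrt(gamma * R * T1)   # upstream speed of sound
U1 = M1 * c1                     # upstream velocity

# Critical speed of sound squared from the energy equation:
#   c^2/(k-1) + U^2/2 = (k+1) c*^2 / (2 (k-1))
c_star_sq = (2.0 * (gamma - 1.0) / (gamma + 1.0)) * (c1**2 / (gamma - 1.0) + U1**2 / 2.0)

# Downstream velocity from the standard normal-shock density ratio and mass conservation
rho_ratio = ((gamma + 1.0) * M1**2) / ((gamma - 1.0) * M1**2 + 2.0)   # rho_y / rho_x
U2 = U1 / rho_ratio

print(round(U1 * U2))      # ~180800 m^2/s^2
print(round(c_star_sq))    # ~180800 m^2/s^2 -- the two agree, as the Prandtl condition requires
```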
The Prandtl number ( Pr ) or Prandtl group is a dimensionless number , named after the German physicist Ludwig Prandtl , defined as the ratio of momentum diffusivity to thermal diffusivity . [ 1 ] The Prandtl number is given as:
P r = ν α = momentum diffusivity thermal diffusivity = μ / ρ k / ( c p ρ ) = c p μ k {\displaystyle \mathrm {Pr} ={\frac {\nu }{\alpha }}={\frac {\mbox{momentum diffusivity}}{\mbox{thermal diffusivity}}}={\frac {\mu /\rho }{k/(c_{p}\rho )}}={\frac {c_{p}\mu }{k}}}
where:
Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity .
The mass transfer analog of the Prandtl number is the Schmidt number and the ratio of the Prandtl number and the Schmidt number is the Lewis number .
For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents. [ 1 ]
Typical values for Pr are:
For air with a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated using the formula given below. [ 2 ] The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 0.1% from the literature values.
P r air = 10 9 1.1 ⋅ ϑ 3 − 1200 ⋅ ϑ 2 + 322000 ⋅ ϑ + 1.393 ⋅ 10 9 {\displaystyle \mathrm {Pr} _{\text{air}}={\frac {10^{9}}{1.1\cdot \vartheta ^{3}-1200\cdot \vartheta ^{2}+322000\cdot \vartheta +1.393\cdot 10^{9}}}} ,
where ϑ {\displaystyle \vartheta } is the temperature in Celsius.
The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C using the formula given below. [ 2 ] The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 1% from the literature values.
P r water = 50000 ϑ 2 + 155 ⋅ ϑ + 3700 {\displaystyle \mathrm {Pr} _{\text{water}}={\frac {50000}{\vartheta ^{2}+155\cdot \vartheta +3700}}}
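As an illustration (not from the source text), both correlations are straightforward to evaluate; the sketch below computes them at 20 °C and recovers the familiar values of roughly 0.71 for air and 7 for water.

```python
def pr_air(theta):
    """Prandtl number of air at 1 bar; theta in degrees Celsius (valid for -100..500 C)."""
    return 1e9 / (1.1 * theta**3 - 1200 * theta**2 + 322000 * theta + 1.393e9)

def pr_water(theta):
    """Prandtl number of water at 1 bar; theta in degrees Celsius (valid for 0..90 C)."""
    return 50000.0 / (theta**2 + 155 * theta + 3700)

print(round(pr_air(20.0), 3))    # ~0.715
print(round(pr_water(20.0), 2))  # ~6.94
```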
Small values of the Prandtl number, Pr ≪ 1 , mean that the thermal diffusivity dominates, whereas with large values, Pr ≫ 1 , the momentum diffusivity dominates the behavior.
For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection , so thermal diffusivity is dominant.
However, engine oil with its high viscosity and low heat conductivity, has a higher momentum diffusivity as compared to thermal diffusivity. [ 3 ]
The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals ( Pr ≪ 1 ) and very slowly in oils ( Pr ≫ 1 ) relative to momentum. Consequently, the thermal boundary layer is much thicker for liquid metals and much thinner for oils relative to the velocity boundary layer .
In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers . When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum). This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer.
In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by [ 4 ]
where δ t {\displaystyle \delta _{t}} is the thermal boundary layer thickness and δ {\displaystyle \delta } is the momentum boundary layer thickness.
For incompressible flow over a flat plate, the two Nusselt number correlations are asymptotically correct: [ 4 ]
where R e {\displaystyle \mathrm {Re} } is the Reynolds number . These two asymptotic solutions can be blended together using the concept of the Norm (mathematics) : [ 4 ] | https://en.wikipedia.org/wiki/Prandtl_number |
In fluid dynamics , the Prandtl–Batchelor theorem states that if in a two-dimensional laminar flow at high Reynolds number closed streamlines occur, then the vorticity in the closed streamline region must be a constant . A similar statement holds true for axisymmetric flows. The theorem is named after Ludwig Prandtl and George Batchelor . Prandtl argued for this theorem in his celebrated 1904 paper, [ 1 ] and George Batchelor , unaware of this work, proved the theorem in 1956. [ 2 ] [ 3 ] The problem was also studied in the same year by Richard Feynman and Paco Lagerstrom [ 4 ] and by W.W. Wood in 1957. [ 5 ]
At high Reynolds numbers , the two-dimensional problem governed by two-dimensional Euler equations reduce to solving a problem for the stream function ψ {\displaystyle \psi } , which satisfies
where ω {\displaystyle \omega } is the only non-zero vorticity component in the z {\displaystyle z} -direction of the vorticity vector. As it stands, the problem is ill-posed since the vorticity distribution ω ( ψ ) {\displaystyle \omega (\psi )} can have an infinite number of possibilities, all of which satisfy the equation and the boundary condition. This is not true if no streamline is closed, in which case, every streamline can be traced back to the boundary ∂ D {\displaystyle \partial D} where ψ {\displaystyle \psi } and therefore its corresponding vorticity ω ( ψ ) {\displaystyle \omega (\psi )} are prescribed. The difficulty arises only when there are some closed streamlines inside the domain that do not connect to the boundary, and one may suppose that at high Reynolds numbers, ω ( ψ ) {\displaystyle \omega (\psi )} is not uniquely defined in regions where closed streamlines occur. The Prandtl–Batchelor theorem, however, asserts that this is not the case and ω ( ψ ) {\displaystyle \omega (\psi )} is uniquely defined in such cases, through a proper examination of the limiting process R e → ∞ {\displaystyle Re\rightarrow \infty } .
The steady, non-dimensional vorticity equation in our case reduces to
Integrate the equation over a surface S {\displaystyle S} lying entirely in the region where we have closed streamlines, bounded by a closed contour C {\displaystyle C}
The integrand in the left-hand side term can be written as ∇ ⋅ ( ω u ) {\displaystyle \nabla \cdot (\omega \mathbf {u} )} since ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} . By divergence theorem, one obtains
where n {\displaystyle \mathbf {n} } is the outward unit vector normal to the contour line element d l {\displaystyle dl} . The left-hand side integrand can be made zero if the contour C {\displaystyle C} is taken to be one of the closed streamlines since then the velocity vector projected normal to the contour will be zero, that is to say u ⋅ n = 0 {\displaystyle \mathbf {u} \cdot \mathbf {n} =0} . Thus one obtains
This expression is true for finite but large Reynolds number since we did not neglect the viscous term before.
Unlike the two-dimensional inviscid flows, where ω = ω ( ψ ) {\displaystyle \omega =\omega (\psi )} since u ⋅ ∇ ω = 0 {\displaystyle \mathbf {u} \cdot \nabla \omega =0} with no restrictions on the functional form of ω {\displaystyle \omega } , in the viscous flows, ω ≠ ω ( ψ ) {\displaystyle \omega \neq \omega (\psi )} . But for large but finite R e {\displaystyle \mathrm {Re} } , we can write ω = ω ( ψ ) + small corrections {\displaystyle \omega =\omega (\psi )+{\rm {small\ corrections}}} , and these small corrections become smaller and smaller as we increase the Reynolds number. Thus, in the limit R e → ∞ {\displaystyle \mathrm {Re} \rightarrow \infty } , in the first approximation (neglecting the small corrections), we have
Since d ω / d ψ {\displaystyle d\omega /d\psi } is constant for a given streamline, we can take that term outside the integral,
One may notice that the integral is negative of the circulation since
where we used the Stokes theorem for circulation and ω = − ∇ 2 ψ {\displaystyle \omega =-\nabla ^{2}\psi } . Thus, we have
The circulation around those closed streamlines is not zero (unless the velocity at each point of the streamline is zero with a possible discontinuous vorticity jump across the streamline). The only way the above equation can be satisfied is if
i.e., vorticity is not changing across these closed streamlines, thus proving the theorem. Of course, the theorem is not valid inside the boundary layer regime. This theorem cannot be derived from the Euler equations. [ 6 ] | https://en.wikipedia.org/wiki/Prandtl–Batchelor_theorem |
The Prandtl–Glauert singularity is a theoretical construct in flow physics, often incorrectly used to explain vapor cones in transonic flows.
It is the prediction by the Prandtl–Glauert transformation that infinite pressures would be experienced by an aircraft as it approaches the speed of sound . Because it is invalid to apply the transformation at these speeds, the predicted singularity does not emerge. The incorrect association is related to the early-20th-century misconception of the impenetrability of the sound barrier .
The Prandtl–Glauert transformation assumes linearity (i.e. a small change will have a small effect that is proportional to its size). This assumption becomes inaccurate toward Mach 1 and is entirely invalid in places where the flow reaches supersonic speeds, since sonic shock waves are instantaneous (and thus manifestly non-linear) changes in the flow. Indeed, one assumption in the Prandtl–Glauert transformation is approximately constant Mach number throughout the flow, and the increasing slope in the transformation indicates that very small changes will have a very strong effect at higher Mach numbers, thus violating the assumption, which breaks down entirely at the speed of sound.
This means that the singularity featured by the transformation near the sonic speed ( M=1 ) is not within the area of validity. The aerodynamic forces are calculated to approach infinity at the so-called Prandtl–Glauert singularity ; in reality, the aerodynamic and thermodynamic perturbations do get amplified strongly near the sonic speed, but they remain finite and a singularity does not occur. The Prandtl–Glauert transformation is a linearized approximation of compressible, inviscid potential flow. As the flow approaches sonic speed, the nonlinear phenomena dominate within the flow, which this transformation completely ignores for the sake of simplicity.
The Prandtl–Glauert transformation is found by linearizing the potential equations associated with compressible, inviscid flow. For two-dimensional flow, the linearized pressures in such a flow are equal to those found from incompressible flow theory multiplied by a correction factor. This correction factor is given below: [ 1 ] {\displaystyle c_{p}={\frac {c_{p0}}{\sqrt {|1-{M_{\infty }}^{2}|}}}} where
This formula is known as "Prandtl's rule", and works well up to low-transonic Mach numbers ( M < ~0.7). However, note the limit: {\displaystyle \lim _{M_{\infty }\to 1}c_{p}=\infty }
This obviously nonphysical result (of an infinite pressure) is known as the Prandtl–Glauert singularity.
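A small sketch (illustrative only; the incompressible coefficient is arbitrarily set to c_p0 = 1) makes the blow-up explicit by evaluating the correction factor at increasing free-stream Mach numbers:

```python
import math

def prandtl_glauert(cp0, mach):
    """Linearized compressibility correction c_p = c_p0 / sqrt(|1 - M^2|)."""
    return cp0 / math.sqrt(abs(1.0 - mach**2))

for m in (0.3, 0.5, 0.7, 0.9, 0.99, 0.999):
    print(m, round(prandtl_glauert(1.0, m), 2))
# The corrected coefficient grows without bound as M -> 1: this is the (nonphysical)
# Prandtl-Glauert singularity; the rule is only trusted up to roughly M = 0.7.
```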
The reason that observable clouds sometimes form around high speed aircraft is that humid air is entering low-pressure regions, which also reduces local density and temperature sufficiently to cause water to supersaturate around the aircraft and to condense in the air, thus creating clouds. The clouds vanish as soon as the pressure increases again to ambient levels.
In the case of objects at transonic speeds, the local pressure increase happens at the shock wave location.
Condensation in free flow does not require supersonic flow. Given sufficiently high humidity, condensation clouds can be produced in purely subsonic flow over wings, or in the cores of wing tip vortices, and even within, or around vortices themselves. This can often be observed during humid days on aircraft approaching or departing airports. [ 2 ] | https://en.wikipedia.org/wiki/Prandtl–Glauert_singularity |
A supersonic expansion fan, technically known as Prandtl–Meyer expansion fan , a two-dimensional simple wave , is a centered expansion process that occurs when a supersonic flow turns around a convex corner. The fan consists of an infinite number of Mach waves , diverging from a sharp corner. When a flow turns around a smooth and circular corner, these waves can be extended backwards to meet at a point.
Each wave in the expansion fan turns the flow gradually (in small steps). It is physically impossible for the flow to turn through a single "shock" wave because this would violate the second law of thermodynamics . [ 1 ]
Across the expansion fan, the flow accelerates (velocity increases) and the Mach number increases, while the static pressure , temperature and density decrease. Since the process is isentropic , the stagnation properties (e.g. the total pressure and total temperature) remain constant across the fan.
The theory was described by Theodor Meyer in his thesis dissertation in 1908, together with his advisor Ludwig Prandtl , who had already discussed the problem a year earlier. [ 2 ] [ 3 ]
The expansion fan consists of an infinite number of expansion waves or Mach lines . [ 4 ] The first Mach line is at an angle μ 1 = arcsin ( 1 M 1 ) {\displaystyle \mu _{1}=\arcsin \left({\frac {1}{M_{1}}}\right)} with respect to the flow direction, and the last Mach line is at an angle μ 2 = arcsin ( 1 M 2 ) {\displaystyle \mu _{2}=\arcsin \left({\frac {1}{M_{2}}}\right)} with respect to final flow direction. Since the flow turns in small angles and the changes across each expansion wave are small, the whole process is isentropic. [ 1 ] This simplifies the calculations of the flow properties significantly. Since the flow is isentropic, the stagnation properties like stagnation pressure ( p 0 {\displaystyle p_{0}} ), stagnation temperature ( T 0 {\displaystyle T_{0}} ) and stagnation density ( ρ 0 {\displaystyle \rho _{0}} ) remain constant. The final static properties are a function of the final flow Mach number ( M 2 {\displaystyle M_{2}} ) and can be related to the initial flow conditions as follows, where γ {\displaystyle \gamma } is the heat capacity ratio of the gas (1.4 for air):
The Mach number after the turn ( M 2 {\displaystyle M_{2}} ) is related to the initial Mach number ( M 1 {\displaystyle M_{1}} ) and the turn angle ( θ {\displaystyle \theta } ) by,
where, ν ( M ) {\displaystyle \nu (M)\,} is the Prandtl–Meyer function . This function determines the angle through which a sonic flow ( M = 1) must turn to reach a particular Mach number (M). Mathematically,
By convention, ν ( 1 ) = 0. {\displaystyle \nu (1)=0.\,}
Thus, given the initial Mach number ( M 1 {\displaystyle M_{1}} ), one can calculate ν ( M 1 ) {\displaystyle \nu (M_{1})\,} and using the turn angle find ν ( M 2 ) {\displaystyle \nu (M_{2})\,} . From the value of ν ( M 2 ) {\displaystyle \nu (M_{2})\,} one can obtain the final Mach number ( M 2 {\displaystyle M_{2}} ) and the other flow properties. The velocity field in the expansion fan, expressed in polar coordinates ( r , ϕ ) {\displaystyle (r,\phi )} are given by [ 5 ]
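The procedure can be sketched in a few lines of Python (an illustration, not from the article): the standard closed form of the Prandtl–Meyer function for a calorically perfect gas is evaluated, and ν(M 2 ) = ν(M 1 ) + θ is inverted by bisection; γ = 1.4 and the sample numbers are assumptions.

```python
import math

GAMMA = 1.4  # heat capacity ratio, assumed (air)

def prandtl_meyer(M, gamma=GAMMA):
    """Prandtl-Meyer function nu(M) in radians, with nu(1) = 0."""
    g = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    s = math.sqrt(M * M - 1.0)
    return g * math.atan(s / g) - math.atan(s)

def mach_after_turn(M1, theta_deg, gamma=GAMMA):
    """Downstream Mach number M2 such that nu(M2) = nu(M1) + theta (expansion turn)."""
    target = prandtl_meyer(M1, gamma) + math.radians(theta_deg)
    lo, hi = M1, 50.0                    # nu(M) is monotonically increasing, so bisect
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(math.degrees(prandtl_meyer(2.0)), 2))   # ~26.38 degrees
print(round(mach_after_turn(2.0, 10.0), 3))         # M2 ~ 2.385 after a 10 degree turn
```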
h {\displaystyle h} is the specific enthalpy and h 0 {\displaystyle h_{0}} is the stagnation specific enthalpy.
As Mach number varies from 1 to ∞ {\displaystyle \infty } , ν {\displaystyle \nu \,} takes values from 0 to ν max {\displaystyle \nu _{\text{max}}\,} , where
This places a limit on how much a supersonic flow can turn through, with the maximum turn angle given by,
One can also look at it as follows. A flow has to turn so that it can satisfy the boundary conditions. In an ideal flow, there are two kinds of boundary condition that the flow has to satisfy,
If the flow turns enough so that it becomes parallel to the wall, we do not need to worry about pressure boundary condition. However, as the flow turns, its static pressure decreases (as described earlier). If there is not enough pressure to start with, the flow won't be able to complete the turn and will not be parallel to the wall. This shows up as the maximum angle through which a flow can turn. The lower the Mach number is to start with (i.e. small M 1 {\displaystyle M_{1}} ), the greater the maximum angle through which the flow can turn.
The streamline which separates the final flow direction and the wall is known as a slipstream (shown as the dashed line in the figure). Across this line there is a jump in the temperature, density and tangential component of the velocity (normal component being zero). Beyond the slipstream the flow is stagnant (which automatically satisfies the velocity boundary condition at the wall). In case of real flow, a shear layer is observed instead of a slipstream, because of the additional no-slip boundary condition .
Impossibility of expanding a flow through a single "shock" wave:
Consider the scenario shown in the adjacent figure. As a supersonic flow turns, the normal component of the velocity increases ( w 2 > w 1 {\displaystyle w_{2}>w_{1}} ), while the tangential component remains constant ( v 2 = v 1 {\displaystyle v_{2}=v_{1}} ). The corresponding change in the entropy ( Δ s = s 2 − s 1 {\displaystyle \Delta s=s_{2}-s_{1}} ) can be expressed as follows,
where, R {\displaystyle R} is the universal gas constant, γ {\displaystyle \gamma } is the ratio of specific heat capacities, ρ {\displaystyle \rho } is the static density, p {\displaystyle p} is the static pressure, s {\displaystyle s} is the entropy, and w {\displaystyle w} is the component of flow velocity normal to the "shock". The suffix "1" and "2" refer to the initial and final conditions respectively.
Since w 2 > w 1 {\displaystyle w_{2}>w_{1}} , this would mean that Δ s < 0 {\displaystyle \Delta s<0} . Since this is not possible, it means that it is impossible to turn a flow through a single shock wave. The argument may be further extended to show that such an expansion process can occur only if we consider a turn through an infinite number of expansion waves in the limit Δ s → 0 {\displaystyle \Delta s\rightarrow 0} . Accordingly, an expansion process is an isentropic process .
Mach lines are a concept usually encountered in 2-D supersonic flows (i.e. M ≥ 1 {\displaystyle M\geq 1} ). They are a pair of bounding lines which separate the region of disturbed flow from the undisturbed part of the flow. These lines occur in pairs and are oriented at an angle
with respect to the direction of motion (also known as the Mach angle ). In case of 3-D flow field, these lines form a surface known as Mach cone , with Mach angle as the half angle of the cone.
To understand the concept better, consider the case sketched in the figure. We know that when an object moves in a flow, it causes pressure disturbances (which travel at the speed of sound, also known as Mach waves ). The figure shows an object moving from point A to B along the line AB at supersonic speeds ( u > c {\displaystyle u>c} ). By the time the object reaches point B, the pressure disturbances from point A have travelled a distance c·t and are now at circumference of the circle (with centre at point A). There are infinite such circles with their centre on the line AB, each representing the location of the disturbances due to the motion of the object. The lines propagating outwards from point B and tangent to all these circles are known as Mach lines.
Note: These concepts have a physical meaning only for supersonic flows ( u ≥ c {\displaystyle u\geq c} ). In case of subsonic flows, the disturbances will travel faster than the source and the argument of the arcsin ( ) {\displaystyle \arcsin()} function will be greater than one. | https://en.wikipedia.org/wiki/Prandtl–Meyer_expansion_fan |
In aerodynamics , the Prandtl–Meyer function describes the angle through which a flow turns isentropically from sonic velocity (M=1) to a Mach (M) number greater than 1 . The maximum angle through which a sonic ( M = 1) flow can be turned around a convex corner is calculated for M = ∞ {\displaystyle \infty } . For an ideal gas , it is expressed as follows,
where ν {\displaystyle \nu \,} is the Prandtl–Meyer function, M {\displaystyle M} is the Mach number of the flow and γ {\displaystyle \gamma } is the ratio of the specific heat capacities .
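For a calorically perfect gas, the standard closed form of the function (consistent with the convention ν(1) = 0 noted below) is:

{\displaystyle \nu (M)={\sqrt {\frac {\gamma +1}{\gamma -1}}}\,\arctan {\sqrt {{\frac {\gamma -1}{\gamma +1}}\left(M^{2}-1\right)}}-\arctan {\sqrt {M^{2}-1}}}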
By convention, the constant of integration is selected such that ν ( 1 ) = 0. {\displaystyle \nu (1)=0.\,}
As Mach number varies from 1 to ∞ {\displaystyle \infty } , ν {\displaystyle \nu \,} takes values from 0 to ν max {\displaystyle \nu _{\text{max}}\,} , where
where, θ {\displaystyle \theta } is the absolute value of the angle through which the flow turns, M {\displaystyle M} is the flow Mach number and the suffixes "1" and "2" denote the initial and final conditions respectively.
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prandtl–Meyer_function |
Praseodymium(III) fluoride is an inorganic compound with the formula PrF 3 , being the most stable fluoride of praseodymium .
The reaction between praseodymium(III) nitrate and sodium fluoride will obtain praseodymium(III) fluoride as a green crystalline solid: [ 3 ]
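Assuming the aqueous metathesis route just described (nitrate plus sodium fluoride), the balanced equation would read: Pr(NO 3 ) 3 + 3 NaF → PrF 3 ↓ + 3 NaNO 3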
There are also literature reports on the reaction between chlorine trifluoride and various oxides of praseodymium (Pr 2 O 3 , Pr 6 O 11 and PrO 2 ), where praseodymium(III) fluoride is the only product. The reaction between bromine trifluoride and praseodymium oxide left in the air for a period of time also produces praseodymium(III) fluoride, but the reaction is incomplete; the reaction between praseodymium(III) oxalate hydrate and bromine trifluoride can obtain praseodymium(III) fluoride, and carbon is also produced from this reaction. [ 4 ] Praseodymium(III) fluoride can also be obtained by reacting praseodymium oxide and sulfur hexafluoride at 584 °C. [ 5 ]
Praseodymium(III) fluoride forms pale green crystals of trigonal system [ 6 ] (or hexagonal system [ 7 ] ), space group P 3c1, [ 6 ] (or P 6/mcm [ 7 ] ), cell parameters a = 0.7078 nm, c = 0.7239 nm, Z = 6, structure like cerium(III) fluoride (CeF 3 ).
Praseodymium(III) fluoride is a green, odourless, hygroscopic solid that is insoluble in water. [ 8 ]
Praseodymium(III) fluoride is used as a doping material for laser crystals. [ 9 ] | https://en.wikipedia.org/wiki/Praseodymium(III)_fluoride |
Praseodymium(III) nitride is a binary inorganic compound of praseodymium and nitrogen . [ 2 ] Its chemical formula is PrN . [ 3 ] The compound forms black crystals, and reacts with water.
Praseodymium(III) nitride can be prepared by the reaction of nitrogen and metallic praseodymium on heating:
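The corresponding balanced equation is presumably: 2 Pr + N 2 → 2 PrN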
It can also be prepared from the reaction of ammonia and praseodymium metal on heating:
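For the ammonia route, the balanced equation would read: 2 Pr + 2 NH 3 → 2 PrN + 3 H 2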
Praseodymium(III) nitride forms black crystals of a cubic system . The space group is Fm 3 m , [ 4 ] with cell parameter a = 0.5165 nm, Z = 4, its structure similar to that of sodium chloride (NaCl).
The compound is readily hydrolyzed with water and reacts with acids.
The compound is used in high-end electric and semiconductor products, and as a raw material to produce phosphor. Also it is used as a magnetic material and sputtering target material. [ 5 ] | https://en.wikipedia.org/wiki/Praseodymium(III)_nitride |
Praseodymium(III) oxalate is an inorganic compound , a salt of praseodymium metal and oxalic acid , with the chemical formula C 6 O 12 Pr 2 . [ 1 ] The compound forms light green crystals that are insoluble in water. It also forms crystalline hydrates.
Praseodymium(III) oxalate can be prepared from the reaction of soluble praseodymium salts with oxalic acid :
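Taking the nitrate as a representative soluble praseodymium salt (an assumption for illustration), the precipitation can be written as: 2 Pr(NO 3 ) 3 + 3 H 2 C 2 O 4 → Pr 2 (C 2 O 4 ) 3 ↓ + 6 HNO 3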
Praseodymium(III) oxalate forms crystalline hydrates (light green crystals): Pr 2 (C 2 O 4 ) 3 •10H 2 O. The crystalline hydrate decomposes stepwise when heated: [ 2 ] [ 3 ]
Praseodymium(III) oxalate is used as an intermediate product in the synthesis of praseodymium. It is also applied to colour some glasses and enamels. If mixed with certain other materials, the compound paints glass intense yellow. [ 4 ] | https://en.wikipedia.org/wiki/Praseodymium(III)_oxalate |
Praseodymium(IV) fluoride (also praseodymium tetrafluoride ) is a binary inorganic compound , a highly oxidised metal salt of praseodymium and fluoride [ 1 ] with the chemical formula PrF 4 .
Praseodymium(IV) fluoride can be prepared by the effect of krypton difluoride on praseodymium(IV) oxide : [ 2 ]
Praseodymium(IV) fluoride can also be made by the dissolution of sodium hexafluoropraseodymate(IV) in liquid hydrogen fluoride : [ 3 ]
Praseodymium(IV) fluoride forms light yellow crystals. The crystal structure is anticubic and isomorphic to that of uranium tetrafluoride UF 4 . It decomposes when heated:
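Presumably the decomposition proceeds to the stable trifluoride with release of fluorine: 2 PrF 4 → 2 PrF 3 + F 2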
Due to the high normal potential of the tetravalent praseodymium cations (Pr3+ / Pr4+: +3.2 V), praseodymium(IV) fluoride decomposes in water, releasing oxygen, O 2 . | https://en.wikipedia.org/wiki/Praseodymium(IV)_fluoride |
The Prato reaction is a particular example of the well-known 1,3-dipolar cycloaddition of azomethine ylides to olefins . [ 1 ] In fullerene chemistry this reaction refers to the functionalization of fullerenes and nanotubes . The amino acid sarcosine reacts with paraformaldehyde when heated at reflux in toluene to an ylide which reacts with a double bond in a 6,6 ring position in a fullerene via a 1,3-dipolar cycloaddition to yield a N-methylpyrrolidine derivative or pyrrolidinofullerene or pyrrolidino[[3,4:1,2]] [60]fullerene in 82% yield based on C 60 conversion. [ 2 ]
In one application a liquid fullerene is obtained when the pyrrolidine substituent is a 2,4,6-tris(alkyloxy)phenyl group, [ 3 ] although a small amount of solvent is still possibly present.
This reaction was derived from the work of Otohiko Tsuge [ 4 ] on Azomethine Ylide Chemistry developed in the late 1980s. Tsuge's work was applied to fullerenes by Maurizio Prato , thus gaining the name.
It is known that the Prato reaction is very useful to functionalize endohedral metallofullerenes. Prato reaction on M3N@C80 gives initially [5,6]-adduct (kinetic product), which convert upon heating to the [6,6]-adduct (thermodynamic product). [ 5 ] The rate of isomerization is highly dependent on the metal size inside the carbon cage. [ 6 ]
This method is also used in the functionalization of single wall nanotubes. [ 7 ] When the amino acid is modified with a glycine chain, the resulting nanotubes are soluble in common solvents such as chloroform and acetone . Another characteristic of the treated nanotubes is their larger aggregate dimensions compared to untreated nanotubes.
In an alternative method a nanotube addition is performed with the N-oxide of trimethylamine and LDA [ 8 ] at reflux in tetrahydrofuran with an efficiency of 1 functional group in 16 nanotube carbon atoms. When the amine also carries an aromatic group such as pyrene the reaction takes place even at room temperature because this group preorganizes itself to the nanotube surface prior to reaction by pi stacking .
Just as in other fullerene reactions like the Bingel reaction or Diels-Alder reactions, this reaction can be reversed. A thermal cycloelimination of a pyrrolidinofullerene with a strong dipolarophile such as maleic acid and a catalyst such as Wilkinson's catalyst or copper triflate in 1,2-dichlorobenzene at reflux for 8 to 18 hours regenerates the pristine C 60 fullerene. [ 9 ] The dipolarophile is required in a 30-fold excess and traps the ylide, driving the reaction to completion. The N-methylpyrrolidine derivative reacts poorly (5% yield) and for a successful reaction the nitrogen ring also requires substitution in the α-position with methyl , phenyl or carboxylic ester groups.
Other methods have been investigated: by applying heat [ 10 ] or via a combination of ionic liquid and microwave chemistry . [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Prato_reaction |
Pre-B-cell leukemia homeobox (PBX) refers to a family of transcription factors. [ 1 ]
Types include:
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pre-B-cell_leukemia_homeobox |
Pre-algebra is a common name for a course taught in middle school mathematics in the United States, usually taught in the 6th, 7th, 8th, or 9th grade. [ 1 ] The main objective of it is to prepare students for the study of algebra . Usually, Algebra I is taught in the 8th or 9th grade . [ 2 ]
As an intermediate stage after arithmetic , pre-algebra helps students pass specific conceptual barriers. Students are introduced to the idea that an equals sign , rather than just being the answer to a question as in basic arithmetic, means that two sides are equivalent and can be manipulated together. They may also learn how numbers, variables , and words can be used in the same ways. [ 3 ]
Subjects taught in a pre-algebra course may include:
Pre-algebra may include subjects from geometry , especially to further the understanding of algebra in applications to area and volume .
Pre-algebra may also include subjects from statistics to identify probability and interpret data.
Proficiency in pre-algebra is an indicator of college success. It can also be taught as a remedial course for college students. [ 5 ]
| https://en.wikipedia.org/wiki/Pre-algebra
Pre-charge of the powerline voltages in a high voltage DC application is a preliminary mode which limits the inrush current during the power up procedure.
A high-voltage system with a large capacitive load can be exposed to high electric current during initial turn-on. This current, if not limited, can cause considerable stress or damage to the system components. In some applications, the occasion to activate the system is a rare occurrence, such as in commercial utility power distribution. In other systems such as vehicle applications, pre-charge will occur with each use of the system, multiple times per day. Precharging is implemented to increase the lifespan of electronic components and increase reliability of the high voltage system.
Inrush currents into capacitive components are a key concern in power-up stress to components. When DC input power is applied to a capacitive load, the step response of the voltage input will cause the input capacitor to charge. The capacitor charging starts with an inrush current and ends with an exponential decay down to the steady state condition. When the magnitude of the inrush peak is very large compared to the maximum rating of the components, then component stress is to be expected.
The current into a capacitor is known to be I = C(dV/dT): the peak inrush current will depend upon the capacitance C and the rate of change of the voltage (dV/dT). The inrush current will increase as the capacitance value increases, and the inrush current will increase as the voltage of the power source increases. This second parameter is of primary concern in high voltage power distribution systems. By their nature, high voltage power sources will deliver high voltage into the distribution system. Capacitive loads will then be subject to high inrush currents upon power-up. The stress to the components must be understood and minimized.
The objective of a pre-charge function is to limit the magnitude of the inrush current into capacitive loads during power-up. This may take several seconds depending on the system. In general, higher voltage systems benefit from longer pre-charge times during power-up.
Consider an example where a high voltage source powers up a typical electronics control unit which has an internal power supply with 11000 μF input capacitance. When powered from a 28 V source, the inrush current into the electronics unit would approach 31 amperes in 10 milliseconds. If that same circuit is activated by a 610 V source, then the inrush current would approach 670 A in 10 milliseconds. It is wise not to allow unlimited inrush currents from high voltage power distribution system activation into capacitive loads: instead the inrush current should be controlled to avoid power-up stress to components.
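These figures follow directly from I = C(dV/dT). The following is a minimal sketch of that arithmetic in Python; the capacitance, source voltages, and 10 ms window are the illustrative values from this example, not data from a real design.

```python
# Minimal sketch of the I = C * (dV/dT) inrush estimate used in the example above.
# All numbers are the illustrative values from the text, not from an actual design.

def inrush_current(capacitance_f: float, delta_v: float, delta_t: float) -> float:
    """Approximate peak inrush current for a voltage step delta_v applied over delta_t."""
    return capacitance_f * delta_v / delta_t

C = 11_000e-6   # 11000 microfarads of input capacitance
dt = 10e-3      # 10 millisecond charging window

print(f"28 V source:  {inrush_current(C, 28.0, dt):.0f} A")   # ~31 A
print(f"610 V source: {inrush_current(C, 610.0, dt):.0f} A")  # ~670 A
```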
The functional requirement of the high voltage pre-charge circuit is to minimize the peak current drawn from the power source by slowing down the dV / dT of the input power voltage such that a new "pre-charge mode" is created. The inductive loads on the distribution system must be switched off during the pre-charge mode, due to the dI / dT dependency. While pre-charging, the system voltage will rise slowly and controllably, with the power-up current never exceeding the maximum allowed value. As the circuit voltage approaches steady state, the pre-charge function is complete. Normal operation of a pre-charge circuit is to terminate pre-charge mode when the circuit voltage is 90% or 95% of the operating voltage. Upon completion of pre-charging, the pre-charge resistance is switched out of the power supply circuit, returning it to a low impedance power source for normal mode. The high voltage loads are then powered up sequentially.
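As a rough illustration of how the 90–95% termination threshold translates into a pre-charge time, the sketch below assumes the simplest implementation, a series pre-charge resistor charging the load capacitance; the resistor value is an arbitrary example and is not specified in the text.

```python
import math

# Sketch of a resistor-based pre-charge (assumed implementation; the text does not
# prescribe a specific circuit). A series resistor R_pre charges the load capacitance.
V_source = 610.0       # volts, reusing the example source voltage
C_load = 11_000e-6     # farads, reusing the example input capacitance
R_pre = 100.0          # ohms -- arbitrary illustrative value

peak_current = V_source / R_pre                    # worst case, at the moment the circuit closes
t_95 = R_pre * C_load * math.log(1 / (1 - 0.95))   # time to reach 95% of the source voltage

print(f"peak pre-charge current: {peak_current:.1f} A")    # 6.1 A
print(f"time to 95% (pre-charge complete): {t_95:.2f} s")  # ~3.3 s
```

Consistent with the statement above that pre-charging may take several seconds, the R·C time constant sets the duration, so the resistor value is a trade-off between peak current and power-up delay.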
The simplest inrush-current limiting system, used in many consumer electronics devices, is an NTC resistor . When cold, its high resistance allows a small current to pre-charge the reservoir capacitor . After it warms up, its low resistance more efficiently passes the working current.
Many active power factor correction systems also include soft start .
If the example circuit from before is used with a pre-charge circuit which limits the dV / dT to less than 600 volts per second, then the inrush current will be reduced from 670 amperes to 7 amperes. This is a "kinder and gentler" way to activate a high voltage DC power distribution system.
The primary benefit of avoiding component stress during power-up is to realize a long system operating life due to reliable and long lasting components.
There are additional benefits: pre-charging reduces the electrical hazards which may occur when the system integrity is compromised due to hardware damage or failure. Activating the high voltage DC system into a short circuit or a ground fault or into unsuspecting personnel and their equipment can have undesired effects. Arc flash will be minimized if a pre-charge function slows down the activation time of a high voltage power-up. A slow pre-charge will also reduce the voltage into a faulty circuit which builds up while the system diagnostics come on-line. This allows a diagnostic shut down before the fault is fully realized in worst case proportions.
In cases where unlimited inrush current is large enough to trip the source circuit breaker , a slow precharge may even be required to avoid the nuisance trip.
Pre-charging is commonly used in battery electric vehicle applications. The current to the motor is regulated by a controller that employs large capacitors in its input circuit. [ 1 ] Such systems typically have contactors (a high-current relay ) to disable the system during inactive periods and to act as an emergency disconnect should the motor current regulator fail in an active state. Without pre-charge the high voltage across the contactors and inrush current can cause a brief arc which will cause pitting of the contacts. Pre-charging the controller input capacitors (typically to 90 to 95 percent of applied battery voltage) eliminates the pitting problem. The current to maintain the charge is so low that some systems apply the pre-charge at all times other than when charging batteries, while more complex systems apply pre-charge as part of the starting sequence and will defer main contactor closure until the pre-charge voltage level is detected as sufficiently high. | https://en.wikipedia.org/wiki/Pre-charge |
Pre-construction services are services that are offered to support owners, architects, and engineers in making decisions. [ 1 ] They are used in planning a construction project before the actual construction begins. The stage where these services are offered is called pre-construction or "pre-con".
In the long-established design-bid-build method of construction project delivery, a project would be entirely designed before being built. [ 2 ] This resulted in a package of plans and specifications which formed the construction documents. After the acquisition of the construction documents, the owner would solicit bids (or tenders) from general contractors willing to manage the project. The owner will award the project to a successful bidder; in the design-bid-build method of construction project delivery, the award would be based on price. [ 1 ]
Often the early feasibility studies and design development are supported by construction cost estimators, who prepare cost estimates . Usually these early estimates are conceptual or rough-order-of-magnitude (ROM) estimates. [ 1 ] The design then starts with a schematic design (SD) stage, followed by a design development stage, and culminates in a construction documents stage.
In the design-bid-build system, there is a construction bidding process that falls between the owner's original solicitation and the submission of a bid by a contractor. This only occurs after the completion of the construction documents. During this process, a contractor's construction cost estimator prepares a detailed estimate which will be needed for the submission of a bid. The creation of an estimate requires the inclusion of the cost of labor, overhead, profit, and equipment. The construction documents will also play a major role in estimating. [ 3 ]
Pre-construction services grew out of construction cost estimating to encompass the other activities in planning a project. [ 4 ] The intent is to work with the project's owner to help deliver a satisfactory project that meets the owner's objectives. In addition to estimating, the preconstruction team participates in design decisions, evaluations, studies, value engineering , value analysis, scheduling, constructability reviews, and more. Design costs, permitting, land acquisition, and life-cycle costs may also be evaluated. In delivering pre-construction services, general contractors or construction managers may also be negotiating for project construction services. Often this may be accomplished by agreeing on a guaranteed maximum price (GMP) for the project. The firm then delivers the project. Typically the owner and the firm share any cost savings realized during construction.
Pre-construction services cover a large range of jobs and activities that need to be completed before the start of a project. These services help lay the foundation for the project, supporting its success by making progress as smooth as possible. [ 1 ] Services may happen at different points depending on which project delivery method is used. These services are offered by project managers, engineers, contractors, and architects. Pre-construction services often cover the following:
These are but a few examples of the services that are offered. Other services include, but are not limited to, design management, life-cycle analysis, feasibility studies, constructability reviews, bidding, and subcontracting analysis.
In chemical kinetics , the pre-exponential factor or A factor is the pre-exponential constant in the Arrhenius equation (equation shown below), an empirical relationship between temperature and rate coefficient . It is usually designated by A when determined from experiment, while Z is usually left for collision frequency . The pre-exponential factor can be thought of as a measure of the frequency of properly oriented collisions. It is typically determined experimentally by measuring the rate constant k at a particular temperature and fitting the data to the Arrhenius equation. The pre-exponential factor is generally not exactly constant, but rather depends on the specific reaction being studied and the temperature at which the reaction is occurring. [ 1 ]
A = \frac{k}{e^{-E_{a}/(RT)}} = k e^{E_{a}/(RT)}
The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. For a first-order reaction, it has units of s −1 . For that reason, it is often called frequency factor .
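As a hedged illustration of how A can be obtained from a single measured rate constant, the sketch below simply rearranges the Arrhenius equation; the rate constant, activation energy, and temperature used are hypothetical values, not data from the article.

```python
import math

# Sketch: recovering the pre-exponential factor A from one measured rate constant,
# using the rearranged Arrhenius equation A = k * exp(Ea / (R * T)).
# The values of k, Ea and T below are hypothetical and purely illustrative.
R = 8.314        # J/(mol*K), gas constant
k = 3.2e-3       # s^-1, measured first-order rate constant (hypothetical)
Ea = 75_000.0    # J/mol, activation energy (hypothetical)
T = 298.15       # K

A = k * math.exp(Ea / (R * T))   # same units as k; s^-1 for a first-order reaction
print(f"A = {A:.2e} s^-1")
```

In practice, A is usually obtained by fitting measurements of k over several temperatures rather than from a single point, since, as noted above, the pre-exponential factor itself can vary with temperature.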
According to collision theory , the frequency factor, A, depends on how often molecules collide when all concentrations are 1 mol/L and on whether the molecules are properly oriented when they collide. Values of A for some reactions can be found at Collision theory .
According to transition state theory , A can be expressed in terms of the entropy of activation of the reaction.
| https://en.wikipedia.org/wiki/Pre-exponential_factor
In the philosophy of mathematics , the pre-intuitionists is the name given by L. E. J. Brouwer to several influential mathematicians who shared similar opinions on the nature of mathematics. The term was introduced by Brouwer in his 1951 lectures at Cambridge where he described the differences between his philosophy of intuitionism and its predecessors: [ 1 ]
Of a totally different orientation [from the "Old Formalist School" of Dedekind , Cantor , Peano , Zermelo , and Couturat , etc.] was the Pre-Intuitionist School, mainly led by Poincaré , Borel and Lebesgue . These thinkers seem to have maintained a modified observational standpoint for the introduction of natural numbers , for the principle of complete induction [...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic.
The pre-intuitionists, as defined by L. E. J. Brouwer , differed from the formalist standpoint in several ways, [ 1 ] particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. For Poincaré , the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence.
This is to say that no mathematical object exists without human construction of it, both in mind and language.
This sense of definition allowed Poincaré to argue with Bertrand Russell over Giuseppe Peano's axiomatic theory of natural numbers .
Peano's fifth axiom states that if a property holds for zero, and whenever it holds for a natural number it also holds for the successor of that number, then it holds for all natural numbers.
This is the principle of complete induction , which establishes the property of induction as necessary to the system. Since Peano's axiom ranges over the infinitely many natural numbers , it is difficult to prove that the property P belongs to every x and also to x + 1. What one can do is say that, if after some number n of trials a property P is found to be conserved in x and x + 1, then we may infer that it will still hold to be true after n + 1 trials. But this is itself induction, and hence the argument begs the question .
From this Poincaré argues that if we fail to establish the consistency of Peano's axioms for natural numbers without falling into circularity, then the principle of complete induction is not provable by general logic .
Thus arithmetic, and mathematics in general, is not analytic but synthetic . Logicism is thus rebuked and intuition is upheld. What Poincaré and the Pre-Intuitionists shared was the perception of a difference between logic and mathematics that is not a matter of language alone, but of knowledge itself.
It was for this assertion, among others, that Poincaré was considered to be similar to the intuitionists. For Brouwer though, the Pre-Intuitionists failed to go as far as necessary in divesting mathematics from metaphysics, for they still used principium tertii exclusi (the " law of excluded middle ").
The principle of the excluded middle does lead to some strange situations. For instance, statements about the future such as "There will be a naval battle tomorrow" do not seem to be either true or false yet . So there is some question whether statements must be either true or false in some situations. To an intuitionist this seems to rank the law of excluded middle as just as unrigorous as Peano's vicious circle.
Yet to the Pre-Intuitionists this is mixing apples and oranges. For them mathematics was one thing (a muddled invention of the human mind, i.e. , synthetic), and logic was another (analytic).
The above examples only include the works of Poincaré , and yet Brouwer named other mathematicians as Pre-Intuitionists too; Borel and Lebesgue . Other mathematicians such as Hermann Weyl (who eventually became disenchanted with intuitionism, feeling that it places excessive strictures on mathematical progress) and Leopold Kronecker also played a role—though they are not cited by Brouwer in his definitive speech.
In fact Kronecker might be the most famous of the Pre-Intuitionists for his singular and oft quoted phrase, "God made the natural numbers; all else is the work of man."
Kronecker goes in almost the opposite direction from Poincaré, believing in the natural numbers but not the law of the excluded middle. He was the first mathematician to express doubt on non-constructive existence proofs that state that something must exist because it can be shown that it is "impossible" for it not to. | https://en.wikipedia.org/wiki/Pre-intuitionism |
A preamplifier , also known as a preamp , is an electronic amplifier that converts a weak electrical signal into an output signal strong enough to be noise-tolerant and strong enough for further processing, or for sending to a power amplifier and a loudspeaker . [ 1 ] Without this, the final signal would be noisy or distorted. They are typically used to amplify signals from analog sensors such as microphones and pickups . [ 2 ] Because of this, the preamplifier is often placed close to the sensor to reduce the effects of noise and interference .
An ideal preamp will be linear (have a constant gain through its operating range) and have high input impedance (requiring only a minimal amount of current to sense the input signal) and low output impedance (when current is drawn from the output there is minimal change in the output voltage). It is used to boost the signal strength to drive the cable to the main instrument without significantly degrading the signal-to-noise ratio (SNR). The noise performance of a preamplifier is critical. According to Friis's formula , when the gain of the preamplifier is high, the SNR of the final signal is determined by the SNR of the input signal and the noise figure of the preamplifier.
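To make the Friis point concrete, the sketch below cascades a low-noise, high-gain preamplifier with a noisier downstream stage; the gain and noise-figure numbers are hypothetical and only illustrate that the preamplifier dominates the overall noise figure when its gain is high.

```python
import math

# Sketch of Friis's formula for two cascaded stages: a preamplifier followed by a
# noisier downstream stage. All gain and noise-figure values are hypothetical.
def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

def lin_to_db(lin: float) -> float:
    return 10 * math.log10(lin)

nf_preamp_db, gain_preamp_db = 2.0, 40.0   # hypothetical preamplifier: 2 dB NF, 40 dB gain
nf_stage2_db = 15.0                        # hypothetical noisy second stage

F1, G1 = db_to_lin(nf_preamp_db), db_to_lin(gain_preamp_db)
F2 = db_to_lin(nf_stage2_db)

F_total = F1 + (F2 - 1) / G1               # Friis's formula for two stages
print(f"overall noise figure: {lin_to_db(F_total):.2f} dB")  # ~2.01 dB, set almost entirely by the preamp
```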
Three basic types of preamplifiers are available:
In an audio system, they are typically used to amplify signals from analog sensors to line level . The second amplifier is typically a power amplifier (power amp). The preamplifier provides voltage gain (e.g., from 10 mV to 1 V) but no significant current gain. The power amplifier provides the higher current necessary to drive loudspeakers . For these systems, some common sensors are microphones , instrument pickups , and phonographs . Preamplifiers are often integrated into the audio inputs on mixing consoles , DJ mixers , and sound cards . They can also be stand-alone devices. | https://en.wikipedia.org/wiki/Preamplifier |
Until the late 1950s, the Precambrian was not believed to have hosted multicellular organisms. However, with radiometric dating techniques, it has been found that fossils initially found in the Ediacara Hills in Southern Australia date back to the late Precambrian. These fossils are body impressions of organisms shaped like disks, fronds and some with ribbon patterns that were most likely tentacles .
These are the earliest multicellular organisms in Earth's history, despite the fact that unicellularity had been around for a long time before that. The requirements for multicellularity were embedded in the genes of some of these cells, specifically choanoflagellates . These are thought to be the precursors for all animals. They are closely related to sponges (Porifera), which are the simplest multicellular animals.
In order to understand the transition to multicellularity during the Precambrian, it is important to look at the requirements for multicellularity—both biological and environmental.
The Precambrian dates from the beginning of Earth's formation (4.6 billion years ago) to the beginning of the Cambrian Period , 539 million years ago. [ 1 ] [ 2 ] The Precambrian consists of the Hadean , Archaean and Proterozoic eons. [ 1 ] Specifically, this article examines the Ediacaran , when the first multicellular bodies are believed to have arisen, as well as what caused the rise of multicellularity. [ 3 ] This time period followed the Snowball Earth of the mid Neoproterozoic. The "Snowball Earth" was a period of worldwide glaciation , which is believed to have served as a population bottleneck for the subsequent evolution of multicellular organisms. [ 4 ]
The Earth formed around 4.6 billion years ago, with unicellular life emerging somewhat later, after the cessation of the Late Heavy Bombardment, a period of intense asteroid impacts possibly caused by the migration of the gas giant planets to their current orbits; multicellularity and bodies, however, are a relatively recent development in Earth's history. [ 5 ] Bodies first started appearing towards the end of the Precambrian Era, during the Ediacaran period. The fossils of the Ediacaran period were first found in Southern Australia in the Ediacara Hills , hence the name. However, these fossils were initially thought to be part of the Cambrian, and it was not until the late 1950s that Martin Glaessner identified them as actually being from the Precambrian era. The fossils that were found date to about 600 million years ago and occur in a variety of morphologies. [ 5 ]
For more information, see Ediacaran biota .
The fossils that date back to the Precambrian lack distinct structures since there were no skeletal forms during this period. [ 5 ] Skeletons did not arise until the Cambrian Period, when oxygen levels increased. This is because skeletons require collagen , whose synthesis uses Vitamin C as a cofactor and therefore requires oxygen . [ 6 ] For more information on the rise of oxygen see the section on oxygen. The majority of fossils from this era come from either Mistaken Point on the East Coast of Canada or the Ediacara Hills in Southern Australia . [ 5 ]
Most of the fossils are found as impressions of soft-bodied organisms in the shape of disks, ribbons or fronds. [ 3 ] [ 5 ] There are also trace fossils that provide evidence that some of these Precambrian organisms were most likely worm-like creatures capable of locomotion. [ 7 ] Most of these fossils lack any recognizable heads, mouths or digestive organs, and are thought to have fed via absorptive mechanisms and symbiotic relationships with chemoautotrophs ( Chemotroph ), photoautotrophs ( Phototroph ) or osmoautotrophs. [ 1 ] The ribbon-like fossils resemble tentacled organisms, and are thought to have fed by capturing prey. The frondose fossils resemble sea pens and other cnidarians . The trace fossils suggest that there were annelid type creatures, and the disk fossils resemble sponges. Despite these similarities, much of the identification is speculation since the fossils do not show very distinct structures. Other fossils do not resemble any known lineages. [ 1 ]
Many of the organisms, such as Charnia , found at Mistaken Point , were not like any organisms seen today. They had distinct bodies but lacked a head and digestive regions. Rather, their body was organized in a very simple, fractal -like branching pattern. [ 8 ] Every element of the body was finely branched and grew by repetitive branching. This allowed the organism to have a large surface area and maximize nutrient absorption without needing a mouth and digestive system . However, they carried minimal genetic information and therefore lacked the requirements that would have allowed them to evolve more efficient feeding techniques. This means they were probably outcompeted by other organisms, and thus became extinct. [ 8 ]
The organisms found in the Ediacara Hills in Southern Australia mostly displayed radially symmetric body plans, though one organism, Spriggina, displayed the first bilateral symmetry. The Ediacara Hills are thought to have once had a shallow reef where more light could penetrate to the bottom of the ocean floor. This allowed for more diversity of organisms. The organisms found here resemble relatives of the cnidarians , mollusks or annelids . [ 8 ]
Charnia fossils were originally found in the Charnwood Forest in England , hence the name Charnia . [ 8 ] These fossils are from marine organisms that lived on the bottom of the ocean floor. The fossils have a fractal body plan and were frond shaped, meaning they resembled broad-leafed plants such as ferns. However, they could not have been plants since they resided in the dark depths of the ocean floor. In Charnwood Forest , Charnia was found as an isolated species; however, many more fossils were found on the East Coast of Canada at Mistaken Point in Newfoundland. Charnia was attached to the bottom of the ocean floor, and was strongly current aligned. This is seen because there are disk-like shapes at the bottom of the Charnia fossil, which show where Charnia was tethered, and all the nearby fossils are facing the same direction. The fossils at Mistaken Point were well preserved under volcanic ash and layers of soft mud. [ 8 ] It has been determined via radiometric dating of the fossils that Charnia must have lived around 565 million years ago. [ 4 ] [ 9 ]
Dickinsonia fossils are another notable fossil from the Ediacaran period, found in Southern Australia and Russia . [ 10 ] It remains unknown what type of organism Dickinsonia was; however, it has been considered a polychaete , turbellarian/annelid worm, jellyfish , polyp, protist, lichen or mushroom . [ 10 ] They were preserved in quartz sandstones, and date back to around 550 million years ago. Dickinsonia were soft-bodied organisms, that show some evidence of very slow movement. [ 4 ] There are faint, circular imprints in the rock which follow a path, and then following the same path there is a more definite circular imprint of the same size. This indicates that the organism probably moved slowly from one feeding area to the next and absorbed nutrients. It is speculated that the organism probably had very small appendages that allowed it to move much like starfish do today. [ 11 ]
Spriggina fossils represent the first known organisms with a bilaterally symmetric body plan. They had a head, tail and almost identical halves. [ 3 ] They probably had sensory organs in the head and digestive organs in the tail which would have allowed them to find food more efficiently. They were capable of locomotion, which gave them an advantage over other organisms from that era that were either tethered to the bottom of the ocean floor or moved very slowly. Spriggina was soft-bodied, which leaves the fossils as faint imprints. It is most likely related to annelids; however, there is some speculation that it could be related to arthropods since it somewhat resembles trilobite fossils. [ 3 ] [ 5 ]
The Ediacaran fossils of Southern Australia contain trace fossils, which indicate that there were motile benthic organisms. The organisms that produced the traces in the sediments were all worm-like sediment feeders or detritus feeders ( Detritivore ). There are a few trace fossils, which resemble arthropod trails. Evidence suggests that arthropod-like organisms existed during the Precambrian . This evidence is in the type of trails left behind; specifically one specimen that shows six pairs of symmetrically placed impressions, which resemble trilobite walking trails. [ 7 ]
For the majority of Earth's history, life has been unicellular . However, unicellular organisms had the ingredients in them for multicellularity to arise. Despite having the ingredients for multicellularity, organisms were restricted by the lack of hospitable environmental conditions. The rise of oxygen (the Great Oxygenation Event ) allowed organisms to develop more complex body plans. In order for multicellularity to have occurred, organisms must have been capable of cellular communication , aggregation, and specialized functions. The transition to multicellularity that began the evolution of animals from protozoa is one of the most poorly understood events in the history of life. Understanding choanoflagellates and their relation to sponges is important when positing theories on the origins of multicellularity. [ 12 ]
Choanoflagellates , also called "collar-flagellates" are unicellular protists that exist in both freshwaters and oceans. [ 13 ] Choanoflagellates have a spherical (or ovoid) cell body and a flagellum that is surrounded by a collar composed of actin microvilli. [ 13 ] [ 14 ] The flagellum is used to facilitate movement and food intake. As the flagellum beats, it takes in water through the microvilli attached to the collar, which helps filter out unwanted bacteria and other tiny food particles. [ 13 ] Choanoflagellates are composed of approximately 150 species and reproduce by simple division. [ 15 ]
(also known as Choanoflagellate Proterospongia)
The choanoflagellate Salpingoeca rosetta is a rare freshwater eukaryote consisting of a number of cells embedded in a jelly-like matrix. This organism demonstrates a very primitive level of cell differentiation and specialization. [ 15 ] This is seen with flagellated cells and their collar structures that move the cell colony through the water, while the amoeboid cells on the inside serve to divide into new cells to assist in colony growth. Similar low level cellular differentiation and specialization can also be seen in sponges. They also have collar cells (also called choanocytes due to their similarities to choanoflagellates) and amoeboid cells arranged in a gelatinous matrix. Unlike the choanoflagellate Salpingoeca rosetta , sponges also have other cell-types that can perform different functions (see sponges ). Also, the collar cells of sponges beat within canals in the sponge body, whereas Salpingoeca rosetta 's collar cells reside on the inside and it lacks internal canals. Despite these minor differences, there is strong evidence that Proterospongia and Metazoa are closely related. [ 15 ]
These choanoflagellates are able to attach to one another via the pairing of collar microvilli. [ 16 ]
These choanoflagellates are capable of forming colonies via fine intercellular bridges that allow the individual cells to attach. These bridges resemble ring canals that link developing spermatogonia or oogonia in animals. [ 16 ]
Sponges are some of Earth's oldest and most ubiquitous animals. Sponge spicule fossils date back to the Precambrian Era, around 580 million years ago. [ 17 ] An assemblage of these fossils was found in the Doushantuo Formation in Southern China. Some circular impressions from the Ediacara Hills in Southern Australia are also reported to be sponges. They are one of the only lineages of metazoans from this era that continue to survive, and remain relatively unchanged. [ 17 ] [ 18 ] Sponges are such successful organisms due to their simple, yet effective morphology . They do not possess mouths or any digestive, nervous or circulatory systems. Instead they are filter feeders , which means that they obtain food through nutrients in the water. [ 19 ] They have pores, called ostia , through which water travels to a chamber called the spongocoel , before exiting through an opening called the osculum . [ 19 ] Through this water filtration system, they obtain nutrients that are needed for their survival. Specifically, they intracellularly digest bacteria, micro-algae or colloids. [ 20 ]
Sponge skeletons consist of either spongin or calcareous and siliceous spicules with some collagen molecules interspersed. [ 21 ] The collagen holds the sponge cells together. Different lineages of sponges are distinguished based on the composition of their skeletons. The three main classes of sponges are Demospongiae , Hexactinellid , and Calcareous .
Demosponges are the most well-known type of sponge since they are used by humans. They are distinguished by a siliceous skeleton of two- and four-rayed spicules and contain the protein spongin.
Hexactinellids are also called glass sponges, and are distinguished by a six-rayed glass skeleton. These sponges are also capable of propagating action potentials.
Calcareous sponges are characterized by a calcium carbonate skeleton and comprise less than 5% of sponges. [ 21 ]
Sponges have around 6 different types of cells that can perform different functions. [ 21 ] Sponges are a good model for studying the origin of multicellularity because the cells are capable of communicating with one another and re-aggregating. In an experiment conducted by Henry Van Peters Wilson in 1910, it was found that cells from dissociated sponges could send out signals and recognize each other to form a new individual. [ 22 ] This suggests that the cells that compose sponges are capable of independent living, however once multicellularity was possible then aggregating together to form one organism was a more efficient way of living.
The most notable cell types of sponges are the goblet-shaped cells called choanocytes , so named for their similarity to choanoflagellates. [ 21 ] The similarities between these two cell types make scientists believe that choanoflagellates are the sister taxa to metazoa. The flagella of these cells are what drive the water movement through the sponge body. [ 23 ] The cell body of choanocytes is what is responsible for nutrient absorption. In some species these cells can develop into gametes . [ 21 ]
The Pinacocytes are the cells on the exterior of the sponge that line the cell body. They are tightly packed together and very thin. [ 21 ]
The mesenchyme lines the region between the pinacocytes and the choanocytes. It contains a matrix composed of proteins and spicules. [ 21 ]
Archaeocytes are special types of cells, in that they can transform into all of the other cell types. They do whatever is needed in the sponge body, such as ingesting and digesting food and transporting nutrients to other cells. These cells are also capable of developing into gametes in some sponge species. [ 21 ]
The sclerocytes are responsible for the secretion of spicules. In species of sponges that use spongin instead of calcareous and siliceous spicules, the sclerocytes are replaced by spongocytes, which secrete spongin skeletal fibres. [ 21 ]
The myocytes and porocytes are responsible for contraction of the sponge. These contractions are analogous to muscle contractions in other organisms, since sponges do not have muscles. They are responsible for regulating the water flow through the sponge. [ 21 ]
The formation of multicellularity was a pivotal point in the evolution of life on Earth. Shortly after multicellularity arose, there was an immense increase in the diversity of living organisms at the beginning of the Cambrian Era, called the Cambrian Explosion . Multicellularity is believed to have evolved multiple times on Earth because it was a beneficial life strategy for organisms. [ 24 ] For multicellularity to occur, cells need to be capable of self-replication, cell-cell adhesion and cell-cell communication. There also must have been available oxygen and selective pressures in the environment.
Work by Fairclough, Dayel and King suggests that S. rosetta can exist either in single-celled form or in colonies of 4-50 cells, which arrange themselves in tight-knit packs of spheres. [ 16 ] This was established by performing an experiment in which a prey bacterium of the genus Algoriphagus was introduced to a sample of single-celled S. rosetta and the activity was monitored for 12 hours. Results of this study demonstrated that cell colonies were formed through cell division of the initial solitary S. rosetta cell rather than by cell aggregation. Further studies to support the theory of cell proliferation were done by introducing and then removing the drug aphidicolin , which serves to block cell division. When the drug was introduced, cell division stopped and colony formation resulted through cell-cell aggregation. When the drug was removed, cell division dominated once again. [ 16 ]
By looking at the genome of the choanoflagellate Monosiga brevicollis , scientists have inferred that choanoflagellates play a key role in the development of multicellularity. [ 13 ] Nicole King has done work looking at the genome of Monosiga brevicollis , and has found key protein domains that are shared between metazoans and choanoflagellates. These domains play a role in cell signalling and adhesion processes in metazoans. The finding that choanoflagellates also have these genes is a significant discovery because it was previously thought that only metazoans had genes responsible for cell-cell communication and aggregation. This suggests that these domains play a key role in the origins of multicellularity since it ties a unicellular organism (choanoflagellates) to multicellular organisms (metazoans). It shows that the components required for multicellularity were present in the common ancestor of metazoans and choanoflagellates. [ 13 ]
Neither sponges nor the placozoan Trichoplax adhaerens appear to be equipped with neuron synapses , however they both possess several factors related to the same synaptic function. [ 25 ] Therefore, it is likely that central features involved in synaptic transmission arose early in metazoan evolution, most likely around the time that much of the life on Earth was transitioning to multicellularity. It was found that the Munc18 / syntaxin 1 complex could be an important component for the production of the SNARE protein. The secretion of SNARE protein from synaptic vesicles is believed to be critical for neuronal communication . The Munc18/syntaxin 1 complex found in M. brevicollis is both structurally and functionally similar to the metazoan complex. This suggests that it constitutes an important step in the reaction pathway toward SNARE assembly. It is believed that the common ancestor of choanoflagellates and metazoans used this primordial secretion machinery as a precursor to synaptic communication. This mechanism would eventually be used for cell-cell communication in animals. [ 25 ]
Despite the fact that prokaryotic cells contained the building blocks required for multicellularity to arise, this transition did not occur for around 1500 million years after the origins of the first eukaryotic cell. [ 12 ] Scientists have proposed two major theories for the reason that multicellularity arose so late after the appearance of life on Earth.
This theory postulates that multicellularity arose as a means for prey to escape predation. Larger prey are less likely to be preyed upon, and larger predators are more likely to catch prey. Therefore it is likely that multicellularity arose when the first predators evolved. By assembling as a larger, multicelled organism, prey could escape the attempts of a predator. [ 12 ] Therefore multicellularity was selectively favoured over unicellularity. This can be seen in a simple experiment conducted by Boraas et al. (1998). [ 26 ] When a predatory protist , Ochromonas valencia , was introduced to a prey population of Chlorella vulgaris , it was seen that within less than 100 generations of the prey species a multicellular growth form of the alga became dominant. This is interesting because before the predator was introduced, the population of Chlorella vulgaris retained its unicellular growth form for thousands of generations. It is likely that it would have remained unicellular indefinitely if the selective pressure that was induced by the predators had not been introduced. After multiple generations with the predator, the algal species retained a growth form of 8-10 cells, which was large enough to avoid the predator, but small enough that each cell still had access to nutrients. [ 26 ] This predator-prey relationship provides a likely reason for why it was beneficial for organisms to be multicellular.
Despite the fact that organisms had the potential to become multicellular, it is likely that this was not actually possible until the late Neoproterozoic . This is because multicellularity requires oxygen , and before the late Neoproterozoic there was very limited oxygen availability. [ 12 ] After the melting of the “ Snowball Earth ” during the mid Neoproterozoic , nutrients that were trapped in the ice flooded the oceans. [ 8 ] Surviving bacteria flourished due to the increased nutrient levels. Among these microbes were cyanobacteria and other oxygen-producing bacteria , which led to the massive rise in oxygen levels. The increased oxygen availability allowed cells to use it to manufacture collagen. Collagen is the key component for cell aggregation; it is a rope-like molecule that "ties" cells together. Oxygen is required for collagen synthesis because ascorbic acid ( Vitamin C ) is essential for this process to occur. [ 6 ] A key component in the ascorbic acid molecule is oxygen (chemical formula C 6 H 8 O 6 ). [ 27 ] Therefore, it is evident that the rise in oxygen was a crucial step towards the rise of multicellularity since it is essential for the synthesis of collagen. [ 8 ]
Collagen is the most abundant protein in mammals and is an essential molecule in the formation of bones, skin and other connective tissue. Different types of collagen have been found in all multicellular organisms, including sponges.
It has been found that sponges do have a gene sequence coding for collagen type IV which is a diagnostic feature of the basal lamina . [ 28 ]
Twenty-nine types of collagen have been found to exist in humans. This vast group can further be divided into several families according to their primary structures and supramolecular organization. Among the many types of collagens, only the fibrillar and the basement membrane (type IV) collagens have been found in the sponges and cnidarians, which are the two earliest branching metazoan lineages. Studies have focused on the origin of fibrillar collagen molecules. In sponges, there exist three clades of fibrillar molecules, A, B and C. It is proposed that only the B clade fibrillar collagens preserved their characteristic modular structure from sponge to human. [ 29 ]
In mammals, the fibrillar collagens involved in the formation of cross-striated fibrils are types I–III, V, and XI. Type II and type XI collagens compose the fibrils present in cartilage . These can be distinguished from collagens located in non-cartilaginous tissues, which include type I, III, and V collagens. [ 29 ]
Additional research on sponge proteins found that all 42 sponge proteins analysed had homologous proteins in humans. An identity score of 53% was given to the similarity between sponge and human proteins, compared to a score of 42% when the same sequence was compared to that of C. elegans . [ 30 ] | https://en.wikipedia.org/wiki/Precambrian_body_plans
" Precambrian rabbits " or " fossil rabbits in the Precambrian " are reported to have been among responses given by the biologist J. B. S. Haldane when asked what evidence could destroy his confidence in the theory of evolution and the field of study. The answers became popular imagery in debates about evolution and the scientific field of evolutionary biology in the 1990s. Many of Haldane's statements about his scientific research were popularized in his lifetime.
Some accounts use this response to rebut claims that the theory of evolution is not falsifiable by any empirical evidence . This follows an assertion by Karl Popper , a philosopher of science who proposed that falsifiability is an essential feature of a scientific theory. Popper also expressed doubts about the scientific status of evolutionary theory, although he later concluded that the field of study was genuinely scientific. [ 1 ]
Rabbits are mammals . From the perspective of the philosophy of science , it is doubtful whether the genuine discovery of mammalian fossils in Precambrian rocks would overthrow the theory of evolution instantly, though if authentic, such a discovery would indicate serious errors in modern understanding about the evolutionary process. Mammals are a class of animals whose emergence in the geologic timescale is dated to much later than any found in Precambrian strata. Geological records indicate that although the first true mammals appeared in the Triassic period, modern mammalian orders appeared in the Palaeocene and Eocene epochs of the Palaeogene period. Hundreds of millions of years separate this period from the Precambrian.
Several authors have written that J. B. S. Haldane (1892–1964) said that the discovery of a fossil rabbit in Precambrian rocks would be enough to destroy his belief in evolution . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] However, these references date from the 1990s or later. In 1996 Michael J. Benton cited the 1993 edition of Mark Ridley 's book Evolution . [ 8 ] Evolutionary biologist Richard Dawkins wrote in 2005 that Haldane was responding to a challenge by a " Popperian zealot ". [ 7 ] In 2004 Richa Arora wrote that the story was told by John Maynard Smith (1920–2004) in a television programme. [ 9 ] John Maynard Smith attributed the phrase to Haldane in a conversation with Paul Harvey in the early 1970s.
The philosopher Karl Popper held that any scientific proposition must be falsifiable, in other words it must at least be possible to imagine some reproducible experiment or observation whose outcome would disprove the hypothesis. [ 10 ] Initially he thought that Charles Darwin 's theory of natural selection (often summarized as "the survival of the fittest" [ 11 ] ) was untestable in this sense, and therefore "almost tautological ." [ 8 ] Popper later changed his view, concluding that the theory of natural selection is falsifiable and that Darwin's own example of the peacock 's tail had disproved one extreme variation of it, that all evolution is driven by natural selection. [ 12 ] Although in 1978 Popper wrote that his earlier objection had been specifically to the theory of natural selection, in lectures and articles from 1949 to 1974 he had stated that " Darwinism " or "Darwin's theory of evolution" was a "metaphysical research programme" because it was not falsifiable. [ 13 ] In fact he continued to express dissatisfaction with contemporary statements of the theory of evolution which focused on population genetics , the study of the relative frequencies of alleles (different forms of the same gene ). Unfortunately some of the adjustments he proposed resembled Lamarckianism or saltationism , evolutionary theories that were and still are considered obsolete, and evolutionary biologists therefore disregarded his criticisms. [ 10 ] In 1981 Popper complained that he had been misinterpreted as saying that "historical sciences" such as paleontology or the history of evolution of life on Earth were not genuine sciences, when in fact he believed they could make falsifiable predictions. [ 10 ] [ 14 ]
Further confusion arose in 1980–1981, when there was a long debate in the pages of Nature about the scientific status of the theory of evolution. [ 15 ] [ 16 ] Specifically, the argument was on the factors influencing and nature of the unit of selection in the genome, with one side positing natural selection, [ 17 ] [ 18 ] and the other, neutral mutation . [ 19 ] [ 20 ] Neither of the parties seriously doubted that the theory was both scientific and, according to current scientific knowledge, true. Some participants objected to statements that appeared to present the theory of evolution as an absolute dogma, however, rather than as a hypothesis that so far has performed very well, and both sides quoted Popper in support of their positions. Evolution critics such as Phillip E. Johnson took this as an opportunity to declare that the theory of evolution was unscientific. [ 10 ] [ 16 ]
Richard Dawkins said that the discovery of fossil mammals in Precambrian rocks would "completely blow evolution out of the water." [ 21 ] Philosopher Peter Godfrey-Smith doubted that a single set of anachronistic fossils, however, even rabbits in the Precambrian, would disprove the theory of evolution outright. The first question raised by the assertion of such a discovery would be whether the alleged "Precambrian rabbits" really were fossilized rabbits. Alternative interpretations might include incorrect identification of the "fossils", incorrect dating of the rocks, and a hoax such as the Piltdown Man was shown to be. Even if the "Precambrian rabbits" turned out to be genuine, they would not instantly refute the theory of evolution, because that theory is a large package of ideas, including: that life on Earth has evolved over billions of years; that this evolution is driven by certain mechanisms; and that these mechanisms have produced a specific "family tree" that defines the relationships among species and the order in which they appeared. Hence, "Precambrian rabbits" would prove that there were one or more serious errors somewhere in this package, and the next task would be to identify those errors. [ 3 ]
Benton pointed out that, in the short term, scientists often have to accept the existence of competing hypotheses, each of which explains large parts—but not all—of the observed relevant data. [ 8 ]
The earliest genuine rabbit fossils are from the Eocene Epoch , about 56 million to 33.9 million years ago. Members of the genus Gomphos are established to be the phylogenetic root of lagomorph rabbits and hares. [ 22 ] To date, the oldest Gomphos is G. elkema , discovered in 2008 from Gujarat , India . The fossil is dated to 53 million years old. [ 23 ] [ 24 ] | https://en.wikipedia.org/wiki/Precambrian_rabbit
Precast concrete is a construction product produced by casting concrete in a reusable mold or "form" which is then cured in a controlled environment, transported to the construction site and maneuvered into place; examples include precast beams , and wall panels, floors, roofs, and piles. In contrast, cast-in-place concrete is poured into site-specific forms and cured on site. [ 1 ]
Recently, lightweight expanded polystyrene foam has been used as the core of precast wall panels, saving weight and increasing thermal insulation .
Precast stone is distinguished from precast concrete by the finer aggregate used in the mixture, so the result approaches the natural product.
Precast concrete is employed in both interior and exterior applications, from highway, bridge, and high-rise projects to parking structures, K-12 schools, warehouses, mixed-use, and industrial building construction. By producing precast concrete in a controlled environment (typically referred to as a precast plant), the precast concrete is afforded the opportunity to properly cure and be closely monitored by plant employees. Using a precast concrete system offers many potential advantages over onsite casting. Precast concrete production can be performed on ground level, which maximizes safety in its casting. There is greater control over material quality and workmanship in a precast plant compared to a construction site. The forms used in a precast plant can be reused hundreds to thousands of times before they have to be replaced, often making it cheaper than onsite casting in terms of cost per unit of formwork. [ 2 ]
Precast concrete forming systems for architectural applications differ in size, function, and cost. Precast architectural panels are also used to clad all or part of a building facade or erect free-standing walls for landscaping, soundproofing , and security. In appropriate instances precast products – such as beams for bridges, highways, and parking structure decks – can be prestressed structural elements. Stormwater drainage, water and sewage pipes, and tunnels also make use of precast concrete units.
Precast concrete molds can be made of timber, steel, plastic, rubber, fiberglass, or other synthetic materials, with each giving a unique finish. [ 3 ] In addition, many surface finishes for the four precast wall panel types – sandwich, plastered sandwich, inner layer and cladding panels – are available, including those creating the looks of horizontal boards and ashlar stone . Color may be added to the concrete mix, and the proportions and size aggregate also affect the appearance and texture of finished concrete surfaces.
Ancient Roman builders made use of concrete and soon poured the material into moulds to build their complex network of aqueducts , culverts , and tunnels. Modern uses for pre-cast technology include a variety of architectural and structural applications – including individual parts, or even entire building systems.
In the modern world, precast panelled buildings were pioneered in Liverpool , England , in 1905. [ 4 ] The process was invented by city engineer John Alexander Brodie . The tram stables at Walton in Liverpool followed in 1906. The idea was not taken up extensively in Britain. However, it was adopted all over the world, particularly in Central and Eastern Europe [ 5 ] as well as in Million Programme in Scandinavia.
In the US, precast concrete has evolved as two sub-industries, each represented by a major association. The precast concrete structures industry, represented primarily by the Precast/Prestressed Concrete Institute (PCI), focuses on prestressed concrete elements and on other precast concrete elements used in above-ground structures such as buildings, parking structures, and bridges, while the precast concrete products industry produces utility, underground, and other non-prestressed products, and is represented primarily by the National Precast Concrete Association (NPCA).
In Australia , The New South Wales Government Railways made extensive use of precast concrete construction for its stations and similar buildings. Between 1917 and 1932, it erected 145 such buildings. [ 6 ]
Beyond cladding panels and structural elements, entire buildings can be assembled from precast concrete. Precast assembly enables fast completion of commercial shops and offices with minimal labor. For example, the Jim Bridger Building in Williston, North Dakota, was precast in Minnesota with air, electrical, water, and fiber utilities preinstalled into the building panels. The panels were transported over 800 miles to the Bakken oilfields, and the commercial building was assembled by three workers in minimal time. The building houses over 40,000 square feet of shops and offices. Virtually the entire building was fabricated in Minnesota.
Reinforcing concrete with steel improves strength and durability. On its own, concrete has good compressive strength, but lacks tensile and shear strength and can be subject to cracking when bearing loads for long periods of time. Steel offers high tensile and shear strength to make up for what concrete lacks. Steel behaves similarly to concrete in changing environments, which means it will shrink and expand with concrete, helping avoid cracking.
Rebar is the most common form of concrete reinforcement. It is typically made from steel, manufactured with ribbing to bond with concrete as it cures. Rebar is versatile enough to be bent or assembled to support the shape of any concrete structure. Carbon steel is the most common rebar material. However, stainless steel, galvanized steel, and epoxy coatings can prevent corrosion. [ 7 ]
The following is a sampling of the numerous products that utilize precast/prestressed concrete. While this is not a complete list, the majority of precast/prestressed products typically fall under one or more of the following categories.
Since precast concrete products can withstand the most extreme weather conditions and will hold up for many decades of constant usage they have wide applications in agriculture. These include bunker silos, cattle feed bunks, cattle grid , agricultural fencing, H-bunks, J-bunks, livestock slats, livestock watering trough, feed troughs, concrete panels, slurry channels, and more. Prestressed concrete panels are widely used in the UK for a variety of applications including agricultural buildings, grain stores, silage clamps, slurry stores, livestock walling and general retaining walls. Panels can be used horizontally and placed either inside the webbings of RSJs ( I-beam ) or in front of them. Alternatively panels can be cast into a concrete foundation and used as a cantilever retaining wall.
Precast concrete building components and site amenities are used architecturally as fireplace mantels, cladding, trim products, accessories and curtain walls. Structural applications of precast concrete include foundations, beams, floors, walls and other structural components. It is essential that each structural component be designed and tested to withstand both the tensile and compressive loads that the member will be subjected to over its lifespan. Expanded polystyrene cores are now in precast concrete panels for structural use, making them lighter and serving as thermal insulation.
Multi-storey car parks are commonly constructed using precast concrete. The constructions involve putting together precast parking parts which are multi-storey structural wall panels, interior and exterior columns, structural floors, girders, wall panels, stairs, and slabs. These parts can be large; for example, double-tee structural floor modules need to be lifted into place with the help of precast concrete lifting anchor systems . [ 8 ]
Precast concrete is employed in a wide range of engineered earth retaining systems. Products include commercial and residential retaining walls , sea walls , mechanically stabilized earth panels, and other modular block systems. [ 9 ]
Sanitary and stormwater management products are structures designed for underground installation that have been specifically engineered for the treatment and removal of pollutants from sanitary and stormwater run-off. These precast concrete products include stormwater detention vaults , catch basins , and manholes . [ 10 ]
For communications, electrical, gas or steam systems, precast concrete utility structures protect the vital connections and controls for utility distribution. Precast concrete is nontoxic and environmentally safe. Products include: hand holes, hollow-core products, light pole bases, meter boxes, panel vaults, pull boxes, telecommunications structures, transformer pads, transformer vaults, trenches, utility buildings, utility vaults , utility poles, controlled environment vaults (CEVs), and other utility structures.
Precast water and wastewater products hold or contain water, oil or other liquids for the purpose of further processing into non-contaminating liquids and soil products. Products include: aeration systems , distribution boxes, dosing tanks, dry wells , grease interceptors , leaching pits, sand-oil/oil-water interceptors, septic tanks , water/sewage storage tanks, wet wells, fire cisterns, and other water and wastewater products.
Precast concrete transportation products are used in the construction, safety, and site protection of roads, airports, and railroad transportation systems. Products include: box culverts , 3-sided culverts, bridge systems, railroad crossings, railroad ties, sound walls /barriers, Jersey barriers , tunnel segments, concrete barriers, TVCBs, central reservation barriers, bollards, and other transportation products. Precast concrete can also be used to make underpasses, surface crossings, and pedestrian subways. Precast concrete is also used for the roll ways of some rubber-tyred metros .
Modular paving is available in a wide range of colors, shapes, sizes, and textures. These versatile precast concrete pieces can be designed to mimic brick, stone or wood.
Underground vaults or mausoleums require watertight structures that withstand natural forces for extended periods of time.
Storage of hazardous material, whether short-term or long-term, is an increasingly important environmental issue, calling for containers that not only seal in the materials, but are strong enough to stand up to natural disasters or terrorist attacks.
Seawalls , floating docks, underwater infrastructure, decking, railings, and a host of amenities are among the uses of precast along the waterfront. When designed with heavy weight in mind, precast products counteract the buoyant forces of water significantly better than most materials.
Prestressing is a technique of introducing stresses into a structural member during fabrication and/or construction to improve its strength and performance. This technique is often employed in concrete beams, columns, spandrels, single and double tees, wall panels, segmental bridge units, bulb-tee girders, I-beam girders, and others. Many projects find that prestressed concrete provides the lowest overall cost, considering production and lifetime maintenance. [ 9 ] [ 11 ]
The precast concrete double-wall panel has been in use in Europe for decades. The original double-wall design consisted of two wythes of reinforced concrete separated by an interior void, held together with embedded steel trusses. With recent concerns about energy use, it is recognized that using steel trusses creates a "thermal bridge" that degrades thermal performance. Also, since steel does not have the same thermal expansion coefficient as concrete, as the wall heats and cools any steel that is not embedded in the concrete can create thermal stresses that cause cracking and spalling.
To achieve better thermal performance, insulation was added in the void, and in many applications today the steel trusses have been replaced by composite (fibreglass, plastic, etc.) connection systems. These systems, which are specially developed for this purpose, also eliminate the differential thermal expansion problem. [ citation needed ] The best thermal performance is achieved when the insulation is continuous throughout the wall section, i.e., the wythes are thermally separated completely to the ends of the panel. Using continuous insulation and modern composite connection systems, R-values up to R-28.2 can be achieved.
The overall thickness of sandwich wall panels in commercial applications is typically 8 inches, but their designs are often customized to the application. In a typical 8-inch wall panel the concrete wythes are each 2-3/8 inches thick, sandwiching 3-1/4 inches of high R-value insulating foam. The interior and exterior wythes of concrete are held together (through the insulation) with some form of connecting system that is able to provide the needed structural integrity. Sandwich wall panels can be fabricated to the length and width desired, within practical limits dictated by the fabrication system, the stresses of lifting and handling, and shipping constraints. Panels of 9-foot clear height are common, but heights up to 12 feet can be found.
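The panel makeup described above can be checked with simple arithmetic. The following is a minimal sketch (not from the source), which verifies the 8-inch thickness budget and estimates the core's insulating value under an assumed rating of roughly R-6 per inch of rigid foam; actual foams vary from about R-4 to R-8 per inch, so reported panel R-values differ.

```python
# Minimal sketch (not from the source): check the 8-inch panel thickness budget
# and estimate the foam core's R-value under an ASSUMED rating per inch.
from fractions import Fraction

wythe_in = Fraction(2) + Fraction(3, 8)   # each concrete wythe: 2-3/8 in
foam_in = Fraction(3) + Fraction(1, 4)    # insulating foam core: 3-1/4 in

total_in = 2 * wythe_in + foam_in         # two wythes sandwiching the foam
assert total_in == 8                      # matches the stated 8-inch overall thickness

R_PER_INCH = 6.0                          # assumed; rigid foams range roughly R-4 to R-8 per inch
print(f"total thickness: {float(total_in)} in, foam R-value approx. R-{float(foam_in) * R_PER_INCH:.1f}")
```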
The fabrication process for precast concrete sandwich wall panels allows them to be produced with finished surfaces on both sides. Such finishes can be very smooth, with the surfaces painted, stained, or left natural; for interior surfaces, the finish is comparable to drywall in smoothness and can be finished using the same prime and paint procedure as is common for conventional drywall construction. If desired, the concrete can be given an architectural finish, where the concrete itself is colored and/or textured. Colors and textures can provide the appearance of brick, stone, wood, or other patterns through the use of reusable formliners , or, in the most sophisticated applications, actual brick, stone, glass, or other materials can be cast into the concrete surface. [ citation needed ]
Window and door openings are cast into the walls at the manufacturing plant as part of the fabrication process. In many applications, electrical and telecommunications conduit and boxes are cast directly into the panels in the specified locations. In some applications, utilities, plumbing and even heating components have been cast into the panels to reduce on-site construction time. The carpenters, electricians and plumbers do need to make some slight adjustments when first becoming familiar with some of the unique aspects of the wall panels. However, they still perform most of their job duties in the manner to which they are accustomed.
Precast concrete sandwich wall panels have been used on virtually every type of building, including schools, office buildings, apartment buildings, townhouses, condominiums, hotels, motels, dormitories, and single-family homes. Although typically considered part of a building's enclosure or "envelope," they can be designed to also serve as part of the building's structural system, eliminating the need for beams and columns on the building perimeter. Besides their energy efficiency and aesthetic versatility, they also provide excellent noise attenuation, outstanding durability (resistant to rot, mold, etc.), and rapid construction.
In addition to the good insulation properties, sandwich panels require fewer work phases to complete. Compared to double-walls, for example, which have to be insulated and filled with concrete on site, sandwich panels require much less labor and scaffolding. [ 12 ]
The precast concrete industry is primarily dominated by government-initiated projects for infrastructural development. However, precast products are also used extensively for residential (low-rise and high-rise) and commercial construction because of their various favourable attributes. The efficiency, durability, ease of use, cost effectiveness, and sustainable properties [ 13 ] of these products have substantially reduced the time required to construct a structure. The construction industry is a large consumer of energy, and precast concrete products are generally more energy efficient than their counterparts. The wide range of designs, colours, and structural options that these products provide also makes them a favourable choice for consumers.
Many state and federal transportation projects in the United States require precast concrete suppliers to be certified by either the Architectural Precast Association, National Precast Concrete Association or Precast Prestressed Concrete Institute. | https://en.wikipedia.org/wiki/Precast_concrete |
In United States safety standards , precautionary statements are sentences providing information on potential hazards and proper procedures. They are used in contexts ranging from consumer product labels and manuals to descriptions of physical activities. Various methods are used to draw attention to them, such as setting them apart from normal text, using graphic icons, or changing the text's font and color. Documents often clarify the types of statements used and their meanings. Common precautionary statements are described below.
Danger statements are a description of situations where an immediate hazard will cause death or serious injury to workers and/or the general public if not avoided. This designation is to be used only in extreme situations.
ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, will result in death or serious injury. The signal word "DANGER" is to be limited to the most extreme situations. DANGER [signs] should not be used for property damage hazards unless personal injury risk appropriate to these levels is also involved." [ 1 ]
OSHA 1910.145 Definition: "Shall be used in major hazard situations where an immediate hazard presents a threat of death or serious injury to employees. Danger tags shall be used only in these situations." [ 2 ]
Warning statements are a description of a situation where a potentially hazardous condition exists that could result in the death or serious injury of workers and/or the general public if not avoided.
ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, could result in death or serious injury. WARNING [signs] should not be used for property damage hazards unless personal injury risk appropriate to this level is also involved." [ 1 ]
OSHA 1910.145 Definition: "May be used to represent a hazard level between "Caution" and "Danger," instead of the required "Caution" tag, provided that they have a signal word of "Warning," an appropriate major message, and otherwise meet the general tag criteria of paragraph (f)(4) of this section." [ 2 ]
Caution statements are a description of situations where a non-immediate or potential hazard presents a lesser threat of injury that could result in minor or moderate injuries to workers and/or the general public.
ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury." [ 1 ]
OSHA 1910.145 Definition: "Shall be used in minor hazard situations where a non-immediate or potential hazard or unsafe practice presents a lesser threat of employee injury." [ 2 ]
Notice statements are a description of situations where a non-immediate or potential hazard presents a risk of damage to property and equipment. They may also be used to indicate important operational characteristics. No "Safety Alert" or attention symbol is present in this situation.
ANSI Z535.5 Definition: "Indicates information considered important but not hazard related. The safety alert symbol (a triangle with the exclamation point) shall not be used with this signal word. For environmental/facility signs, NOTICE is typically the choice of signal word for messages relating to property damage, security, sanitation, and housekeeping rules." [ 1 ]
OSHA 1910.145 Definition: None. [ 2 ] | https://en.wikipedia.org/wiki/Precautionary_statement |
The precedence effect or law of the first wavefront is a binaural psychoacoustical effect concerning sound reflection and the perception of echoes . When two versions of the same sound are presented separated by a sufficiently short time delay (below the listener's echo threshold), listeners perceive a single auditory event; its perceived spatial location is dominated by the location of the first-arriving sound (the first wave front ). The lagging sound also affects the perceived location; however, its effect is mostly suppressed by the first-arriving sound.
The Haas effect was described in 1949 by Helmut Haas in his Ph.D. thesis. [ 1 ] The term "Haas effect" is often loosely taken to include the precedence effect which underlies it.
Joseph Henry published "On The Limit of Perceptibility of a Direct and Reflected Sound" in 1851. [ 2 ]
The "law of the first wavefront" was described and named in 1948 by Lothar Cremer . [ 3 ]
The "precedence effect" was described and named in 1949 by Wallach et al. [ 4 ] They showed that when two identical sounds are presented in close succession they will be heard as a single fused sound. In their experiments, fusion occurred when the lag between the two sounds was in the range 1 to 5 ms for clicks, and up to 40 ms for more complex sounds such as speech or piano music. When the lag was longer, the second sound was heard as an echo.
Additionally, Wallach et al. demonstrated that when successive sounds coming from sources at different locations were heard as fused, the apparent location of the perceived sound was dominated by the location of the sound that reached the ears first (i.e. the first-arriving wavefront). The second-arriving sound had only a very small (albeit measurable) effect on the perceived location of the fused sound. They designated this phenomenon as the precedence effect , and noted that it explains why sound localization is possible in the typical situation where sounds reverberate from walls, furniture and the like, thus providing multiple, successive stimuli. They also noted that the precedence effect is an important factor in the perception of stereophonic sound.
Wallach et al. did not systematically vary the intensities of the two sounds, although they cited research by Langmuir et al. [ 5 ] which suggested that if the second-arriving sound is at least 15 dB louder than the first, the precedence effect breaks down.
The "Haas effect" derives from a 1951 paper by Helmut Haas. [ 6 ] In 1951 Haas examined how the perception of speech is affected in the presence of a single, coherent sound reflection. [ 7 ] To create anechoic conditions, the experiment was carried out on the rooftop of a freestanding building. Another test was carried out in a room with a reverberation time of 1.6 s. The test signal (recorded speech) was emitted from two similar loudspeakers at locations 45° to the left and to the right in 3 m distance to the listener.
Haas found that humans localize sound sources in the direction of the first arriving sound despite the presence of a single reflection from a different direction, and that in such cases only a single auditory event is perceived. A reflection arriving later than 1 ms after the direct sound increases the perceived level and spaciousness (more precisely the perceived width of the sound source). A single reflection arriving at a delay of between 5 and 30 ms can be up to 10 dB louder than the direct sound without being perceived as a secondary auditory event (i.e. it does not sound like an echo). This time span varies with the reflection level. If the direct sound is coming from the same direction the listener is facing, the reflection's direction has no significant effect on the results. If the reflection's higher frequencies are attenuated, echo suppression continues to occur even if the delay between the sounds is somewhat longer. Increased room reverberation time also expands the time span available for echo suppression. [ 8 ]
The precedence effect occurs if the subsequent wave fronts arrive between 2 ms and about 50 ms later than the first wave front. This range is signal dependent. For speech, the precedence effect disappears for delays above 50 ms, but for music, the precedence effect can still occur with delays approaching 100 ms. [ 9 ]
In two-click lead–lag experiments, localization effects include aspects of summing localization , localization dominance , and lag discrimination suppression . The last two are generally considered to be aspects of the precedence effect: [ 10 ]
For time delays above 50 ms (for speech) or some 100 ms (for music) the delayed sound is perceived as an echo of the first-arriving sound, and each sound direction is localized separately and correctly. The time delay for perceiving echoes depends on the signal characteristics. For signals with impulse characteristics, echoes are perceived for delays above 50 ms. For signals with a nearly constant amplitude, the threshold for perceiving an echo can extend to time differences of 1 to 2 seconds.
A special case of the precedence effect is the Haas effect. Haas showed that the precedence effect appears even if the level of the delayed sound is up to 10 dB higher than the level of the first wave front. In this case, the precedence effect only works for delays between 10 and 30 ms.
The precedence effect is important for hearing in enclosed spaces. With the help of this effect, it remains possible to determine the direction of a sound source (e.g. the direction of a speaker) even in the presence of wall reflections.
Haas's findings can be applied to sound reinforcement systems and public address systems. The signal for loudspeakers placed at distant locations from a stage may be delayed electronically by an amount equal to the time sound takes to travel through the air from the stage to the distant location, plus about 10 to 20 milliseconds, and played at a level up to 10 dB louder than the sound reaching this location directly from the stage. The first arrival of sound from the source on stage determines perceived localization whereas the slightly later sound from delayed loudspeakers simply increases the perceived sound level without negatively affecting localization. In this configuration, the listener will localize all sound from the direction of the direct sound, but they will benefit from the higher sound level, which has been enhanced by the loudspeakers. [ 11 ]
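As a rough illustration of the delay calculation just described, the sketch below (not from the source) computes the electronic delay for a distant fill loudspeaker from its distance to the stage; the speed-of-sound figure and the 15 ms Haas offset are assumed, illustrative values.

```python
# Minimal sketch (assumptions noted): electronic delay for a distant fill loudspeaker,
# equal to the acoustic travel time from the stage plus a small Haas offset so that
# the direct sound from the stage always arrives first.
SPEED_OF_SOUND_M_S = 343.0  # assumed speed of sound in air at about 20 degC

def fill_speaker_delay_ms(distance_from_stage_m: float, haas_offset_ms: float = 15.0) -> float:
    """Delay (ms) to apply to a loudspeaker placed distance_from_stage_m from the stage."""
    acoustic_travel_ms = distance_from_stage_m / SPEED_OF_SOUND_M_S * 1000.0
    return acoustic_travel_ms + haas_offset_ms  # offset typically in the 10-20 ms range

# Example: a delay tower 40 m from the stage needs roughly 117 + 15 = 132 ms of delay.
print(f"{fill_speaker_delay_ms(40.0):.0f} ms")
```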
The precedence effect can be employed to increase the perception of ambience during the playback of stereo recordings. [ 12 ] If two more speakers are placed to the left and right of the listener (in addition to the two main speakers), and are fed with the same program material but delayed by 10 to 20 milliseconds, the random-phase ambience components of the sound will become sufficiently decorrelated that they cannot be localized. This effectively extracts the recording's existing ambience, while leaving its foreground "direct" sounds still appearing to come from the front. [ 13 ] [ 14 ]
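A minimal sketch of that side-speaker feed is shown below (not from the source): the stereo program is simply delayed by 10 to 20 ms before being routed to the additional speakers. The function name, buffer shapes, and 15 ms default are illustrative assumptions.

```python
# Minimal sketch (assumptions noted): delay the stereo program by 10-20 ms to feed
# the extra left/right speakers, using a plain zero-padded sample delay.
import numpy as np

def delayed_side_feed(stereo: np.ndarray, sample_rate: int, delay_ms: float = 15.0) -> np.ndarray:
    """Return the stereo program (shape: samples x 2) delayed by delay_ms."""
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    pad = np.zeros((delay_samples, stereo.shape[1]), dtype=stereo.dtype)
    return np.vstack([pad, stereo])[: stereo.shape[0]]  # keep the original length

# Example with one second of 48 kHz stereo audio:
program = np.random.randn(48000, 2).astype(np.float32)
side_feed = delayed_side_feed(program, 48000)  # feed for the additional side speakers
```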
The effect was taken into account and exploited in the psychoacoustics of the Fosgate Tate 101A SQ decoder, developed by Jim Fosgate in consultation with Peter Scheiber and Martin Willcocks , to produce much better spatiality and directionality in matrix decoding of 4-2-4 ( SQ quadraphonic ) audio.
Many older LEDE ("live end, dead end") control room designs featured so-called "Haas kickers" – reflective panels placed at the rear to create specular reflections which were thought to provide a wider stereo listening area or raise intelligibility. [ 15 ] However, what is beneficial for one type of sound is detrimental to others, so Haas kickers, like compression ceilings , are no longer commonly found in control rooms. [ 16 ] | https://en.wikipedia.org/wiki/Precedence_effect |
Precedent is a judicial decision that serves as an authority for courts when deciding subsequent identical or similar cases. [ 1 ] [ 2 ] [ 3 ] Fundamental to common law legal systems, precedent operates under the principle of stare decisis ("to stand by things decided"), where past judicial decisions serve as case law to guide future rulings, thus promoting consistency and predictability. [ 2 ] [ 4 ] [ 5 ]
Precedent is a defining feature that sets common law systems apart from civil law systems. In common law, precedent can either be something courts must follow (binding) or something they can consider but do not have to follow (persuasive). [ 6 ] [ 7 ] Civil law systems, in contrast, are characterized by comprehensive codes and detailed statutes , with no emphasis on precedent, and where judges primarily focus on fact-finding and applying codified law. [ 8 ]
Courts in common law systems rely heavily on case law , which refers to the collection of precedents and legal principles established by previous judicial decisions on specific issues or topics. [ 9 ] The development of case law depends on the systematic publication and indexing of these decisions in law reports , making them accessible to lawyers, courts, and the general public. [ 10 ]
Generally speaking, a legal precedent may be:
Stare decisis ( /ˈstɛəri dɪˈsaɪsɪs, ˈstɑːreɪ/ ) is a judicial doctrine under which courts follow the principles, rules, or standards established in their prior decisions (or those of higher courts) when deciding cases involving the same or closely related issues. [ 4 ] [ 11 ] The term originates from the Latin phrase stare decisis et non quieta movere, meaning to "stand by the thing decided and do not disturb the calm." [ 12 ]
The doctrine operates both horizontally and vertically. Vertical stare decisis binds lower courts to strictly follow the decisions of higher courts within the same jurisdiction. [ 13 ] The Seventh Circuit Court of Appeals applying a precedent set by the U.S. Supreme Court is an example of vertical stare decisis . [ 13 ] Horizontal stare decisis refers to the principle that a court adheres to its own previous rulings. [ 13 ]
In the modern era, the U.S. Supreme Court adheres to its prior decisions unless there is a special justification to overrule precedent. [ 14 ] By taking this approach, the Court has rejected a strict view of stare decisis that would require it to uphold past rulings regardless of their merits or the practical consequences of maintaining or overturning them. [ 14 ]
Ratio decidendi ("the reason for the decision") refers to the key factual element or line of reasoning in a case that forms the basis for the court's final judgment. [ 15 ] It forms the basis for a court decision and creates binding precedent. [ 15 ] This distinguishes it from other parts of a judicial opinion, such as obiter dicta (non-binding observations or comments).
In contrast, obiter dicta ("something said in passing") refers to comments, suggestions, or observations made by a judge in an opinion that are not necessary to resolve the case at hand. [ 16 ] [ 17 ] While not legally binding on other courts, such statements may be cited as persuasive authority in subsequent litigation. [ 16 ]
In federal systems the division between federal and state law may result in complex interactions. In the United States, state courts are not considered inferior to federal courts but rather constitute a parallel court system.
In practice, however, judges in one system will almost always choose to follow relevant case law in the other system to prevent divergent results and to minimize forum shopping .
Binding precedent, based on the legal principle of stare decisis, requires lower courts to follow the decisions of appellate courts in the same jurisdiction. [ 23 ] [ 6 ] In other words, when an appellate court resolves a question of law , its determination, or " holding ," serves as precedent that lower courts are bound to apply in cases involving similar facts or legal issues. [ 6 ]
For example, in the United States, decisions of the U.S. Supreme Court , as the nation's highest court, are binding on all other courts nationwide. [ 6 ]
Persuasive precedent refers to legal decisions that a court may consider but is not obligated to follow when deciding a case, as they are not binding. [ 7 ] Examples include decisions from courts in neighboring jurisdictions and dicta from rulings by higher courts. [ 7 ] In Australia, decisions of superior overseas courts, such as those from the United Kingdom, serve as persuasive precedent. [ 24 ]
Although not binding precedent, a court may choose to rely on persuasive precedent if the reasoning is compelling. [ 25 ] Courts often turn to decisions from other jurisdictions for guidance, particularly when interpreting unclear laws or addressing "cases of first impression"—situations in which no prior binding authority exists and the court must determine the applicable law for the first time. [ 25 ] [ 26 ] [ 27 ]
Nonpublication of opinions, or unpublished opinions, are those decisions of courts that are not available for citation as precedent because the judges making the opinion deem the cases as having less precedential value. Selective publication is the process by which a judge or the justices of a court decide whether a decision is to be published in a reporter . "Unpublished" federal appellate decisions are published in the Federal Appendix . Depublication is the power of a court to make a previously published order or opinion unpublished. [ 28 ]
Litigation that is settled out of court generates no written decision, thus has no precedential effect. As one practical effect, the U.S. Department of Justice settles many cases against the federal government simply to avoid creating adverse precedent. [ 29 ]
Stare decisis is not usually a doctrine used in civil law systems, because it violates the legislative positivist principle that only the legislature may make law. Instead, the civil law system relies on the doctrine of jurisprudence constante , according to which if a court has adjudicated a consistent line of cases that arrive at the same holdings using sound reasoning, then the previous decisions are highly persuasive but not controlling on issues of law. This doctrine is similar to stare decisis insofar as it dictates that a court's decisions should produce a cohesive and predictable result. In theory, lower courts are generally not bound by the precedents of higher courts. In practice, the need for predictability means that lower courts generally defer to the precedent of higher courts. As a result, the precedent of courts of last resort, such as the French Cassation Court and the Council of State , is recognized as being de facto binding on lower courts.
The doctrine of jurisprudence constante also influences how court decisions are structured. In general, court decisions of common law jurisdictions give a sufficient ratio decidendi as to guide future courts. The ratio is used to justify a court decision on the basis of previous case law as well as to make it easier to use the decision as a precedent for future cases. By contrast, court decisions in some civil law jurisdictions (most prominently France ) tend to be extremely brief, mentioning only the relevant legislation and codal provisions and not going into the ratio decidendi in any great detail. This is the result of the legislative positivist view that the court is only interpreting the legislature's intent and therefore detailed exposition is unnecessary. Because of this, ratio decidendi is carried out by legal academics (doctrinal writers) who provide the explanations that in common law jurisdictions would be provided by the judges themselves. [ citation needed ]
In other civil law jurisdictions, such as the German-speaking countries, the ratio decidendi tends to be much more developed than in France, and courts will frequently cite previous cases and doctrinal writers. However, some courts (such as German courts) place less emphasis on the particular facts of the case than common law courts, and more emphasis on the discussion of various doctrinal arguments and on finding the correct interpretation of the law.
The mixed systems of the Nordic countries are sometimes considered a branch of the civil law, but they are sometimes counted as separate from the civil law tradition. In Sweden , for instance, case law arguably plays a more important role than in some of the continental civil law systems. The two highest courts, the Supreme Court ( Högsta domstolen ) and the Supreme Administrative Court ( Högsta förvaltningsdomstolen ), have the right to set precedent which has persuasive authority on all future application of the law. Appellate courts, be they judicial ( hovrätter ) or administrative ( kammarrätter ), may also issue decisions that act as guides for the application of the law, but these decisions are persuasive, not controlling, and may therefore be overturned by higher courts. [ citation needed ]
Some mixed systems, such as Scots law in Scotland , South African law , Laws of the Philippines , and the law of Quebec and Louisiana , do not fit into the civil vs. common law dichotomy because they mix portions of both. Such systems may have been heavily influenced by the common law tradition; however, their private law is firmly rooted in the civil law tradition. Because of their position between the two main systems of law, these types of legal systems are sometimes referred to as "mixed" systems of law. Louisiana courts, for instance, operate under both stare decisis and jurisprudence constante . In South Africa, the precedent of higher courts is absolutely or fully binding on lower courts, whereas the precedent of lower courts only has persuasive authority on higher courts; horizontally, precedent is prima facie or presumptively binding between courts. [ citation needed ]
Law professors in common law traditions play a much smaller role in developing case law than professors in civil law traditions. Because court decisions in civil law traditions are brief and not amenable to establishing precedent, much of the exposition of the law in civil law traditions is done by academics rather than by judges; this is called doctrine and may be published in treatises or in journals such as Recueil Dalloz in France. Historically, common law courts relied little on legal scholarship; thus, at the turn of the twentieth century, it was very rare to see an academic writer quoted in a legal decision (except perhaps for the academic writings of prominent judges such as Coke and Blackstone ). Today academic writers are often cited in legal argument and decisions as persuasive authority ; often, they are cited when judges are attempting to implement reasoning that other courts have not yet adopted, or when the judge believes the academic's restatement of the law is more compelling than can be found in precedent. Thus common law systems are adopting one of the approaches long common in civil law jurisdictions. [ citation needed ]
Justice Louis Brandeis, in a heavily footnoted dissent to Burnet v. Coronado Oil & Gas Co. , 285 U.S. 393 , 405–411 (1932), explained (citations and quotations omitted):
Stare decisis is not ... a universal, inexorable command. "The rule of stare decisis , though one tending to consistency and uniformity of decision, is not inflexible. Whether it shall be followed or departed from is a question entirely within the discretion of the court, which is again called upon to consider a question once decided." Stare decisis is usually the wise policy, because in most matters it is more important that the applicable rule of law be settled than that it be settled right. This is commonly true even where the error is a matter of serious concern, provided correction can be had by legislation. But in cases involving the Federal Constitution, where correction through legislative action is practically impossible, this Court has often overruled its earlier decisions. The Court bows to the lessons of experience and the force of better reasoning, recognizing that the process of trial and error, so fruitful in the physical sciences, is appropriate also in the judicial function. ... In cases involving the Federal Constitution the position of this Court is unlike that of the highest court of England, where the policy of stare decisis was formulated and is strictly applied to all classes of cases. Parliament is free to correct any judicial error; and the remedy may be promptly invoked.
The reasons why this Court should refuse to follow an earlier constitutional decision which it deems erroneous are particularly strong where the question presented is one of applying, as distinguished from what may accurately be called interpreting, the Constitution. In the cases which now come before us there is seldom any dispute as to the interpretation of any provision. The controversy is usually over the application to existing conditions of some well-recognized constitutional limitation. This is strikingly true of cases under the due process clause when the question is whether a statute is unreasonable, arbitrary or capricious; of cases under the equal protection clause when the question is whether there is any reasonable basis for the classification made by a statute; and of cases under the commerce clause when the question is whether an admitted burden laid by a statute upon interstate commerce is so substantial as to be deemed direct. ...
In his "landmark dissent" in Burnet , Brandeis "catalogued the Court's actual overruling practices in such a powerful manner that his attendant stare decisis analysis immediately assumed canonical authority." [ 30 ]
The United States Court of Appeals for the Third Circuit has stated:
A judicial precedent attaches a specific legal consequence to a detailed set of facts in an adjudged case or judicial decision, which is then considered as furnishing the rule for the determination of a subsequent case involving identical or similar material facts and arising in the same court or a lower court in the judicial hierarchy. [ 31 ]
The United States Court of Appeals for the Ninth Circuit has stated:
Stare decisis is the policy of the court to stand by precedent; the term is but an abbreviation of stare decisis et non quieta movere —"to stand by and adhere to decisions and not disturb what is settled". Consider the word "decisis". The word means, literally and legally, the decision. Under the doctrine of stare decisis a case is important only for what it decides—for the "what", not for the "why", and not for the "how". Insofar as precedent is concerned, stare decisis is important only for the decision, for the detailed legal consequence following a detailed set of facts. [ 32 ]
Lord Hodge of the UK Supreme Court quoted [ 33 ] [ 34 ] Lord Wright in 1938 saying:
[T]hat is the way of the common law , the judges preferring to go from case to case, like the ancient Mediterranean mariners, hugging the coast from point to point, and avoiding the dangers of the open sea of system or science.
Precedent viewed against passing time can serve to establish trends, thus indicating the next logical step in evolving interpretations of the law. For instance, if immigration has become more and more restricted under the law, then the next legal decision on that subject may serve to restrict it further still. The existence of submerged precedent (reasoned opinions not made available through conventional legal research sources) has been identified as a potentially distorting force in the evolution of law. [ 35 ]
Scholars have recently attempted to apply network theory to precedent in order to establish which precedent is most important or authoritative, and how the court's interpretations and priorities have changed over time. [ 36 ]
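As a rough illustration of that network-theory approach, the sketch below (not from the source) models cases as nodes and citations as directed edges and uses PageRank as one possible measure of authority; the case names are hypothetical and the networkx library is assumed to be available.

```python
# Minimal sketch (assumptions noted): rank precedents by authority using PageRank
# over a hypothetical citation graph (edge = citing case -> cited case).
import networkx as nx

citations = [
    ("Case C", "Case A"),  # Case C cites Case A
    ("Case C", "Case B"),
    ("Case D", "Case A"),
    ("Case E", "Case C"),
]

graph = nx.DiGraph(citations)
authority = nx.pagerank(graph)  # higher score ~ more (and more influentially) cited

for case, score in sorted(authority.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.3f}")
```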
Early English common law did not have or require the stare decisis doctrine for a range of legal and technological reasons:
These features changed over time, opening the door to the doctrine of stare decisis :
By the end of the eighteenth century, the common law courts had absorbed most of the business of their nonroyal competitors, although there was still internal competition among the different common law courts themselves. During the nineteenth century, legal reform movements in both England and the United States brought this to an end as well by merging the various common law courts into a unified system of courts with a formal hierarchical structure. This and the advent of reliable private case reporters made adherence to the doctrine of stare decisis practical and the practice soon evolved of holding judges to be bound by the decisions of courts of superior or equal status in their jurisdiction. [ 37 ]
Over time courts in the United States and especially its Supreme Court developed a large body of judicial decisions which are called "precedents". These "[r]ules and principles established in prior cases inform the Court's future decisions." [ 38 ] The adherence to rules and principles created in past cases as a foundation for future decisions by the courts is called stare decisis . The United States Supreme Court considers stare decisis not only as an important doctrine , but also "the means by which we ensure that the law will not merely change erratically, but will develop in a principled and intelligible fashion." [ 39 ] Stare decisis aims to bolster the legitimacy of the judicial process and foster the rule of law. It does so by strengthening stability, certainty, predictability, consistency and uniformity in the application of the law to cases and litigants. [ 38 ] By adhering to stare decisis the Supreme Court attempts to preserve its role "as a careful, unbiased, and predictable decisionmaker that decides cases according to the law rather than the Justices' individual policy preferences." [ 38 ] In Vasquez v. Hillery (1986) the Supreme Court stated succinctly that stare decisis "contributes to the integrity of our constitutional system of government, both in appearance and in fact" by maintaining the notion "that bedrock principles are founded in the law, rather than in the proclivities of individuals." [ 39 ]
Stare decisis reduces the number and scope of legal questions that the court must resolve in litigation. It is therefore a time saver for judges and litigants. Once a court has settled a particular question of law it has established a precedent. Thanks to stare decisis lawsuits can be quickly and efficiently dismissed because legal battles can be resolved through recourse to rules and principles established prior decisions. Stare decisis can thus encourage parties to settle cases out of court and thereby enhance judicial efficiency. [ 38 ]
Since 1798, several Supreme Court decisions have been overruled by subsequent decisions. [ 40 ] In doing so, the Supreme Court has repeatedly made statements regarding stare decisis. [ 38 ] The following is a non-exhaustive list of examples of these statements: [ 41 ]
Stare decisis applies to the holding of a case, rather than to obiter dicta ("things said by the way"). As the United States Supreme Court has put it: "dicta may be followed if sufficiently persuasive but are not binding". [ 42 ]
In the U.S. Supreme Court, the principle of stare decisis is most flexible in constitutional cases, as observed by Justice Brandeis in his landmark dissent in Burnet (as quoted at length above). [ 43 ] For example, in the years 1946–1992, the U.S. Supreme Court reversed itself in about 130 cases. [ 44 ] The U.S. Supreme Court has further explained as follows:
[W]hen convinced of former error, this Court has never felt constrained to follow precedent. In constitutional questions, where correction depends upon amendment, and not upon legislative action, this Court throughout its history has freely exercised its power to reexamine the basis of its constitutional decisions.
The Court has stated that where a court gives multiple reasons for a given result, each alternative reason that is "explicitly" labeled by the court as an "independent" ground for the decision is not treated as "simply a dictum". [ 46 ]
As Colin Starger has pointed out, the contemporary rule of stare decisis descended from Brandeis's landmark dissent in Burnet would later split into strong and weak conceptions as a result of the disagreement between Chief Justice William Rehnquist and Associate Justice Thurgood Marshall in Payne v. Tennessee (1991). [ 47 ] The strong conception requires a "special justification" to overrule challenged precedent beyond the fact the precedent was "wrongly decided", while the weak conception holds that a precedent can be overruled if it suffers from "bad reasoning". [ 47 ]
The opinion of Chief Justice John Roberts in the case June Medical Services, LLC v. Russo provides a clear statement of the strong conception of stare decisis . In this case, the Court upheld, by a 5–4 margin, its 2016 decision in Whole Woman's Health v. Hellerstedt , which had struck down a similar Texas law requiring doctors who perform abortions to have admitting privileges at a nearby hospital. Roberts wrote, "The legal doctrine of stare decisis requires us, absent special circumstances, to treat like cases alike." Roberts provided the fifth vote to uphold the 2016 decision, even though he felt it was wrongly decided. [ 48 ]
The doctrine of binding precedent or stare decisis is basic to the English legal system. Special features of the English legal system include the following:
The British House of Lords , as the court of last appeal outside Scotland before it was replaced by the UK Supreme Court , was not strictly bound to always follow its own decisions until the case London Street Tramways v London County Council [1898] AC 375. After this case, once the Lords had given a ruling on a point of law, the matter was closed unless and until Parliament made a change by statute. This is the most strict form of the doctrine of stare decisis (one not applied, previously, in common law jurisdictions, where there was somewhat greater flexibility for a court of last resort to review its own precedent).
This situation changed, however, after the House of Lords issued the Practice Statement of 1966. The House of Lords decided to allow itself to adapt English law to meet changing social conditions. In R v G [2003] UKHL 50, the House of Lords overruled its 1981 decision in R v Caldwell , which had allowed the Lords to establish mens rea ("guilty mind") by measuring a defendant's conduct against that of a "reasonable person", regardless of the defendant's actual state of mind. [ 49 ]
However, the Practice Statement was seldom applied by the House of Lords, usually only as a last resort. Up to 2005, [ needs update ] the House of Lords rejected its past decisions no more than 20 times. [ 50 ] They were reluctant to use it because they feared introducing uncertainty into the law. In particular, the Practice Statement stated that the Lords would be especially reluctant to overrule themselves in criminal cases because of the importance of certainty of that law. The first case involving criminal law to be overruled with the Practice Statement was Anderton v Ryan (1985), which was overruled by R v Shivpuri (1986), two decades after the Practice Statement. Remarkably, the precedent overruled had been made only a year before, but it had been criticised by several academic lawyers. As a result, Lord Bridge stated he was "undeterred by the consideration that the decision in Anderton v Ryan was so recent. The Practice Statement is an effective abandonment of our pretension to infallibility. If a serious error embodied in a decision of this House has distorted the law, the sooner it is corrected the better." [ 51 ] Still, the House of Lords has remained reluctant to overrule itself in some cases; in R v Kansal (2002), the majority of House members adopted the opinion that R v Lambert had been wrongly decided but declined to depart from their earlier decision.
A precedent does not bind a court if the original decision was made per incuriam ("through lack of care"). For example, if a statutory provision or precedent had not been brought to the previous court's attention before its decision, the precedent would not be binding. [ citation needed ]
One of the most important roles of precedent is to resolve ambiguities in other legal texts, such as constitutions, statutes, and regulations. The process involves, first and foremost, consultation of the plain language of the text, as enlightened by the legislative history of enactment, subsequent precedent, and experience with various interpretations of similar texts.
A judge's normal aids include access to all previous cases in which a precedent has been set, and a good English dictionary.
Judges and barristers in the UK use three primary rules for interpreting the law.
Under the literal rule , the judge should do what the actual legislation states rather than trying to do what the judge thinks that it means. The judge should use the plain everyday ordinary meaning of the words, even if this produces an unjust or undesirable outcome. A good example of problems with this method is R v Maginnis (1987), [ 52 ] in which several judges in separate opinions found several different dictionary meanings of the word supply . Another example is Fisher v Bell , where it was held that a shopkeeper who placed an illegal item in a shop window with a price tag did not make an offer to sell it, but merely an invitation to treat, because of the specific meaning of "offer for sale" in contract law . As a result of this case, Parliament amended the statute concerned to end this discrepancy.
The golden rule is used when use of the literal rule would obviously create an absurd result. There are two ways in which the golden rule can be applied: a narrow method, and a broad method. Under the narrow method, when there are apparently two contradictory meanings to the wording of a legislative provision, or the wording is ambiguous, the least absurd is to be preferred. Under the broad method, the court modifies the literal meaning in such a way as to avoid the absurd result. [ 53 ] An example of the latter approach is Adler v George (1964). Under the Official Secrets Act 1920 it was an offence to obstruct HM Forces "in the vicinity of" a prohibited place. Adler argued that he was not in the vicinity of such a place but was actually in it. The court chose not to read the statutory wording in a literal sense to avoid what would otherwise be an absurd result, and Adler was convicted. [ 54 ]
The mischief rule is the most flexible of the interpretation methods. Stemming from Heydon's Case (1584), it allows the court to enforce what the statute is intended to remedy rather than what the words actually say. For example, in Corkery v Carpenter (1950), a man was found guilty of being drunk in charge of a carriage, although in fact he only had a bicycle. A final approach, known as the purposive approach, considers the intention behind the legislation as interpreted by the European Court of Justice; it will no longer be used once the UK fully transitions out of the European Union.
In the United States, the courts have stated consistently that the text of the statute is read as it is written, using the ordinary meaning of the words of the statute.
However, most legal texts have some lingering ambiguity—inevitably, situations arise in which the words chosen by the legislature do not address the precise facts in issue, or there is some tension among two or more statutes. In such cases, a court must analyze the various available sources, and reach a resolution of the ambiguity. The "Canons of statutory construction" are discussed in a separate article . Once the ambiguity is resolved, that resolution has binding effect as described in the rest of this article.
Although inferior courts are bound in theory by superior court precedent, in practice a judge may believe that justice requires an outcome at some variance with precedent, and may distinguish the facts of the individual case on reasoning that does not appear in the binding precedent. On appeal, the appellate court may either adopt the new reasoning, or reverse on the basis of precedent. On the other hand, if the losing party does not appeal (typically because of the cost of the appeal), the lower court decision may remain in effect, at least as to the individual parties.
Occasionally, lower court judges may explicitly state a personal disagreement with the rendered judgment, but are required to rule a particular way because of binding precedent. [ 55 ] Inferior courts cannot evade binding precedent of superior courts, but a court can depart from its own prior decisions. [ 56 ]
In the United States, stare decisis can interact in counterintuitive ways with the federal and state court systems. On an issue of federal law, a state court is not bound by an interpretation of federal law at the district or circuit level, but is bound by an interpretation by the United States Supreme Court. On an interpretation of state law, whether common law or statutory law , the federal courts are bound by the interpretation of a state court of last resort, and are required normally to defer to the precedent of intermediate state courts as well. [ 57 ]
Courts may choose to obey precedent of international jurisdictions, but this is not an application of the doctrine of stare decisis , because foreign decisions are not binding. Rather, a foreign decision that is obeyed on the basis of the soundness of its reasoning will be called persuasive authority —indicating that its effect is limited to the persuasiveness of the reasons it provides.
Originalism is an approach to interpretation of a legal text in which controlling weight is given to the intent of the original authors (at least the intent as inferred by a modern judge). In contrast, a non-originalist looks at other cues to meaning, including the current meaning of the words, the pattern and trend of other judicial decisions, changing context and improved scientific understanding, observation of practical outcomes and "what works", contemporary standards of justice, and stare decisis . Both are directed at interpreting the text, not changing it—interpretation is the process of resolving ambiguity and choosing from among possible meanings, not changing the text.
The two approaches look at different sets of underlying facts that may or may not point in the same direction— stare decisis gives most weight to the newest understanding of a legal text, while originalism gives most weight to the oldest. While they do not necessarily reach different results in every case, the two approaches are in direct tension. Originalists such as Justice Antonin Scalia argue that " Stare decisis is not usually a doctrine used in civil law systems, because it violates the principle that only the legislature may make law." [ 58 ] Justice Scalia argues that America is a civil law nation, not a common law nation. By principle, originalists are generally unwilling to defer to precedent when precedent seems to come into conflict with the originalist's own interpretation of the Constitutional text or inferences of original intent (even in situations where there is no original source statement of that original intent). However, there is still room within an originalist paradigm for stare decisis ; whenever the plain meaning of the text has alternative constructions, past precedent is generally considered a valid guide, with the qualifier being that it cannot change what the text actually says.
Originalists vary in the degree to which they defer to precedent. In his confirmation hearings, Justice Clarence Thomas answered a question from Senator Strom Thurmond , qualifying his willingness to change precedent in this way:
I think overruling a case or reconsidering a case is a very serious matter. Certainly, you would have to be of the view that a case is incorrectly decided, but I think even that is not adequate. There are some cases that you may not agree with that should not be overruled. Stare decisis provides continuity to our system, it provides predictability, and in our process of case-by-case decision-making, I think it is a very important and critical concept. A judge that wants to reconsider a case and certainly one who wants to overrule a case has the burden of demonstrating that not only is the case incorrect, but that it would be appropriate, in view of stare decisis, to make that additional step of overruling that case.
Possibly he has changed his mind, or there is a very large body of cases which merit "the additional step" of ignoring the doctrine; according to Scalia, " Clarence Thomas doesn't believe in stare decisis, period. If a constitutional line of authority is wrong, he would say, let's get it right." [ 60 ]
Caleb Nelson, a former clerk for Justice Thomas and law professor at the University of Virginia, has elaborated on the role of stare decisis in originalist jurisprudence:
American courts of last resort recognize a rebuttable presumption against overruling their own past decisions. In earlier eras, people often suggested that this presumption did not apply if the past decision, in the view of the court's current members, was demonstrably erroneous. But when the Supreme Court makes similar noises today, it is roundly criticized. At least within the academy, conventional wisdom now maintains that a purported demonstration of error is not enough to justify overruling a past decision. ... [T]he conventional wisdom is wrong to suggest that any coherent doctrine of stare decisis must include a presumption against overruling precedent that the current court deems demonstrably erroneous. The doctrine of stare decisis would indeed be no doctrine at all if courts were free to overrule a past decision simply because they would have reached a different decision as an original matter. But when a court says that a past decision is demonstrably erroneous, it is saying not only that it would have reached a different decision as an original matter, but also that the prior court went beyond the range of indeterminacy created by the relevant source of law. ... Americans from the Founding on believed that court decisions could help "liquidate" or settle the meaning of ambiguous provisions of written law. Later courts generally were supposed to abide by such "liquidations". ... To the extent that the underlying legal provision was determinate, however, courts were not thought to be similarly bound by precedent that misinterpreted it. ... Of the Court's current members, Justices Scalia and Thomas seem to have the most faith in the determinacy of the legal texts that come before the Court. It should come as no surprise that they also seem the most willing to overrule the Court's past decisions. ... Prominent journalists and other commentators suggest that there is some contradiction between these Justices' mantra of "judicial restraint" and any systematic re-examination of precedent. But if one believes in the determinacy of the underlying legal texts, one need not define "judicial restraint" solely in terms of fidelity to precedent; one can also speak of fidelity to the texts themselves. [ 61 ]
One of the most prominent critics of the development of legal precedent on a case-by-case basis as both overly reactive and unfairly retroactive was philosopher Jeremy Bentham . He famously attacked the common law as "dog law":
When your dog does anything you want to break him of, you wait till he does it, and then beat him for it. This is the way you make laws for your dog: and this is the way the judges make law for you and me. [ 62 ] [ 63 ]
In a 1997 book, attorney Michael Trotter blamed overreliance by American lawyers on precedent — especially persuasive authority of marginal relevance — rather than the merits of the case at hand, as a major factor behind the escalation of legal costs during the 20th century. He argued that courts should ban the citation of persuasive authority from outside their jurisdiction and force lawyers and parties to argue only from binding precedent, subject to two exceptions:
The disadvantages of stare decisis include its rigidity, the complexity of learning law, the fact that differences between certain cases may be very small and thereby appear illogical and arbitrary, and the slow growth or incremental changes to the law that are in need of major overhaul. [ citation needed ]
An argument often leveled against precedent is that it is undemocratic because it allows judges, who may or may not be elected, to make law. [ 65 ]
A counter-argument (in favor of the advantages of stare decisis ) is that if the legislature wishes to alter the case law (other than constitutional interpretations) by statute , the legislature is empowered to do so. [ 66 ] Critics [ who? ] sometimes accuse particular judges of applying the doctrine selectively, invoking it to support precedent that the judge supported anyway, but ignoring it in order to change precedent with which the judge disagreed. [ 67 ]
There is much discussion about the virtue of using stare decisis . Supporters of the system, such as minimalists , argue that obeying precedent makes decisions "predictable". For example, a business person can be reasonably assured of predicting a decision where the facts of his or her case are sufficiently similar to a case decided previously. This parallels the arguments against retroactive (ex post facto) laws banned by the U.S. Constitution . | https://en.wikipedia.org/wiki/Precedent |
The term preceramic polymer refers to one of various polymeric compounds , which through pyrolysis under appropriate conditions (generally in the absence of oxygen) are converted to ceramic compounds, having high thermal and chemical stability. Ceramics resulting from the pyrolysis of preceramic polymers are known as polymer derived ceramics , or PDCs. Polymer derived ceramics are most often silicon based and include silicon carbide , silicon oxycarbide, silicon nitride and silicon oxynitride. Such PDCs are most commonly amorphous, lacking long-range crystalline order. [ 1 ]
The field of preceramic polymers and polymer derived ceramics in general emerged from the requirements in aerospace industries for heat shield materials such as fiber reinforced ceramic / ceramic composite materials. [ 2 ] The use of preceramic polymers allows for diverse processing techniques relative to conventional ceramic processing. For example, the spinning of fibres, casting of thin films and the molding of complex shapes. Commonly used preceramic polymers include polycarbosilanes and polysiloxanes , which transform through pyrolysis to SiC and SiOC type ceramics respectively. [ 3 ]
A low-cost method of creating complex 3D shapes of ceramic components is to use additive manufacturing (AM) in a two-step process of first printing the artifact in polymer and then converting it to ceramic using pyrolysis to form polymer derived ceramics (PDCs). [ 4 ] This process works with fused filament fabrication (FFF)-based 3-D printing to make fully dense cellular structures, [ 5 ] which can be used for scaffolds for bone regeneration that need to be mechanically stable and have a 3D architecture with interconnected pores. [ 6 ] Various other 3D printing techniques (e.g., stereolithography , digital light processing , and two-photon polymerization ) that are compatible with this strategy have so far been widely investigated. [ 7 ] For example, through photopolymerization methods, preceramic polymers can be used in stereolithography approaches, enabling the additive manufacturing of complex shaped ceramic objects. In such methods, by means of irradiation-driven cross-linking, liquid preceramic polymers transform into rigid thermoset polymers that preserve their shape through the following polymer-to-ceramic transformation that takes place in pyrolysis. In this transformation, polymers transform into glassy ceramic products. [ 1 ] | https://en.wikipedia.org/wiki/Preceramic_polymer |
Precession is the process in which a round part in a round hole, the two rotating with respect to each other, begins rolling around the circumference of the outer bore in a direction opposite to the rotation. It is caused by too much clearance between the parts and a radial force on the part that constantly changes direction. The direction of rotation of the inner part is opposite to the direction of rotation of the radial force. [ 1 ]
In a rotating machine, such as motor, engine, gear train, etc., precession can occur when too much clearance exists between a shaft and a bushing , or between the races and rolling elements in roller and ball bearings . Often a result of wear, inadequate lubrication (too little or too thin), or lack of precision engineering , such precession is usually accompanied by excess vibration and an audible rubbing or buzzing noise. This tends to accelerate the wear process, possibly leading to spalling , galling , or false brinelling (fretting wear) of the contact surfaces.
In stationary parts on a rotating object, such as a bolt threaded into a hole, because the sideways, or radial, load constantly shifts position during use, this lateral force translates into a rolling force that moves opposite to the direction of rotation. This can cause threaded parts to either tighten or loosen under a load, depending on the direction of rotation, typically with a force that can far exceed the typical torque of a wrench. For example, this is a common problem in bicycle pedals, thus on nearly all bikes built after the 1930s, the left-side pedal is equipped with left-hand (backwards) threads, to prevent it from unscrewing itself while riding. [ 1 ]
This type of precession is due purely to contact forces; it does not depend on inertia and is not inversely proportional to spin rate. It is completely unrelated to torque-free and torque-induced precession .
Precession caused by fretting can cause fastenings under large torque loads to unscrew themselves.
Automobiles have also used left-threaded lug nuts on left-side wheels, but now commonly use tapered lug nuts, which do not precess.
Bicycle pedals are left-threaded on the left-hand crank so that precession tightens the pedal rather than loosening it. This may seem counter-intuitive since the pedals rotate in the direction that would unscrew them from the cranks, but the torque exerted due to the precession is several orders of magnitude greater than that caused by bearing friction or even a jammed pedal bearing.
For a pedal, a rotating load arises because the downward pedaling force acts on a spindle that rotates with its crank, so the predominantly downward force effectively rotates about the pedal spindle, opposite to the rotation of the pedal. What may be less evident is that even tightly fitting parts have relative clearance due to their elasticity, metals not being rigid materials, as is evident from steel springs. Under load, micro-deformations occur in such joints, enough to cause motion. This can be seen from wear marks where pedal spindles seat on crank faces. [ 1 ]
Shimano SPD axle units, which can be unscrewed from the pedal body for servicing, have a left-hand thread where the axle unit screws into the right-hand pedal; the opposite case to the pedal-crank interface. Otherwise precession of the pedal body around the axle would tend to unscrew one from the other.
English threaded bicycle bottom brackets are left-threaded on the right-hand (usually drive) side into the bottom bracket shell . This is the opposite of pedals into cranks because the sense of the relative motion between the parts is opposite. (Italian and French threaded bottom brackets have right-hand threading on both sides.)
Splined sprockets precess against any lockring which is screwed into the freehub. Shimano uses a lockring with detents to hold cassette sprockets in place, and this resists precession. Sturmey-Archer once used 12-splined sprockets for 2- and 3-speed racing hubs, and these were secured with a left-threaded lockring for the same reason. ( Fixed gear bicycles also use a left-threaded lockring but this is not because of precession; it is merely to ensure that the lockring tends to tighten, should the sprocket begin to unscrew.)
A bearing supported gear in a manual transmission rotates synchronously with its shaft due to the dog-gear engagement. In this case, the small diametrical clearance in the bearing will induce precession of the roller group relative to the gear mitigating any fretting that occurs if the same bearing rollers always push against the same spot on the gear. Typically the 4th and 5th gears will have precession inducing features, while 1st through 3rd gears might not since cars spend less time in those gears. Transmission failure due to lack of precession is possible in gear boxes when low gears are engaged for long periods of time. [ citation needed ] | https://en.wikipedia.org/wiki/Precession_(mechanical) |
Precession electron diffraction ( PED ) is a specialized method to collect electron diffraction patterns in a transmission electron microscope (TEM). By rotating (precessing) a tilted incident electron beam around the central axis of the microscope, a PED pattern is formed by integration over a collection of diffraction conditions. This produces a quasi-kinematical diffraction pattern that is more suitable as input into direct methods algorithms to determine the crystal structure of the sample.
Precession electron diffraction is accomplished utilizing the standard instrument configuration of a modern TEM . The animation illustrates the geometry used to generate a PED pattern. Specifically, the beam tilt coils located pre-specimen are used to tilt the electron beam off of the optic axis so it is incident with the specimen at an angle, φ. The image shift coils post-specimen are then used to tilt the diffracted beams back in a complementary manner such that the direct beam falls in the center of the diffraction pattern. Finally, the beam is precessed around the optic axis while the diffraction pattern is collected over multiple revolutions.
The result of this process is a diffraction pattern that consists of a summation or integration over the patterns generated during precession. While the geometry of this pattern matches the pattern associated with a normally incident beam, the intensities of the various reflections approximate those of the kinematical pattern much more closely. At any moment in time during precession, the diffraction pattern consists of a Laue circle with a radius equal to the precession angle, φ. These snapshots contain far fewer strongly excited reflections than a normal zone axis pattern and extend farther into reciprocal space . Thus, the composite pattern will display far less dynamical character, and will be well suited for use as input into direct methods calculations. [ 2 ]
PED possesses many advantageous attributes that make it well suited to investigating crystal structures via direct methods approaches: [ 1 ]
Precession electron diffraction is typically conducted using accelerating voltages between 100 and 400 kV. Patterns can be formed under parallel or convergent beam conditions. Most modern TEMs can achieve a tilt angle, φ, ranging from 0 to 3°. Precession frequencies can be varied from a few hertz to kilohertz, but 60 Hz has been used in standard cases. [ 1 ] In choosing a precession rate, it is important to ensure that many revolutions of the beam occur over the relevant exposure time used to record the diffraction pattern. This ensures adequate averaging over the excitation error of each reflection. Beam sensitive samples may dictate shorter exposure times and thus motivate the use of higher precession frequencies.
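As a rough illustration of the rate-versus-exposure consideration above, the following sketch checks how many complete precession revolutions occur within a given exposure; the frequency and exposure time used are illustrative assumptions, not recommended settings.

```python
# Sketch: how many beam revolutions occur during one exposure.
# Values are illustrative assumptions, not recommended settings.

precession_frequency_hz = 60.0   # a typical value cited in the literature
exposure_time_s = 2.0            # assumed camera exposure

revolutions = precession_frequency_hz * exposure_time_s
print(f"Revolutions per exposure: {revolutions:.0f}")

# Many revolutions per exposure -> adequate averaging over the excitation error.
# Beam-sensitive samples needing short exposures may require a higher frequency.
if revolutions < 10:
    print("Warning: consider a higher precession frequency or a longer exposure.")
```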
One of the most significant parameters affecting the diffraction pattern obtained is the precession angle, φ. In general, larger precession angles result in more kinematical diffraction patterns, but both the capabilities of the beam tilt coils in the microscope and the requirements on the probe size limit how large this angle can become in practice. Because PED takes the beam off of the optic axis by design, it accentuates the effect of the spherical aberrations within the probe forming lens. For a given spherical aberration, C s , the probe diameter, d, varies with convergence angle, α, and precession angle, φ, as [ 3 ]
Thus, if the specimen of interest is quite small, the maximum precession angle will be restrained. This is most significant for conditions of convergent beam illumination. 50 nm is a general lower limit on probe size for standard TEMs operating at high precession angles (>30 mrad ), but can be surpassed in C s corrected instruments. [ 4 ] In principle the minimum precessed probe can reach approximately the full-width-half-max (FWHM) of the converged un-precessed probe in any instrument, however in practice the effective precessed probe is typically ~10-50x larger due to uncontrolled aberrations present at high angles of tilt. For example, a 2 nm precessed probe with >40 mrad precession angle was demonstrated in an aberration-corrected Nion UltraSTEM with native sub-Å probe (aberrations corrected to ~35 mrad half-angle). [ 5 ]
If the precession angle is made too large, further complications due to the overlap of the ZOLZ and HOLZ reflections in the projected pattern can occur. This complicates the indexing of the diffraction pattern and can corrupt the measured intensities of reflections near the overlap region, thereby reducing the effectiveness of the collected pattern for direct methods calculations.
For an introduction to the theory of electron diffraction, see part 2 of Williams and Carter's Transmission Electron Microscopy text. [ 6 ]
While it is clear that precession reduces many of the dynamical diffraction effects that plague other forms of electron diffraction, the resulting patterns cannot be considered purely kinematical in general. There are models that attempt to introduce corrections to convert measured PED patterns into true kinematical patterns that can be used for more accurate direct methods calculations, with varying degrees of success. Here, the most basic corrections are discussed. In purely kinematical diffraction, the intensities of various g {\displaystyle \mathbf {g} } reflections, I g k i n e m a t i c a l {\displaystyle I_{\mathbf {g} }^{kinematical}} , are related to the square of the amplitude of the structure factor , F g {\displaystyle F_{\mathbf {g} }} by the equation: {\displaystyle I_{\mathbf {g} }^{kinematical}\propto \left|F_{\mathbf {g} }\right|^{2}}
This relationship is generally far from accurate for experimental dynamical electron diffraction and when many reflections have a large excitation error . First, a Lorentz correction analogous to that used in x-ray diffraction can be applied to account for the fact that reflections are infrequently exactly at the Bragg condition over the course of a PED measurement. This geometrical correction factor can be shown to assume the approximate form: [ 7 ]
where g is the reciprocal space magnitude of the reflection in question and R o is the radius of the Laue circle, usually taken to be equal to φ. While this correction accounts for the integration over the excitation error, it takes no account for the dynamical effects that are ever-present in electron diffraction. This has been accounted for using a two-beam correction following the form of the Blackman correction originally developed for powder x-ray diffraction . Combining this with the aforementioned Lorentz correction yields:
where A g = 2 π t F g k {\displaystyle A_{\mathbf {g} }={\frac {2\pi tF_{\mathbf {g} }}{k}}} , t {\displaystyle t} is the sample thickness, and k {\displaystyle k} is the wave-vector of the electron beam. J 0 {\displaystyle J_{0}} is the Bessel function of zeroeth order.
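For illustration, a minimal numerical sketch of such a two-beam correction factor is given below. It assumes the common Blackman form in which the ratio of dynamical to kinematical intensity is (1/A_g)∫₀^{A_g} J₀(2x) dx; the chosen values of A_g and the simplified treatment are assumptions made for the example, not the full correction of the cited references.

```python
# Sketch of a two-beam (Blackman-type) dynamical correction factor,
# assuming the common form  I_dyn / I_kin = (1/A_g) * integral_0^{A_g} J0(2x) dx.
# All numerical values are illustrative assumptions.
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def blackman_factor(A_g: float) -> float:
    """Ratio of dynamical to kinematical intensity for extinction parameter A_g."""
    integral, _ = quad(lambda x: j0(2.0 * x), 0.0, A_g)
    return integral / A_g

# A_g = 2*pi*t*F_g / k (see the definitions in the text); here chosen directly.
for A_g in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"A_g = {A_g:4.1f}  ->  I_dyn/I_kin ~ {blackman_factor(A_g):.3f}")

# Small A_g (thin crystal / weak reflection) gives a factor near 1 (kinematical);
# large A_g suppresses the intensity relative to the kinematical value.
```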
This form seeks to correct for both geometric and dynamical effects, but is still only an approximation that often fails to significantly improve the kinematic quality of the diffraction pattern (sometimes even worsening it). More complete and accurate treatments of these theoretical correction factors have been shown to adjust measured intensities into better agreement with kinematical patterns. For details, see Chapter 4 of reference. [ 1 ]
Only by considering the full dynamical model through multislice calculations can the diffraction patterns generated by PED be simulated. However, this requires the crystal potential to be known, and thus is most valuable in refining the crystal potentials suggested through direct methods approaches. The theory of precession electron diffraction is still an active area of research, and efforts to improve on the ability to correct measured intensities without a priori knowledge are ongoing.
The first precession electron diffraction system was developed by Vincent and Midgley in Bristol, UK and published in 1994. Preliminary investigation into the Er 2 Ge 2 O 7 crystal structure demonstrated the feasibility of the technique at reducing dynamical effects and providing quasi-kinematical patterns that could be solved through direct methods to determine crystal structure. [ 3 ] Over the next ten years, a number of university groups developed their own precession systems and verified the technique by solving complex crystal structures, including the groups of J. Gjønnes (Oslo), Migliori (Bologna), and L. Marks (Northwestern). [ 1 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
In 2004, NanoMEGAS developed the first commercial precession system capable of being retrofit to any modern TEM. This hardware solution enabled more widespread implementation of the technique and spurred its more mainstream adoption into the crystallography community. Software methods have also been developed to achieve the necessary scanning and descanning using the built-in electronics of the TEM. [ 12 ] HREM Research Inc has developed the QED plug-in for the DigitalMicrograph software. This plug-in enables the widely used software package to collect precession electron diffraction patterns without additional modifications to the microscope.
According to NanoMEGAS, as of June, 2015, more than 200 publications have relied on the technique to solve or corroborate crystal structures; many on materials that could not be solved by other conventional crystallography techniques like x-ray diffraction. Their retrofit hardware system is used in more than 75 laboratories across the world. [ 13 ]
The primary goal of crystallography is to determine the three dimensional arrangement of atoms in a crystalline material. While historically, x-ray crystallography has been the predominant experimental method used to solve crystal structures ab initio , the advantages of precession electron diffraction make it one of the preferred methods of electron crystallography .
Mapping the relative orientation of crystalline grains and/or phases helps understand material texture at the micro and nano scales. In a transmission electron microscope , this is accomplished by recording a diffraction pattern at a large number of points (pixels) over a region of the crystalline specimen. By comparing the recorded patterns to a database of known patterns (either previously indexed experimental patterns or simulated patterns), the relative orientation of grains in the field of view can be determined.
Because this process is highly automated, the quality of the recorded diffraction patterns is crucial to the software's ability to accurately compare and assign orientations to each pixel. Thus, the advantages of PED are well-suited for use with this scanning technique. By instead recording a PED pattern at each pixel, dynamical effects are reduced, and the patterns are more easily compared to simulated data, improving the accuracy of the automated phase/orientation assignment. [ 4 ]
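A generic sketch of this pattern-matching step is shown below: each recorded pattern is compared against a bank of template patterns using a normalized correlation score, and the best-scoring template gives the assigned orientation. The scoring scheme and data shapes are illustrative assumptions, not the algorithm of any specific software package.

```python
# Sketch: assign an orientation to one pixel by correlating its (PED) pattern
# against a bank of template patterns. Generic illustration only.
import numpy as np

def best_template(pattern: np.ndarray, templates: np.ndarray) -> tuple[int, float]:
    """Return (index, score) of the template with the highest normalized correlation."""
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)
    scores = []
    for t in templates:
        tn = (t - t.mean()) / (t.std() + 1e-12)
        scores.append(float((p * tn).mean()))
    idx = int(np.argmax(scores))
    return idx, scores[idx]

# Toy data: 50 random "templates" and one noisy copy of template 17.
rng = np.random.default_rng(0)
templates = rng.random((50, 64, 64))
pattern = templates[17] + 0.1 * rng.standard_normal((64, 64))
print(best_template(pattern, templates))   # expected to pick index 17
```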
Although the PED technique was initially developed for its improved diffraction applications, the advantageous properties of the technique have been found to enhance many other investigative techniques in the TEM. These include bright field and dark field imaging , electron tomography , and composition-probing techniques like energy-dispersive x-ray spectroscopy (EDS) and electron energy loss spectroscopy (EELS).
Though many people conceptualize images and diffraction patterns separately, they contain principally the same information. In the simplest approximation, the two are simply Fourier transforms of one another. Thus, the effects of beam precession on diffraction patterns also have significant effects on the corresponding images in the TEM. Specifically, the reduced dynamical intensity transfer between beams that is associated with PED results in reduced dynamical contrast in images collected during precession of the beam. This includes a reduction in thickness fringes, bend contours, and strain fields. [ 13 ] While these features can often provide useful information, their suppression enables a more straightforward interpretation of diffraction contrast and mass contrast in images.
In an extension of the application of PED to imaging, electron tomography can benefit from the reduction of dynamic contrast effects. Tomography entails collecting a series of images (2-D projections) at various tilt angles and combining them to reconstruct the three dimensional structure of the specimen. Because many dynamical contrast effects are highly sensitive to the orientation of the crystalline sample with respect to the incident beam, these effects can convolute the reconstruction process in tomography. Similarly to single imaging applications, by reducing dynamical contrast, interpretation of the 2-D projections and thus the 3-D reconstruction are more straightforward.
Energy-dispersive x-ray spectroscopy (EDS) and electron energy loss spectroscopy (EELS) are commonly used techniques to both qualitatively and quantitatively probe the composition of samples in the TEM. A primary challenge in the quantitative accuracy of both techniques is the phenomenon of channelling . Put simply, in a crystalline solid, the probability of interaction between an electron and ion in the lattice depends strongly on the momentum (direction and velocity) of the electron. When probing a sample under diffraction conditions near a zone axis, as is often the case in EDS and EELS applications, channelling can have a large impact on the effective interaction of the incident electrons with specific ions in the crystal structure. In practice, this can lead to erroneous measurements of composition that depend strongly on the orientation and thickness of the sample and the accelerating voltage. Since PED entails an integration over incident directions of the electron probe, and generally does not include beams parallel to the zone axis, the detrimental channeling effects outlined above can be minimized, yielding far more accurate composition measurements in both techniques. [ 29 ] [ 30 ] | https://en.wikipedia.org/wiki/Precession_electron_diffraction |
Precious Plastic is an open hardware plastic recycling project and is a type of open source digital commons project. The project was started in 2013 by Dave Hakkens and is now in its fourth iteration. It relies on a series of machines and tools which grind, melt, and inject recycled plastic, allowing for the creation of new products out of recycled plastic on a small scale.
In 2012, Dave Hakkens started working on Precious Plastic as a part of his studies at the Design Academy in Eindhoven . [ 1 ] The project was released in 2013 as Version 1.0. [ 1 ]
The work on version 2 was started in 2015 [ 1 ] and was released in March 2016. [ 1 ] [ 2 ] In 2016, Precious Plastic also created a marketplace called Bazar for selling machines and products targeted to DIY designers to recycle plastic. [ 3 ]
The team started working on version 3.0 from early 2017 and was launched in October 2017. [ 1 ]
In May 2018, Precious Plastic received the Famae award of €300,000 to further develop the project. [ 4 ] The city of Eindhoven also provided them a big workspace free of charge. [ 4 ] In October 2018, Precious Plastic project officially opened its doors at the VDMA building in Eindhoven. [ 5 ] The work on Version 4.0 was started in September 2018. [ 6 ] [ 1 ]
In 2019 Hakkens and Precious Plastic were involved in disagreement over whether to burn or recycle plastics collected from the oceans . [ 7 ]
The version 4, which includes business models and starter kits for creating recycling systems, was announced in January 2020. [ 6 ] [ 1 ]
In December 2020, One Army was launched as an umbrella organization for a growing collection of projects including Precious Plastic, Project Kamp, PhoneBloks , Fixing Fashion, and Story Hopper. [ 8 ] [ 9 ]
Fixing Fashion was launched in March 2021. [ 10 ] [ 11 ]
Precious Plastic is an open hardware plastic recycling project and is a type of open source digital commons project. [ 12 ] It relies on a series of machines and tools which grind, melt, and inject recycled plastic, allowing for the creation of new products out of recycled plastic on a small scale. [ 13 ] The project allows individual consumers to set up "their own miniature recycling company". [ 14 ]
The project is composed of more than 40,000 people [ 15 ] in over 400 work spaces, either remotely or on site in the Netherlands. [ 16 ] [ 17 ] [ 3 ] All the information produced by the project such as codes, drawings, and source materials are available for free online under the Creative Commons Attribution - Share Alike International 4.0 license. [ 12 ]
Precious Plastic Fiji was formed in 2017 as a NGO dedicated to eliminating plastic waste . [ 18 ]
In 2018 after a workshop in China, a company, Plastplan, grew out of the Precious Plastic project in Iceland to promote an alternative to shipping plastic to Sweden to be burned for electricity. [ 19 ] [ 20 ]
In Hawaii in 2019, Puna Precious Plastic, with more than 1,000 members as a part of the Precious Plastic worldwide movement, collected about 1,000 pounds of plastic, which it planned to sort, shred and melt into plastic bricks and lumber for construction. [ 21 ] [ 22 ] [ 23 ]
In Thailand, Precious Plastic Bangkok collects plastic bottle caps to shred, melt, and reshape into new products, including monk's robes. [ 24 ] [ 25 ]
With a grant from Dane County Arts and in partnership with Community GroundWorks, the nonprofit that oversees Troy Kids’ Garden, and the hackerspace Sector 67, a branch of Precious Plastic was launched in Madison, Wisconsin. [ 26 ]
In September 2021, One Army announced a "Verified" Precious Plastic workspaces program to give recognition to "high quality recycling work". Many locations around the world were listed. [ 27 ]
In 2018, a group called Precious Plastic Texas was formed by students at the University of Texas after learning about what was being done in Thailand. [ 28 ] In 2019 students in the Environmental Fellows Program's gateway seminar at DePauw University in Indiana began work on a Precious Plastic project, and received funding from the Joseph and Carol Danks Centers Council Fund for Multidisciplinary Projects. The project will continue in a gateway seminar and three art classes, and they may add an off-campus trip to a Precious Plastic site. [ 29 ] In Australia, UNSW business school students, working closely with Precious Plastic, won the 2019 Big Idea competition in the postgraduate category with their start-up idea called Closed Loop – a local-level plastic waste recycling business. [ 30 ] Engineering students at the Monash University chapter created a Precious Plastic one-metre cube portable recycling machine to transport to events for display. [ 31 ] | https://en.wikipedia.org/wiki/Precious_Plastic |
In materials science , a precipitate-free zone ( PFZ ) refers to microscopic localized regions around grain boundaries that are free of precipitates (solid impurities forced outwards from the grain during crystallization ). It is a common phenomenon that arises in polycrystalline materials ( crystalline materials with stochastically -oriented grains) where heterogeneous nucleation of precipitates is the dominant nucleation mechanism. [ 1 ] [ 2 ] [ 3 ] This is because grain boundaries are high-energy surfaces that act as sinks for vacancies , causing regions adjacent to a grain boundary to be devoid of vacancies. [ 4 ] As it is energetically favorable for heterogeneous nucleation to occur preferentially around defect-rich sites such as vacancies, nucleation of precipitates is impeded in the vacancy-free regions immediately adjacent to grain boundaries [ 4 ]
Pioneering studies on the theory [ 5 ] and experimental observation [ 6 ] of PFZs were made in the 1960s.
PFZs are detrimental to the mechanical properties of materials. [ 3 ] In particular, PFZs degrade the material's hardness, because the lack of precipitates in PFZs leads to these regions having fewer pinning sites. Dislocation motion – a condition necessary to cause a material to yield – will require an appreciably lower applied shear stress in PFZs, and consequently these locally weak zones will lead to plastic deformation. [ 7 ] [ 8 ] The width of PFZs has also been found to be negatively correlated with intergranular fracture. [ 1 ] [ 7 ] [ 8 ]
PFZs also accelerate pitting corrosion and stress corrosion cracking , significantly reducing the usable life of these materials in chemically aggressive environments. [ 9 ]
It has been shown that PFZs can be minimized by quenching . First, quenching increases undercooling , favoring homogeneous nucleation in PFZs as it lowers the nucleation energy barrier even in the absence of potent nucleation sites. Additionally, low temperatures also lead to a reduction in diffusion rates, minimizing the loss of vacancies and premature growth of grain boundary precipitates. [ 5 ] However, since diffusion rates at low temperatures are suppressed, the aging time (time taken for treatment to yield a desired grain size) would be long. Therefore, one processing technique to circumvent this is to increase the temperature slightly once a sufficient number of homogeneous nucleation sites have been formed.
Another technique to minimize PFZs is to introduce impurity elements, as they strongly interact with vacancies and allow for a more even distribution of vacancies in the material. [ 10 ] [ 5 ] [ 11 ] One example would be to introduce Mg in Al alloys. [ 3 ]
Cyclic strengthening (CS), a process wherein a material is mechanically pushed and pulled repeatedly at room temperature, creates fine precipitates that are homogeneously distributed throughout the microstructure. [ 12 ] It has been suggested as an alternative to conventional precipitation-hardened alloys as this process achieves strengthening effects without introducing PFZs. | https://en.wikipedia.org/wiki/Precipitate-free_zone |
Precipitation hardening , also called age hardening or particle hardening , is a heat treatment technique used to increase the yield strength of malleable materials, including most structural alloys of aluminium , magnesium , nickel , titanium , and some steels , stainless steels , and duplex stainless steel . In superalloys , it is known to cause yield strength anomaly providing excellent high-temperature strength.
Precipitation hardening relies on changes in solid solubility with temperature to produce fine particles of an impurity phase , which impede the movement of dislocations , or defects in a crystal 's lattice . Since dislocations are often the dominant carriers of plasticity , this serves to harden the material. The impurities play the same role as the particle substances in particle-reinforced composite materials. Just as the formation of ice in air can produce clouds, snow, or hail, depending upon the thermal history of a given portion of the atmosphere, precipitation in solids can produce many different sizes of particles, which have radically different properties. Unlike ordinary tempering , alloys must be kept at elevated temperature for hours to allow precipitation to take place. This time delay is called "aging". Solution treatment and aging is sometimes abbreviated "STA" in specifications and certificates for metals.
Two different heat treatments involving precipitates can alter the strength of a material: solution heat treating and precipitation heat treating. Solid solution strengthening involves formation of a single-phase solid solution via quenching. Precipitation heat treating involves the addition of impurity particles to increase a material's strength. [ 1 ]
This technique exploits the phenomenon of supersaturation , and involves careful balancing of the driving force for precipitation and the thermal activation energy available for both desirable and undesirable processes.
Nucleation occurs at a relatively high temperature (often just below the solubility limit) so that the kinetic barrier of surface energy can be more easily overcome and the maximum number of precipitate particles can form. These particles are then allowed to grow at lower temperature in a process called ageing . This is carried out under conditions of low solubility so that thermodynamics drive a greater total volume of precipitate formation.
Diffusion 's exponential dependence upon temperature makes precipitation strengthening, like all heat treatments, a fairly delicate process. Too little diffusion ( under ageing ), and the particles will be too small to impede dislocations effectively; too much ( over ageing ), and they will be too large and dispersed to interact with the majority of dislocations.
Precipitation strengthening is possible if the line of solid solubility slopes strongly toward the center of a phase diagram . While a large volume of precipitate particles is desirable, a small enough amount of the alloying element should be added so that it remains easily soluble at some reasonable annealing temperature. Although large volumes are often wanted, they are wanted in small particle sizes as to avoid a decrease in strength as is explained below.
Elements used for precipitation strengthening in typical aluminium and titanium alloys make up about 10% of their composition. While binary alloys are more easily understood as an academic exercise, commercial alloys often use three components for precipitation strengthening, in compositions such as Al(Mg, Cu ) and Ti(Al, V ). A large number of other constituents may be unintentional, but benign, or may be added for other purposes such as grain refinement or corrosion resistance. An example is the addition of Sc and Zr to aluminum alloys to form FCC L1 2 structures that help refine grains and strengthen the material. [ 2 ] In some cases, such as many aluminium alloys, an increase in strength is achieved at the expense of corrosion resistance. More recent technology is focused on additive manufacturing due to the higher amount of metastable phases that can be obtained due to the fast cooling, whereas traditional casting is more limited to equilibrium phases. [ 3 ]
The addition of large amounts of nickel and chromium needed for corrosion resistance in stainless steels means that traditional hardening and tempering methods are not effective. However, precipitates of chromium, copper, or other elements can strengthen the steel by similar amounts in comparison to hardening and tempering. The strength can be tailored by adjusting the annealing process, with lower initial temperatures resulting in higher strengths. The lower initial temperatures increase the driving force of nucleation. More driving force means more nucleation sites, and more sites means more places for dislocations to be disrupted while the finished part is in use.
Many alloy systems allow the ageing temperature to be adjusted. For instance, some aluminium alloys used to make rivets for aircraft construction are kept in dry ice from their initial heat treatment until they are installed in the structure. After this type of rivet is deformed into its final shape, ageing occurs at room temperature and increases its strength, locking the structure together. Higher ageing temperatures would risk over-ageing other parts of the structure, and require expensive post-assembly heat treatment because a high ageing temperature promotes the precipitate to grow too readily.
There are several ways by which a matrix can be hardened by precipitates, which could also be different for deforming precipitates and non-deforming precipitates. [ 4 ]
Deforming particles (weak precipitates):
Coherency hardening occurs when the interface between the particles and the matrix is coherent, which depends on parameters like particle size and the way that particles are introduced. Coherency is where the lattice of the precipitate and that of the matrix are continuous across the interface. [ 5 ] Small particles precipitated from supersaturated solid solution usually have coherent interfaces with the matrix. Coherency hardening originates from the atomic volume difference between the precipitate and the matrix, which results in a coherency strain. If the atomic volume of the precipitate is smaller, there will be tension because the lattice atoms are located closer together than their normal positions, while if the atomic volume of the precipitate is larger, there will be compression of the lattice atoms, as they are farther apart than their normal positions. Regardless of whether the lattice is under compression or tension, the associated stress field interacts with dislocations, reducing dislocation motion either by repulsion or attraction of the dislocations and leading to an increase in yield strength, similar to the size effect in solid solution strengthening. What differentiates this mechanism from solid solution strengthening is that the precipitate has a definite size rather than being a single atom, and therefore interacts more strongly with dislocations.
Modulus hardening results from the different shear modulus of the precipitate and the matrix, which leads to an energy change of dislocation line tension when the dislocation line cuts the precipitate. Also, the dislocation line could bend when entering the precipitate, increasing the affected length of the dislocation line. Again, the strengthening arises in a way similar to that of solid solution strengthening, where there is a mismatch in the lattice that interacts with the dislocations, impeding their motion. Of course, the severity of the interaction is different than that of solid solution and coherency strengthening.
Chemical strengthening is associated with the surface energy of the newly introduced precipitate-matrix interface when the particle is sheared by dislocations. Because it takes energy to make the surface, some of the stress that is causing dislocation motion is accommodated by the additional surfaces. Like modulus hardening, the analysis of interfacial area can be complicated by dislocation line distortion.
Order strengthening occurs when the precipitate is an ordered structure such that the bond energy before and after shearing is different. For example, in an ordered cubic crystal with composition AB, the bond energy of A-A and B-B after shearing is higher than that of the A-B bond before. The associated energy increase per unit area is the anti-phase boundary energy, which accumulates gradually as the dislocation passes through the particle. However, a second dislocation could remove the anti-phase domain left by the first dislocation when it traverses the particle. The attraction of the particle and the repulsion of the first dislocation maintain a balanced distance between the two dislocations, which makes order strengthening more complicated. Except when the particles are very fine, this mechanism is generally not as effective at strengthening as the others. Another way to consider this mechanism is that when a dislocation shears a particle, the stacking sequence between the newly created surface and the matrix is broken, and the bonding is not stable. To restore the sequence at this interface, another dislocation is needed to shift the stacking. The first and second dislocations are often called a superdislocation. Because superdislocations are required to shear these particles, there is strengthening because of the decreased dislocation motion.
Non-deforming particles (strong precipitate):
In non-deforming particles, where the spacing is small enough or the precipitate-matrix interface is disordered, the dislocation bows instead of shearing. The strengthening is related to the effective spacing between particles, considering finite particle size, but not to particle strength, because once the particle is strong enough for the dislocations to bow rather than cut, further increases in the resistance to dislocation penetration will not affect strengthening. The main mechanism is therefore Orowan strengthening, in which the strong particles do not allow dislocations to move past. Bowing must therefore occur, and this bowing can cause dislocation loops to build up, which decreases the space available for additional dislocations to bow between. If the dislocations cannot shear particles and cannot move past them, then dislocation motion is successfully impeded.
The primary strengthening species in precipitation strengthening are second-phase particles. These particles impede the movement of dislocations throughout the lattice. Whether second-phase particles will precipitate from solution can be determined from the solvus (solid solubility) line on the phase diagram for the alloy. Physically, this strengthening effect can be attributed both to size and modulus effects , and to interfacial or surface energy . [ 4 ] [ 6 ]
The presence of second phase particles often causes lattice distortions. These lattice distortions result when the precipitate particles differ in size and crystallographic structure from the host atoms. Smaller precipitate particles in a host lattice leads to a tensile stress, whereas larger precipitate particles leads to a compressive stress. Dislocation defects also create a stress field. Above the dislocation there is a compressive stress and below there is a tensile stress. Consequently, there is a negative interaction energy between a dislocation and a precipitate that each respectively cause a compressive and a tensile stress or vice versa. In other words, the dislocation will be attracted to the precipitate. In addition, there is a positive interaction energy between a dislocation and a precipitate that have the same type of stress field. This means that the dislocation will be repulsed by the precipitate.
Precipitate particles also serve by locally changing the stiffness of a material. Dislocations are repulsed by regions of higher stiffness. Conversely, if the precipitate causes the material to be locally more compliant, then the dislocation will be attracted to that region. In addition, there are three types of interphase boundaries (IPBs).
The first type is a coherent or ordered IPB, in which the atoms match up one by one along the boundary. Due to the difference in lattice parameters of the two phases, a coherency strain energy is associated with this type of boundary. The second type is a fully disordered IPB, which has no coherency strains, but the particle tends to be non-deforming to dislocations. The last one is a partially ordered IPB, in which coherency strains are partially relieved by the periodic introduction of dislocations along the boundary.
In coherent precipitates in a matrix, if the precipitate has a lattice parameter less than that of the matrix, then the atomic match across the IPB leads to an internal stress field that interacts with moving dislocations.
There are two deformation paths. One is coherency hardening , for which the lattice mismatch is {\displaystyle \varepsilon _{coh}=\Delta a/a} and the resulting strengthening is {\displaystyle \tau =7G\left|\varepsilon _{coh}\right|^{\frac {3}{2}}(rf/b)^{\frac {1}{2}}}
Where G {\displaystyle G} is the shear modulus, ε c o h {\displaystyle \varepsilon _{coh}} is the coherent lattice mismatch, r {\displaystyle r} is the particle radius, f {\displaystyle f} is the particle volume fraction, b {\displaystyle b} is the burgers vector, r f / b {\displaystyle rf/b} equals the concentration.
The other one is modulus hardening . The energy of the dislocation line is U m = G m b 2 / 2 {\displaystyle U_{m}=G_{m}b^{2}/2} ; when it cuts through the precipitate, its energy is U p = G p b 2 / 2 {\displaystyle U_{p}=G_{p}b^{2}/2} , so the change in line segment energy is {\displaystyle \Delta U=U_{p}-U_{m}=\left(G_{p}-G_{m}\right)b^{2}/2}
The maximum dislocation length affected is the particle diameter, the line tension change takes place gradually over a distance equal to r {\displaystyle r} . The interaction force between the dislocation and the precipitate is
Furthermore, a dislocation may cut through a precipitate particle, and introduce more precipitate-matrix interface, which is chemical strengthening . When the dislocation is entering the particle and is within the particle, the upper part of the particle shears b with respect to the lower part accompanies the dislocation entry. A similar process occurs when the dislocation exits the particle. The complete transit is accompanied by creation of matrix-precipitate surface area of approximate magnitude A = 2 π r b {\displaystyle A=2\pi rb\,\!} , where r is the radius of the particle and b is the magnitude of the burgers vector. The resulting increase in surface energy is E = 2 π r b γ s {\displaystyle E=2\pi rb\gamma _{s}\,\!} , where γ s {\displaystyle \gamma _{s}} is the surface energy. The maximum force between the dislocation and particle is F m a x = π r γ s {\displaystyle F_{max}=\pi r\gamma _{s}\,\!} , the corresponding flow stress should be Δ τ = F m a x / b L = π r γ s / b L {\displaystyle \Delta \tau =F_{max}/bL=\pi r\gamma _{s}/bL} .
When a particle is sheared by a dislocation, a threshold shear stress is needed to deform the particle. The expression for the required shear stress is as follows:
When the precipitate size is small, the required shear stress τ {\displaystyle \tau } is proportional to the precipitate size as r 1 / 2 {\displaystyle r^{1/2}} . However, for a fixed particle volume fraction, this stress may decrease at larger values of r owing to an increase in particle spacing. The overall level of the curve is raised by increases in either inherent particle strength or particle volume fraction.
The dislocation can also bow around a precipitate particle through the so-called Orowan mechanism.
Since the particle is non-deforming, the dislocation bows around the particles ( ϕ c = 0 {\displaystyle \phi _{c}=0} ), the stress required to effect the bypassing is inversely proportional to the interparticle spacing ( L − 2 r ) {\displaystyle (L-2r)} , that is, τ b = G b / ( L − 2 r ) {\displaystyle \tau _{b}=Gb/(L-2r)} , where r {\displaystyle r} is the particle radius. Dislocation loops encircle the particles after the bypass operation, a subsequent dislocation would have to be extruded between the loops. Thus, the effective particle spacing for the second dislocation is reduced to ( L − 2 r ′ ) {\displaystyle (L-2r')} with r ′ > r {\displaystyle r'>r} , and the bypassing stress for this dislocation should be τ b ′ = G b / ( L − 2 r ′ ) {\displaystyle \tau _{b}'=Gb/(L-2r')} , which is greater than for the first one. However, as the radius of particle increases, L {\displaystyle L} will increase so as to maintain the same volume fraction of precipitates, ( L − 2 r ) {\displaystyle (L-2r)} will increase and τ b {\displaystyle \tau _{b}} will decrease. As a result, the material will become weaker as the precipitate size increases.
For a fixed particle volume fraction, τ b {\displaystyle \tau _{b}} decreases with increasing r as this is accompanied by an increase in particle spacing.
On the other hand, increasing f {\displaystyle f} increases the level of the stress as a result of a finer particle spacing. The level of τ b {\displaystyle \tau _{b}} is unaffected by particle strength. That is, once a particle is strong enough to resist cutting, any further increase in its resistance to dislocation penetration has no effect on τ b {\displaystyle \tau _{b}} , which depends only on matrix properties and effective particle spacing.
If particles of A of volume fraction f 1 {\displaystyle f_{1}} are dispersed in a matrix, particles are sheared for r < r c 1 {\displaystyle r<r_{c1}} and are bypassed for r > r c 1 {\displaystyle r>r_{c1}} , maximum strength is obtained at r = r c 1 {\displaystyle r=r_{c1}} , where the cutting and bowing stresses are equal. If inherently harder particles of B of the same volume fraction are present, the level of the τ c {\displaystyle \tau _{c}} curve is increased but that of the τ b {\displaystyle \tau _{b}} one is not. Maximum hardening, greater than that for A particles, is found at r c 2 < r c 1 {\displaystyle r_{c2}<r_{c1}} . Increasing the volume fraction of A raises the level of both τ b {\displaystyle \tau _{b}} and τ c {\displaystyle \tau _{c}} and increases the maximum strength obtained. The latter is found at r c 3 {\displaystyle r_{c3}} , which may be either less than or greater than r c 1 {\displaystyle r_{c1}} depending on the shape of the τ − r {\displaystyle \tau -r} curve.
There are two main types of equations to describe the two mechanisms for precipitation hardening based on weak and strong precipitates. Weak precipitates can be sheared by dislocations while strong precipitates cannot, and therefore the dislocation must bow. First, it is important to consider the difference between these two different mechanisms in terms of the dislocation line tension that they make. [ 7 ] The line tension balance equation is:
Where ϕ c {\displaystyle \phi _{c}} is the radius of the dislocation at a certain stress. Strong obstacles have small ϕ c {\displaystyle \phi _{c}} due to the bowing of the dislocation. Still, decreasing obstacle strength will increase the ϕ c {\displaystyle \phi _{c}} and must be included in the calculation. L’ is also equal to the effective spacing between obstacles L. This leaves an equation for strong obstacles:
Considering weak particles, ϕ c {\displaystyle \phi _{c}} should be nearing 180 ∘ {\displaystyle 180^{\circ }} due to the dislocation line staying relatively straight through obstacles. Also , L’ will be:
which states the weak particle equation:
Now, consider the mechanisms for each regime:
Dislocation cutting through particles: For most cutting mechanisms in the early stage, the strengthening increases with ϵ 3 2 ( f r / b ) 1 2 {\displaystyle \epsilon ^{\tfrac {3}{2}}(fr/b)^{\tfrac {1}{2}}} , where ϵ {\displaystyle \epsilon } is a dimensionless mismatch parameter (for example, in coherency hardening, ϵ {\displaystyle \epsilon } is the fractional change of precipitate and matrix lattice parameter), f {\displaystyle f} is the volume fraction of precipitate, r {\displaystyle r} is the precipitate radius, and b {\displaystyle b} is the magnitude of the Burgers vector . According to this relationship, the material's strength increases with increasing mismatch, volume fraction, and particle size, and it is easier for dislocations to cut through particles with smaller radii.
For different types of hardening through cutting, governing equations are as following.
For coherency hardening,
τ c o h = 7 G | ϵ c o h | 3 2 ( f r / b ) 1 2 {\displaystyle \tau _{coh}=7G\left|\epsilon _{coh}\right|^{\frac {3}{2}}(fr/b)^{\frac {1}{2}}} ,
ϵ c o h = ( a p − a m ) / a m {\displaystyle \epsilon _{coh}=(a_{p}-a_{m})/a_{m}} ,
where τ {\displaystyle \tau } is increased shear stress, G {\displaystyle G} is the shear modulus of the matrix, a p {\displaystyle a_{p}} and a m {\displaystyle a_{m}} are the lattice parameter of the precipitate or the matrix.
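As a worked illustration of the coherency-hardening expression above, the sketch below evaluates ε_coh and τ_coh for a set of assumed input values (shear modulus, lattice parameters, particle radius, volume fraction, and Burgers vector); the numbers are hypothetical and only indicate the order of magnitude of the result.

```python
# Sketch: evaluate the coherency-hardening increment
#   tau_coh = 7 * G * |eps_coh|^(3/2) * (f*r/b)^(1/2),  eps_coh = (a_p - a_m)/a_m
# All input values below are hypothetical.

G = 26e9          # shear modulus of the matrix, Pa (roughly Al-like, assumed)
a_m = 0.405e-9    # matrix lattice parameter, m (assumed)
a_p = 0.408e-9    # precipitate lattice parameter, m (assumed)
r = 5e-9          # particle radius, m
f = 0.02          # particle volume fraction
b = 0.286e-9      # Burgers vector magnitude, m

eps_coh = (a_p - a_m) / a_m
tau_coh = 7.0 * G * abs(eps_coh) ** 1.5 * (f * r / b) ** 0.5

print(f"eps_coh  = {eps_coh:.4f}")
print(f"tau_coh ~= {tau_coh / 1e6:.1f} MPa")
```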
For modulus hardening,
τ G p = 0.01 G ϵ G p 3 2 ( f r / b ) 1 2 {\displaystyle \tau _{G_{p}}=0.01G\epsilon _{G_{p}}^{\frac {3}{2}}(fr/b)^{\frac {1}{2}}} ,
ϵ G p = ( G p − G m ) / G m {\displaystyle \epsilon _{G_{p}}=\left(G_{p}-G_{m}\right)/G_{m}} ,
where G p {\displaystyle G_{p}} and G m {\displaystyle G_{m}} are the shear modulus of the precipitate or the matrix.
For chemical strengthening,
τ c h e m = 2 G ϵ c h 3 2 ( f r / b ) 1 2 {\displaystyle \tau _{chem}=2G\epsilon _{ch}^{\frac {3}{2}}(fr/b)^{\frac {1}{2}}} ,
ϵ c h = γ s / G r {\displaystyle \epsilon _{ch}=\gamma _{s}/Gr} ,
where γ s {\displaystyle \gamma _{s}} is the particle-matrix interphase surface energy.
For order strengthening,
τ o r d = 0.7 G ϵ o r d 3 2 ( f r / b ) 1 2 {\displaystyle \tau _{ord}=0.7G\epsilon _{ord}^{\frac {3}{2}}(fr/b)^{\frac {1}{2}}}
(low ϵ o r d {\displaystyle \epsilon _{ord}} , early stage precipitation), where the dislocations are widely separated;
τ o r d = 0.7 G ( ϵ o r d 3 2 ( f r / b ) 1 2 − 0.7 ϵ o r d f ) {\displaystyle \tau _{ord}=0.7G\left(\epsilon _{ord}^{\tfrac {3}{2}}(fr/b)^{\tfrac {1}{2}}-0.7\epsilon _{ord}f\right)}
(high ϵ o r d {\displaystyle \epsilon _{ord}} , early stage precipitation), where the dislocations are not widely separated; ϵ o r d = A P B E s G b {\displaystyle \epsilon _{ord}={\frac {APBE_{s}}{Gb}}} , where A P B E s {\displaystyle APBE_{s}} is anti-phase boundary energy.
Dislocations bowing around particles: When the precipitate is strong enough to resist dislocation penetration, dislocation bows and the maximum stress is given by the Orowan equation. Dislocation bowing, also called Orowan strengthening, [ 8 ] is more likely to occur when the particle density in the material is lower.
{\displaystyle \tau =Gb/(L-2r)} , where τ {\displaystyle \tau } is the material strength, G {\displaystyle G} is the shear modulus, b {\displaystyle b} is the magnitude of the Burgers vector, L {\displaystyle L} is the distance between pinning points, and r {\displaystyle r} is the second phase particle radius. This governing equation shows that for dislocation bowing the strength is inversely proportional to the second phase particle radius r {\displaystyle r} , because when the volume fraction of the precipitate is fixed, the spacing L {\displaystyle L} between particles increases concurrently with the particle radius r {\displaystyle r} , therefore L − 2 r {\displaystyle L-2r} increases with r {\displaystyle r} .
These governing equations show that the precipitation hardening mechanism depends on the size of the precipitate particles. At small r {\displaystyle r} , cutting will dominate, while at large r {\displaystyle r} , bowing will dominate.
Plotting both equations shows that there is a critical radius at which maximum strengthening occurs. This critical radius is typically 5–30 nm.
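The crossover between the cutting and bowing regimes can also be estimated numerically. The sketch below uses the coherency-cutting and Orowan bowing expressions quoted in this article and, as an additional simplifying assumption, keeps the volume fraction fixed by scaling the particle spacing as L = r·sqrt(2π/(3f)); all numerical inputs are hypothetical.

```python
# Sketch: find the particle radius at which the cutting stress equals the
# Orowan bowing stress, using the expressions quoted in this article:
#   cutting (coherency): tau_cut = 7*G*|eps|^(3/2) * (f*r/b)^(1/2)
#   bowing  (Orowan)   : tau_bow = G*b / (L - 2*r)
# Simplifying assumption: at fixed volume fraction f the planar spacing is
# taken to scale as L = r*sqrt(2*pi/(3*f)). All numbers are hypothetical.
import numpy as np

G, b = 26e9, 0.286e-9          # shear modulus (Pa), Burgers vector (m) - assumed
eps, f = 0.0074, 0.02          # lattice mismatch and volume fraction - assumed

def tau_cut(r):
    return 7.0 * G * abs(eps) ** 1.5 * np.sqrt(f * r / b)

def tau_bow(r):
    L = r * np.sqrt(2.0 * np.pi / (3.0 * f))
    return G * b / (L - 2.0 * r)

radii = np.linspace(1e-9, 50e-9, 500)
diff = tau_cut(radii) - tau_bow(radii)
crossover = radii[np.argmin(np.abs(diff))]
print(f"Estimated critical radius ~ {crossover * 1e9:.1f} nm")
# Below the crossover, cutting controls the strength (tau rises with r);
# above it, bowing controls (tau falls as spacing grows), giving a peak.
```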
The Orowan strengthening model above neglects changes to the dislocations due to the bending. If bowing is accounted for, and the instability condition in the Frank-Read mechanism is assumed, the critical stress for dislocations bowing between pinning segments can be described as: [ 9 ]
τ c = A ( θ ) G b 2 π L ′ l n ( L ′ r ) {\displaystyle \tau _{c}=A(\theta ){\frac {Gb}{2\pi L^{'}}}ln\left({\frac {L^{'}}{r}}\right)}
where A {\displaystyle A} is a function of θ {\displaystyle \theta } , θ {\displaystyle \theta } is the angle between the dislocation line and the Burgers vector, L ′ {\displaystyle L^{'}} is the effective particle separation, b {\displaystyle b} is the Burgers vector, and r {\displaystyle r} is the particle radius.
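A short numerical sketch of this expression, with A(θ) simply set to 1 and all other inputs assumed, shows how the logarithmic term modifies a simple Gb/L′ estimate.

```python
# Sketch: evaluate the bow-out critical stress
#   tau_c = A(theta) * G*b / (2*pi*L') * ln(L'/r)
# with A(theta) taken as 1 for illustration; all inputs are assumed values.
import math

G, b = 26e9, 0.286e-9      # Pa, m (assumed)
L_eff = 100e-9             # effective particle separation, m (assumed)
r = 5e-9                   # particle radius, m (assumed)
A = 1.0                    # orientation factor A(theta), set to 1 here

tau_c = A * G * b / (2.0 * math.pi * L_eff) * math.log(L_eff / r)
tau_simple = G * b / L_eff  # simple Gb/L'-type estimate for comparison

print(f"tau_c (with log term) ~= {tau_c / 1e6:.0f} MPa")
print(f"G*b/L' (simple)       ~= {tau_simple / 1e6:.0f} MPa")
```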
Grain Size Control
Precipitates in a polycrystalline material can act as grain refiners if they are nucleated or located near grain boundaries, where they pin the grain boundaries as an alloy is solidifying and do not allow a coarse microstructure to form. This is helpful, as finer microstructures often outperform coarser ones in mechanical properties at room temperature. In recent times nano-precipitates have been studied under creep conditions. These precipitates can also pin grain boundaries at higher temperatures, essentially acting as "friction". Very fine precipitates can also impede grain-boundary sliding under diffusional creep conditions, and if the precipitates are homogeneously dispersed in the matrix, these same precipitates within the grains may interact with dislocations under dislocation creep conditions. [ 10 ]
Secondary Precipitates
Different precipitates, depending on their elemental compositions, can form under certain aging conditions that were not previously there. Secondary precipitates can arise from removing solutes from the matrix solid solution states. The control of this can be exploited to control the microstructure and influence properties. [ 11 ]
While significant effort has been made to develop new alloys, experimental results take time and money to obtain. One possible alternative is simulation with density functional theory (DFT), which, in the context of precipitation hardening, can take advantage of the crystalline structure of precipitates and of the matrix and allows the exploration of far more alternatives than traditional experiments.
One strategy for these simulations is to focus on the ordered structures that can be found in many metal alloys, such as the long-period stacking ordered (LPSO) structures that have been observed in numerous systems. [ 12 ] [ 13 ] [ 14 ] The LPSO structure is a long-period layered configuration along one axis, with some layers enriched in the precipitating elements. This allows the symmetry of the supercells to be exploited and suits the currently available DFT methods well. [ 15 ]
In this way, some researchers have developed strategies to screen possible strengthening precipitates that allow the weight of some metal alloys to be reduced. [ 16 ] For example, Mg alloys have received growing interest as replacements for aluminium and steel in the vehicle industry because magnesium is one of the lightest structural metals. However, Mg alloys show issues with low strength and ductility which have limited their use. To overcome this, precipitation hardening, through the addition of rare earth elements, has been used to improve alloy strength and ductility. Specifically, LPSO structures were found to be responsible for these improvements, yielding an Mg alloy that exhibited a high yield strength of 610 MPa at 5% elongation at room temperature. [ 17 ]
Looking for cheaper alternatives to rare-earth (RE) elements, researchers simulated ternary Mg–Xl–Xs systems, where Xl and Xs denote atoms larger and smaller than Mg, respectively. The study confirmed more than 85 Mg–RE–Xs LPSO structures, demonstrating the ability of DFT to predict known ternary LPSO structures. The 11 non-RE Xl elements were then explored, and 4 of them were found to be thermodynamically stable; one of these is the Mg–Ca–Zn system, which is predicted to form an LPSO structure. [ 18 ]
Following these DFT predictions, other investigators carried out experiments on the Mg–Zn–Y–Mn–Ca system and found that a 0.34 at.% Ca addition enhanced the mechanical properties of the system due to the formation of LPSO structures, achieving "a good balance of strength and ductility". [ 19 ] | https://en.wikipedia.org/wiki/Precipitation_hardening |
In meteorology , the different types of precipitation often include the character, formation, or phase of the precipitation which is falling to ground level. There are three distinct ways that precipitation can occur: convective, stratiform, and orographic. Convective precipitation is generally more intense, and of shorter duration, than stratiform precipitation. Orographic precipitation occurs when moist air is forced upwards over rising terrain, such as a mountain, and condenses on the slope.
Precipitation can fall in either liquid or solid phase, as a mixture of both, or transition between them at the freezing level . Liquid forms of precipitation include rain, drizzle, and dew. Rain or drizzle which freezes on contact with a surface within a subfreezing air mass gains the preceding adjective "freezing", becoming what is known as freezing rain or freezing drizzle . Slush is a mixture of both liquid and solid precipitation. Frozen forms of precipitation include snow , ice crystals , ice pellets (sleet), hail , and graupel . Their respective intensities are classified either by rate of precipitation or by visibility restriction.
Precipitation falls in many forms, or phases. They can be subdivided into:
The parenthesized letters are the shortened METAR codes for each phenomenon. [ 1 ]
Precipitation occurs when evapotranspiration takes place and local air becomes saturated with water vapor, and so can no longer maintain the level of water vapor in gaseous form, which creates clouds. This occurs when less dense moist air cools, usually when an air mass rises through the atmosphere to higher and cooler altitudes. However, an air mass can also cool without a change in altitude (e.g. through radiative cooling , or ground contact with cold terrain).
Convective precipitation occurs when air rises vertically through the (temporarily) self-sustaining mechanism of convection . Stratiform precipitation occurs when large air masses rise diagonally as larger-scale winds and atmospheric dynamics force them to move over each other. Orographic precipitation is similar, except the upwards motion is forced when a moving air mass encounters the rising slope of a landform such as a mountain ridge or slope.
Convection occurs when the Earth's surface, especially within a conditionally unstable or moist atmosphere , becomes heated more than its surroundings, leading in turn to significant evapotranspiration. Convective rain and showery precipitation are the result of large convective clouds, for example cumulonimbus or cumulus congestus clouds. In its initial stages, this precipitation generally falls as showers over a relatively small area and with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited vertical and horizontal extent and do not conserve much water. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform and convective precipitation often both occur within the same complex of convection-generated cumulonimbus. [ 2 ] [ 3 ]
Graupel and hail indicate convection when either or both are present at the surface. Their presence indicates that precipitation formed at or above the freezing level, the point in the atmosphere, varying in altitude, at which the temperature is 0 °C. [ 4 ] In mid-latitude regions, convective precipitation is often associated with cold fronts, where it is often found behind the front, occasionally initiating a squall line .
Frontal precipitation is the result of frontal systems surrounding extratropical cyclones or lows, which form when warm and tropical air meets cooler, subpolar air. Frontal precipitation typically falls from nimbostratus clouds. [ 5 ]
When masses of air with different densities (moisture and temperature characteristics) meet, the less dense warmer air overrides the more dense colder air. The warmer air is forced to rise and, if conditions are right, creates an effect of saturation and condensation, causing precipitation. In turn, precipitation can enhance the temperature and dewpoint contrast along a frontal boundary, creating more precipitation while the front lasts. Passing weather fronts often result in sudden changes in environmental temperature, and in turn the humidity and pressure in the air at ground level as different air masses switch the local weather.
Warm fronts occur where advancing warm air displaces a pre-existing cold air mass. The warm air overrides the cooler air and moves upward. Warm fronts are followed by extended periods of light rain and drizzle because, after the warm air rises above the cooler air (which remains near the ground), it gradually cools as it expands while being lifted, which forms clouds and leads to precipitation.
Cold fronts occur when an advancing mass of cooler air dislodges and plows through a mass of warm air. This type of transition is sharper and faster than that of a warm front, since cold air is denser than warm air and sinks beneath it. Precipitation duration is often shorter and generally more intense than that which occurs ahead of warm fronts.
A wide variety of weather can be found along an occluded front , usually near the center of a mature low-pressure system, but their passage is usually associated with a drying of the air mass.
Orographic or relief rainfall is caused when masses of air are forced up the side of elevated land formations, such as large mountains or plateaus (often referred to as an upslope effect). The lift of the air up the side of the mountain results in adiabatic cooling with altitude, and ultimately condensation and precipitation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds ), a more moist climate usually prevails on the windward side of a mountain than on the leeward (downwind) side, as wind carries moist air masses and orographic precipitation. Moisture is precipitated and removed by orographic lift, leaving drier air (see Foehn ) on the descending (generally warming), leeward side where a rain shadow is observed. [ 6 ]
In Hawaii , Mount Waiʻaleʻale ( Waiʻaleʻale ), on the island of Kauai, is notable for its extreme rainfall. It currently has the highest average annual rainfall on Earth, with approximately 460 inches (12,000 mm) per year. [ 7 ] Storm systems affect the region with heavy rains during winter, between October and March. Local climates vary considerably on each island due to their topography, divisible into windward ( Koʻolau ) and leeward ( Kona ) regions based upon location relative to the higher surrounding mountains. Windward sides face the east-to-northeast trade winds and receive much more clouds and rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. [ 8 ] On the island of Oahu, high amounts of clouds and often rain can usually be observed around the windward mountain peaks, while the southern parts of the island (including most of Honolulu and Waikiki) receive dramatically less rainfall throughout the year.
In South America, the Andes mountain range blocks Pacific Ocean winds and moisture that arrives on the continent, resulting in a desert-like climate just downwind across western Argentina. [ 9 ] The Sierra Nevada range creates the same drying effect in North America, causing the Great Basin Desert , [ 10 ] Mojave Desert , and Sonoran Desert .
Precipitation is measured using a rain gauge , and more recently by remote sensing techniques such as weather radar . When classified according to the rate of precipitation, rain can be divided into categories. Light rain describes rainfall which falls at a rate of between a trace and 2.5 millimetres (0.098 in) per hour. Moderate rain describes rainfall with a precipitation rate of between 2.6 millimetres (0.10 in) and 7.6 millimetres (0.30 in) per hour. Heavy rain describes rainfall with a precipitation rate above 7.6 millimetres (0.30 in) per hour, and violent rain has a rate of more than 50 millimetres (2.0 in) per hour. [ 11 ]
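The rate-based categories above translate directly into a simple classification rule. The sketch below encodes those thresholds; how the small gap between 2.5 and 2.6 mm/h is handled at the boundary is a simplifying assumption.

```python
def classify_rain_intensity(rate_mm_per_hr):
    """Classify rainfall by precipitation rate (mm/h) using the
    thresholds described above."""
    if rate_mm_per_hr > 50:
        return "violent"
    if rate_mm_per_hr > 7.6:
        return "heavy"
    if rate_mm_per_hr >= 2.6:
        return "moderate"
    if rate_mm_per_hr > 0:
        return "light"
    return "none"

for rate in (0.5, 5.0, 20.0, 60.0):
    print(rate, classify_rain_intensity(rate))
```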
Snowfall intensity is classified in terms of visibility instead. When the visibility is over 1 kilometre (0.62 mi), snow is determined to be light. Moderate snow describes snowfall with visibility restrictions between 0.5 kilometres (0.31 mi) and 1 kilometre (0.62 mi). Heavy snowfall describes conditions when visibility is restricted below 0.5 kilometres (0.31 mi). [ 12 ] | https://en.wikipedia.org/wiki/Precipitation_types |
In meteorology , a precipitationshed is the upwind ocean and land surface that contributes evaporation to a given, downwind location's precipitation . The concept has been described as an "atmospheric watershed ". [ 1 ] The concept itself rests on a broad foundation of scholarly work examining the evaporative sources of rainfall. [ 2 ] [ 3 ] [ 4 ] Since its formal definition, the precipitationshed has become an element in water security studies, [ 5 ] examinations of sustainability, [ 6 ] and mentioned as a potentially useful tool for examining vulnerability of rainfall dependent ecosystems . [ 7 ]
In an effort to conceptualize the recycling of evaporation from a specific location to the spatially explicit region that receives this moisture, the precipitationshed concept was expanded to the evaporationshed. This expanded concept has been highlighted as particularly useful for providing a spatially explicit region for examining the impacts of significant land-use change, such as deforestation , irrigation , or agricultural intensification. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Precipitationshed |
Precise Point Positioning (PPP) is a global navigation satellite system (GNSS) positioning method that calculates very precise positions, with errors as small as a few centimeters under good conditions. PPP is a combination of several relatively sophisticated GNSS position refinement techniques that can be used with near-consumer-grade hardware to yield near-survey-grade results. PPP uses a single GNSS receiver, unlike standard RTK methods, which use a temporarily fixed base receiver in the field as well as a relatively nearby mobile receiver. PPP methods overlap somewhat with DGNSS positioning methods, which use permanent reference stations to quantify systemic errors.
PPP relies on two general sources of information: direct observables and ephemerides. [ 1 ]
Direct observables are data that the GPS receiver can measure on its own. One direct observable for PPP is carrier phase , i.e., not only the timing message encoded in the GNSS signal, but also whether the wave of that signal is going "up" or "down" at a given moment. Loosely speaking, phase can be thought of as the digits after the decimal point in the number of waves between a given GNSS satellite and the receiver. By itself, phase measurement cannot yield even an approximate position, but once other methods have narrowed down the position estimate to within a diameter corresponding to a single wavelength (roughly 20 cm), phase information can refine the estimate.
Another important direct observable is the differential delay between GNSS signals of different frequencies. This is useful because a major source of position error is variability in how GNSS signals are slowed in the ionosphere , which is affected relatively unpredictably by space weather . The ionosphere is dispersive , meaning that signals of different frequency are slowed by different amounts. By measuring the difference in the delays between signals of different frequencies, the receiver software (or later post-processing) can model and remove the delay at any frequency. This process is only approximate, and non-dispersive sources of delay remain (notably from water vapor moving around in the troposphere ), but it improves accuracy significantly.
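One common way this dual-frequency correction is carried out (a standard GNSS technique, not something specific to any particular receiver or product) is the so-called ionosphere-free linear combination, in which pseudoranges measured at two frequencies are weighted so that the first-order dispersive delay cancels. The sketch below illustrates the idea with invented pseudorange values.

```python
# Minimal sketch of the dual-frequency "ionosphere-free" combination.
# The pseudorange values below are invented for illustration.

F_L1 = 1575.42e6  # GPS L1 frequency, Hz
F_L2 = 1227.60e6  # GPS L2 frequency, Hz

def ionosphere_free_range(p1, p2, f1=F_L1, f2=F_L2):
    """Combine pseudoranges at two frequencies so that the first-order
    (dispersive) ionospheric delay cancels."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Hypothetical pseudoranges (metres): the L2 measurement is delayed more,
# because the ionospheric delay scales roughly as 1/f^2.
true_range = 20_000_000.0
p1 = true_range + 5.0                          # 5 m delay at L1
p2 = true_range + 5.0 * (F_L1 / F_L2) ** 2     # larger delay at L2

print(f"{ionosphere_free_range(p1, p2):.3f} m")  # ≈ 20,000,000.000
```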
Ephemerides are precise measurements of the GNSS satellites' orbits, made by the geodetic community (the International GNSS Service and other public and private organizations) with global networks of ground stations. Satellite navigation works on the principle that the satellites' positions at any given time are known, but in practice, micrometeoroid impacts, variation in solar radiation pressure , and so on mean that orbits are not perfectly predictable. The ephemerides that the satellites broadcast are earlier forecasts, up to a few hours old, and are less accurate (by up to a few meters) than carefully processed observations of where the satellites actually were. Therefore, if a GNSS receiver system stores raw observations, they can be processed later against a more accurate ephemeris than what was in the GNSS messages, yielding more accurate position estimates than what would be possible with standard realtime calculations. This post-processing technique has long been standard for GNSS applications that need high accuracy. More recently, projects such as APPS , the Automatic Precise Positioning Service of NASA JPL , have begun publishing improved ephemerides over the internet with very low latency. PPP uses these streams to apply in near realtime the same kind of correction that used to be done in post-processing.
Precise positioning is increasingly used in the fields including robotics , autonomous navigation , agriculture, construction, and mining. [ 2 ]
The major weaknesses of PPP, compared with conventional consumer GNSS methods, are that it takes more processing power, it requires an outside ephemeris correction stream, and it takes some time (up to tens of minutes) to converge to full accuracy. This makes it relatively unappealing for applications such as fleet tracking , where centimeter-scale precision is generally not worth the extra complexity, and more useful in areas like robotics, where there may already be an assumption of onboard processing power and frequent data transfer . | https://en.wikipedia.org/wiki/Precise_Point_Positioning |
PrecisionFDA (stylized precisionFDA ) [ 2 ] [ 3 ] is a secure, collaborative, high-performance computing platform that has established a growing community of experts around the analysis of biological datasets in order to advance precision medicine , inform regulatory science , and enable improvements in health outcomes. This cloud-based platform is developed and served by the United States Food and Drug Administration (FDA). [ 4 ] [ 5 ] PrecisionFDA connects experts, citizen scientists, and scholars from around the world and provides them with a library of computational tools, workflow features, and reference data. The platform allows researchers to upload and compare data against reference genomes , and execute bioinformatic pipelines. [ 6 ] The variant call file (VCF) comparator tool also enables users to compare their genetic test results to reference genomes. [ 7 ] The platform's code is open source and available on GitHub . [ 8 ] The platform also features a crowdsourcing model to sponsor community challenges in order to stimulate the development of innovative analytics that inform precision medicine and regulatory science. Community members from around the world come together to participate in scientific challenges, solving problems that demonstrate the effectiveness of their tools, testing the capabilities of the platform, sharing their results, and engaging the community in discussions. Globally, precisionFDA has more than 5,000 users.
The precisionFDA team collaborates with multiple FDA Centers, the National Institutes of Health, and other government agencies to support the vision and intent of the American Innovation & Competitiveness Act and the 21st Century Cures Act .
President Barack Obama announced the formation of the Precision Medicine Initiative during the State of the Union Address in January 2015. [ 6 ] [ 9 ] In August 2015, the FDA announced the launch of precisionFDA as a part of the initiative. [ 10 ] [ 11 ] In November 2015, the FDA launched a "closed beta" version of the platform, giving select groups and individuals access to the platform. [ 6 ] [ 9 ] An open beta version of the platform was released in December 2015. [ 12 ] [ 13 ] In February 2016, the FDA announced the first precisionFDA challenge, the Consistency Challenge , which tasked users with testing the reliability and reproducibility of gene mapping and variant calling tools. [ 14 ] [ 15 ] The Truth Challenge followed the Consistency Challenge and asked participants to assess the accuracy of bioinformatics tools for identifying genetic variants. The Hidden Treasures – Warm Up challenge evaluated variant calling pipelines on a targeted set of in silico injected variants. The CFSAN Pathogen Detection Challenge evaluated bioinformatics pipelines for accurate and rapid detection of foodborne pathogens in metagenomics samples. The CDRH ID-NGS Diagnostics Biothreat Challenge addressed the issue of early detection during pathogen outbreaks by evaluating algorithms for identifying and quantifying emerging pathogens, such as the Ebola virus, from their genomic fingerprints. Subsequent challenges expanded beyond genomics into multi-omics and other data types. The NCI-CPTAC Multi-omics Enabled Sample Mislabeling Correction Challenge addressed the issue of sample mislabeling, which contributes to irreproducible research results and invalid conclusions, by evaluating algorithms for accurate detection and correction of mislabeled samples using multi-omics to enable Rigor and Reproducibility in biomedical research. [ 16 ] The Brain Cancer Predictive Modeling and Biomarker Discovery Challenge , run in collaboration with Georgetown University, asked participants to develop machine learning (ML) and artificial intelligence (AI) models to identify biomarkers and predict brain cancer patient outcomes using gene expression, DNA copy number, and clinical data. The Gaining New Insights by Detecting Adverse Event Anomalies Using FDA Open Data Challenge engaged data scientists to use unsupervised ML and AI techniques to identify anomalies in FDA adverse events, regulated product substances, and clinical trials data, essential for improving the mission of FDA. The Truth Challenge V2 assessed variant calling pipeline performance in difficult-to-map regions, segmental duplications, and the Major Histocompatibility Complex (MHC) using Genome in a Bottle human genome benchmarks. The COVID-19 Risk Factor Modeling Challenge , in collaboration with the Veterans Health Administration , called upon the scientific and analytics community to develop and evaluate computational models to predict COVID-19 related health outcomes in Veterans. In total, ten community challenges have been completed on precisionFDA, which have generated a total of 562 responses from 240 participants. PrecisionFDA challenges have led to meaningful regulatory science advancements, including published best practices for benchmarking germline small-variant calls in human genomes. [ 17 ] In addition, the challenges have incentivized the development and benchmarking of novel computational pipelines, including a pipeline that uses deep neural networks to identify genetic variants. [ 18 ]
In addition to challenges, in-person and virtual app-a-thon events, which promote the development and sharing of apps and tools, are hosted on precisionFDA. In August 2016, precisionFDA launched App-a-Thon in a Box , which aimed to encourage the creation and sharing of Next Generation Sequencing (NGS) apps and executable Linux command wrappers. The most recent app-a-thon, the BioCompute Object App-a-thon , sought to improve the reproducibility of bioinformatics pipelines. Participants were asked to create BioCompute Objects (BCOs) , a standardized schema for reporting computational scientific workflows, and apps to develop BCOs and check their conformance to BioCompute Specifications.
In April 2016, precisionFDA was awarded the top prize in the Informatics category at the Bio IT World Best Practices Awards. [ 19 ] In 2018, the DNAnexus platform, which is leveraged by precisionFDA, was granted Authority to Operate (ATO) by Health and Human Services (HHS) for FedRAMP Moderate . [ 20 ] In addition, the precisionFDA team received an FDA Commissioner’s Special Citation Award in 2019 for outstanding achievements and collaboration in the development of the precisionFDA platform promoting innovative regulatory science research to modernize the regulation of NGS-based genomic tests. In 2019, precisionFDA received a FedHealthIT Innovation Award and transitioned from a beta to a production release state.
PrecisionFDA is an open-source, cloud-based platform for collaborating and testing bioinformatics pipelines and multi-omics data. [ 4 ] [ 5 ] PrecisionFDA is available to all innovators in the field of multi-omics, including members of the scientific community, diagnostic test providers, pharmaceutical and biotechnology companies, and other constituencies such as advocacy groups and patients. The platform allows researchers to upload and analyze data from both their own and other groups’ studies. [ 8 ] [ 9 ] [ 21 ] The platform hosts files such as reference genomes and genomic data, comparisons (quantification of similarities between sets of genomic variants), and apps (bioinformatics pipelines) that scientists and researchers can upload and work with. [ 6 ] [ 22 ] The precisionFDA virtual lab environment provides users with their own secure private area to conduct their research, and with configurable shared spaces where the FDA and external parties can share data and tools. For challenge sponsors, the precisionFDA platform provides a comprehensive challenge development framework enabling presentation of challenge assets, grading of submissions, and publication of results. To get involved, visit precision.fda.gov and request access to become a member of a growing community that is informing the evolution of precision medicine, advancing regulatory science, and enabling improvements in health outcomes. | https://en.wikipedia.org/wiki/PrecisionFDA |
Precision BioSciences, Inc. is a publicly traded American clinical stage gene editing company headquartered in Durham, North Carolina . [ 3 ] Founded in 2006, Precision is focused on developing both in vivo and ex vivo gene editing therapies using its proprietary "ARCUS" genome editing platform. [ 4 ]
Derek Jantz and Jeff Smith met as postdoctoral fellows at Duke University , [ 5 ] and in March 2006, they founded Precision BioSciences along with Matt Kane, a student at the Duke Fuqua School of Business at the time. [ 3 ] The company went through two rounds of early funding: a Series A round led by venBio to fund development of the genome editing platform, [ 6 ] and Series B financing to fund product development efforts. [ 7 ] [ 8 ] The company completed its initial public offering in 2019, and trades under the Nasdaq ticker DTIL. [ 9 ] [ 10 ]
Precision entered into a partnership with Eli Lilly in November 2020 to use ARCUS editing for up to six in vivo targets connected to genetic disorders, [ 11 ] beginning with Duchenne muscular dystrophy . [ 12 ] In September 2021, Precision announced two more collaborations, with UK biotechnology company Tiziana Life Sciences to explore using foralumab to aid chimeric antigen receptor (CAR) T cell therapy, [ 13 ] and with Philadelphia-based iECURE to advance candidates into clinical trials and investigate how ARCUS can help treat liver diseases. [ 11 ] Michael Amoroso, the former CEO of cell and gene therapy developer Abeona Therapeutics, succeeded Matt Kane as President and CEO in October 2021. [ 3 ] That December, Precision announced its entry into an agreement with a syndicate of investors led by ACCELR8 to spin off its subsidiary, Elo Life Systems, and create an independent company focused on food and agriculture business. [ 14 ]
Precision BioSciences' proprietary technology is the ARCUS platform and ARCUS nucleases . [ 4 ] [ 8 ] ARCUS nucleases are based on a naturally occurring genome editing enzyme, I-CreI , a homing endonuclease that evolved in the algae Chlamydomonas reinhardtii [ 4 ] [ 12 ] to make highly specific cuts and DNA insertions in cellular DNA. [ 15 ] The nuclease is able to deactivate itself once gene edits are made, which minimizes potential off-targeting. [ 12 ] [ 16 ] An ARCUS nuclease is also much smaller in size than CRISPR spCas9. [ 17 ] It can use either adeno-associated virus (AAV) vectors or lipid nanoparticles (LNPs) for delivery to specific tissues and cells. [ 18 ] Precision has used ARCUS nucleases to develop multiple ex vivo allogeneic , "off-the-shelf" CAR T cell immunotherapies in early-stage clinical trials. [ 4 ] [ 19 ] The company also uses ARCUS for in vivo gene editing programs, [ 4 ] some of which are in preclinical development as of May 2022. [ 20 ] [ 18 ]
Similar to I-CreI, ARCUS nucleases generate a unique cleavage site in DNA that is characterized by four-base-pair, 3' overhangs. [ 4 ] ARCUS nucleases can perform a range of complex edits, including gene insertion , gene excision , and gene repair . [ 8 ] [ 15 ] ARCUS nucleases are able to enact all editing operations in one step, which enables efficient multiplexing of edits. [ 19 ]
Precision has demonstrated some additional applications of the ARCUS platform, including treating ornithine transcarbamylase deficiency in newborn nonhuman primates and in the use of a LNP to treat chronic Hepatitis B . [ 15 ] The company is also pursuing PBGENE-PCSK9, a candidate to treat familial hypercholesterolemia , and PBGENE-PH1, a candidate to treat primary hyperoxaluria type 1. [ 21 ] [ 22 ]
Precision is in the process of developing multiple candidates targeting non-Hodgkin lymphoma , acute lymphoblastic leukemia (ALL), [ 4 ] [ 19 ] and multiple myeloma . [ 23 ] The company's lead candidate targeting CD19 , PBCAR0191, [ 19 ] received orphan drug designation from the U.S. Food and Drug Administration for the treatment of ALL and mantle cell lymphoma , an aggressive subtype of non-Hodgkin lymphoma, as well as fast track designation for the treatment of B-cell ALL. [ 23 ] PBCAR0191 began its Phase 1/2a clinical trial of adult subjects in March 2019. [ 24 ] [ 23 ] In June 2022, Precision reported a 100% response rate, a 73% complete response rate, and a 50% durable response rate, and the company sought to increase enrollment in the study. [ 25 ] [ 26 ]
Precision is also developing PBCAR19B as an anti-CD19 [ 26 ] stealth cell candidate that employs a single gene edit to knock down beta-2 microglobulin , for which a Phase 1 study began in June 2021. [ 27 ] [ 28 ] The company is also conducting a Phase 1/2a clinical trial evaluating PBCAR269A, its investigational allogeneic B-cell maturation antigen -targeted CAR T cell therapy, for the treatment of multiple myeloma. [ 26 ] PBCAR269A began its Phase 1 trials in April 2020, [ 29 ] and as of July 2022 had moved on to recruitment for its Phase 1/2a study, which features PBCAR269A in combination with nirogacestat , a gamma secretase inhibitor. [ 26 ] In 2020, the FDA granted fast track designation to PBCAR269A for the treatment of relapsed or refractory multiple myeloma, having previously provided orphan drug designation. [ 23 ] | https://en.wikipedia.org/wiki/Precision_BioSciences |
The AN/PSN-11 Precision Lightweight GPS Receiver ( PLGR , colloquially " plugger ") is a ruggedized , hand-held, single-frequency GPS receiver fielded by the United States Armed Forces . It incorporates the Precise Positioning Service — Security Module (PPS-SM) to access the encrypted P(Y)-code GPS signal .
The PLGR was introduced in January 1990 and was extensively fielded until 2004, when it was replaced by its successor, the Defense Advanced GPS Receiver (DAGR). In that time period more than 165,000 PLGRs were procured worldwide, and despite being superseded by the DAGR, large numbers remain in unit inventories; it continues to be the most widely used GPS receiver in the United States military.
The PLGR measures 9.5 by 4.1 by 2.6 inches (24 cm × 10 cm × 7 cm) and weighs 2.75 pounds (1.25 kg) with batteries. It was originally delivered to the United States military with a six-year warranty; however, this was extended to ten years in June 2000.
| https://en.wikipedia.org/wiki/Precision_Lightweight_GPS_Receiver |
Precision agriculture ( PA ) is a management strategy that gathers, processes and analyzes temporal, spatial and individual plant and animal data and combines it with other information to support management decisions according to estimated variability for improved resource use efficiency, productivity, quality, profitability and sustainability of agricultural production. [ 2 ] It is used in both crop and livestock production . [ 3 ] Precision agriculture often employs technologies to automate agricultural operations , improving their diagnosis, decision-making or performance. [ 4 ] [ 5 ] The goal of precision agriculture research is to define a decision support system for whole-farm management that optimizes returns on inputs while preserving resources. [ 6 ] [ 7 ]
Among these many approaches is a phytogeomorphological approach which ties multi-year crop growth stability/characteristics to topological terrain attributes. The interest in the phytogeomorphological approach stems from the fact that the geomorphology component typically dictates the hydrology of the farm field. [ 8 ] [ 9 ]
The practice of precision agriculture has been enabled by the advent of GPS and GNSS . The farmer's and/or researcher's ability to locate their precise position in a field allows for the creation of maps of the spatial variability of as many variables as can be measured (e.g. crop yield, terrain features/topography, organic matter content, moisture levels, nitrogen levels, pH, EC, Mg, K, and others). [ 10 ] Similar data is collected by sensor arrays mounted on GPS-equipped combine harvesters . These arrays consist of real-time sensors that measure everything from chlorophyll levels to plant water status, along with multispectral imagery. [ 11 ] This data is used in conjunction with satellite imagery by variable rate technology (VRT) including seeders, sprayers, etc. to optimally distribute resources. However, recent technological advances have enabled the use of real-time sensors directly in soil, which can wirelessly transmit data without the need of human presence. [ 12 ] [ 13 ] [ 14 ]
Precision agriculture can benefit from unmanned aerial vehicles , that are relatively inexpensive and can be operated by novice pilots. These agricultural drones [ 15 ] can be equipped with multispectral or RGB cameras to capture many images of a field that can be stitched together using photogrammetric methods to create orthophotos . These multispectral images contain multiple values per pixel in addition to the traditional red, green blue values such as near infrared and red-edge spectrum values used to process and analyze vegetative indexes such as NDVI maps. [ 16 ] These drones are capable of capturing imagery and providing additional geographical references such as elevation, which allows software to perform map algebra functions to build precise topography maps. These topographic maps can be used to correlate crop health with topography, the results of which can be used to optimize crop inputs such as water, fertilizer or chemicals such as herbicides and growth regulators through variable rate applications.
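As an illustration of one of the vegetative indexes mentioned above, the following sketch computes NDVI per pixel from near-infrared and red reflectance bands. The tiny arrays stand in for the stitched multispectral rasters a drone survey would produce; the values are invented for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index:
    NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Tiny synthetic 2x2 "orthophoto" tile (reflectance fractions).
nir_band = np.array([[0.60, 0.55], [0.20, 0.58]])
red_band = np.array([[0.10, 0.12], [0.18, 0.11]])
print(ndvi(nir_band, red_band))  # values near +0.7 suggest healthy vegetation
```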
Precision agriculture education
The agricultural industry, including its educators, is still in the relatively early stages of adopting precision agriculture technologies. Precision agriculture has led many to believe that industry-related technologies should lead research and education instead of the other way around. In classrooms, conferences, workshops, and field days, educators of precision agriculture have struggled to keep up with the number of questions being asked. Training people to use precision agriculture technologies has proven difficult, in contrast to teaching the fundamental ideas and concepts, which are intuitive and rather straightforward. [ 17 ]
Precision agriculture is a key component of the third wave of modern agricultural revolutions . The first agricultural revolution was the increase of mechanized agriculture , from 1900 to 1930. Each farmer produced enough food to feed about 26 people during this time. [ 18 ] The 1960s prompted the Green Revolution with new methods of genetic modification, which led to each farmer feeding about 156 people. [ 18 ] It is expected that by 2050, the global population will reach about 9.6 billion, and food production must effectively double from current levels in order to feed every mouth. With new technological advancements in the agricultural revolution of precision farming, each farmer will be able to feed 265 people on the same acreage. [ 18 ]
The first wave of the precision agricultural revolution came in the forms of satellite and aerial imagery, weather prediction, variable rate fertilizer application, and crop health indicators. [ 19 ] The second wave aggregates the machine data for even more precise planting, topographical mapping, and soil data. [ 20 ]
Precision agriculture aims to optimize field-level management with regard to:
Precision agriculture also provides farmers with a wealth of information to:
Prescriptive planting is a type of farming system that delivers data-driven planting advice that can determine variable planting rates to accommodate varying conditions across a single field, in order to maximize yield. It has been described as " Big Data on the farm." Monsanto , DuPont and others are launching this technology in the US. [ 21 ] [ 22 ]
Precision agriculture uses many tools, but some of the basics include tractors, combines, sprayers, planters, and diggers, which can all be fitted with auto-guidance systems. The small devices on the equipment that use GIS ( geographic information system ) are what make precision agriculture what it is; the GIS system can be thought of as the "brain". To use precision agriculture, the equipment needs to be wired with the right technology and data systems. Other tools include Variable rate technology (VRT) , Global positioning system , Geographical information system , Grid sampling , and remote sensors. [ 23 ]
Geolocating a field enables the farmer to overlay information gathered from the analysis of soils and residual nitrogen, and information on previous crops and soil resistivity. Geolocation is done in two ways
Intra and inter-field variability may result from a number of factors. These include climatic conditions ( hail , drought, rain, etc.), soils (texture, depth, nitrogen levels), cropping practices ( no-till farming ), weeds , and disease.
Permanent indicators—chiefly soil indicators—provide farmers with information about the main environmental constants.
Point indicators allow them to track a crop's status, i.e., to see whether diseases are developing, if the crop is suffering from water stress , nitrogen stress, or lodging, whether it has been damaged by ice, and so on.
This information may come from weather stations and other sensors (soil electrical resistivity, detection with the naked eye, satellite imagery, etc.). Soil resistivity measurements combined with soil analysis make it possible to measure moisture content . Soil resistivity is also a relatively simple and cheap measurement. [ 24 ]
Using soil maps , farmers can pursue two strategies to adjust field inputs:
Decisions may be based on decision-support models (crop simulation models and recommendation models) based on big data , but in the final analysis it is up to the farmer to decide in terms of business value and impacts on the environment - a role being taken over by artificial intelligence (AI) systems based on machine learning and artificial neural networks .
It is important to realize why PA technology is or is not adopted, "for PA technology adoption to occur the farmer has to perceive the technology as useful and easy to use. It might be insufficient to have positive outside data on the economic benefits of PA technology as perceptions of farmers have to reflect these economic considerations." [ 28 ]
New information and communication technologies make field-level crop management more operational and easier to achieve for farmers.
Application of crop management decisions calls for agricultural equipment that supports variable-rate technology ( VRT ), for example varying seed density along with the variable-rate application (VRA) of nitrogen and phytosanitary products. [ 29 ]
Precision agriculture uses technology on agricultural equipment (e.g. tractors, sprayers, harvesters, etc.):
The concept of precision agriculture first emerged in the United States in the early 1980s. In 1985, researchers at the University of Minnesota varied lime inputs in crop fields. It was also at this time that the practice of grid sampling appeared (applying a fixed grid of one sample per hectare). Towards the end of the 1980s, this technique was used to derive the first input recommendation maps for fertilizers and pH corrections. The use of yield sensors developed from new technologies, combined with the advent of GPS receivers, has been gaining ground ever since. Today, such systems cover several million hectares.
In the American Midwest (US), it is associated not with sustainable agriculture but with mainstream farmers who are trying to maximize profits by spending money only in areas that require fertilizer. This practice allows the farmer to vary the rate of fertilizer across the field according to the need identified by GPS guided Grid or Zone Sampling. Fertilizer that would have been spread in areas that do not need it can be placed in areas in need, thereby optimizing its use.
Around the world, precision agriculture developed at a varying pace. Precursor nations were the United States, Canada and Australia. In Europe, the United Kingdom was the first to go down this path, followed closely by France, where it first appeared in 1997–1998. In Latin America the leading country is Argentina , where it was introduced in the mid-1990s with the support of the National Agricultural Technology Institute . Brazil established a state-owned enterprise, Embrapa , to research and develop sustainable agriculture. The development of GPS and variable-rate spreading techniques helped to anchor precision farming [ 30 ] management practices. Today, less than 10% of France's farmers are equipped with variable-rate systems, and while uptake of GPS is more widespread, many farmers also rely on precision agriculture services, which supply field-level recommendation maps. [ 31 ]
While digital technologies can transform the landscape of agricultural machinery, making mechanization both more precise and more accessible, non-mechanized production is still dominant in many low- and middle-income countries, especially in sub-Saharan Africa. [ 4 ] [ 5 ] Research on precision agriculture for non-mechanized production is increasing and so is its adoption. [ 32 ] [ 33 ] [ 34 ] Examples include the AgroCares hand-held soil scanner, uncrewed aerial vehicle (UAV) services (also known as drones), and GNSS to map field boundaries and establish land tenure. [ 35 ] However, it is not clear how many agricultural producers actually use digital technologies. [ 35 ] [ 36 ]
Precision livestock farming supports farmers in real-time by continuously monitoring and controlling animal productivity, environmental impacts, and health and welfare parameters. [ 37 ] Sensors attached to animals or to barn equipment operate climate control and monitor animals’ health status, movement and needs. For example, cows can be tagged with the electronic identification (EID) that allows a milking robot to access a database of udder coordinates for specific cows. [ 38 ] Global automatic milking system sales have increased over recent years, [ 39 ] but adoption is likely mostly in Northern Europe, [ 40 ] and likely almost absent in low- and middle-income countries. [ 41 ] Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce. [ 4 ] [ 5 ]
The economic and environmental benefits of precision agriculture have also been confirmed in China, but China lags behind Europe and the United States because the Chinese agricultural system is characterized by small-scale, family-run farms, which makes the adoption rate of precision agriculture lower than in other countries. China is therefore trying to introduce precision agriculture technology more effectively and to reduce some of the associated risks, paving the way for the future development of precision agriculture in the country. [ 42 ]
In December 2014, the Russian President made an address to the Russian Parliament where he called for a National Technology Initiative (NTI). It is divided into subcomponents such as the FoodNet initiative. The FoodNet initiative contains a set of declared priorities, such as precision agriculture. This field is of special interest to Russia as an important tool in developing elements of the bioeconomy in Russia. [ 43 ] [ 44 ]
Precision agriculture, as the name implies, means the application of precise and correct amounts of inputs like water, fertilizer, pesticides, etc. at the correct time to the crop to increase its productivity and maximize its yields. Precision agriculture management practices can significantly reduce the amount of nutrient and other crop inputs used while boosting yields. [ 45 ] Farmers thus obtain a return on their investment by saving on water, pesticide, and fertilizer costs.
The second, larger-scale benefit of targeting inputs concerns environmental impacts. Applying the right amount of chemicals in the right place and at the right time benefits crops, soils and groundwater, and thus the entire crop cycle. [ 46 ] Consequently, precision agriculture has become a cornerstone of sustainable agriculture , since it respects crops, soils and farmers. Sustainable agriculture seeks to assure a continued supply of food within the ecological, economic and social limits required to sustain production in the long term.
A 2013 article tried to show that precision agriculture can help farmers in developing countries like India. [ 47 ]
Precision agriculture reduces the pressure of agriculture on the environment by increasing the efficiency of machinery and putting it into use. For example, the use of remote management devices such as GPS reduces fuel consumption for agriculture, while variable rate application of nutrients or pesticides can potentially reduce the use of these inputs, thereby saving costs and reducing harmful runoff into the waterways. [ 48 ]
GPS guidance also reduces soil compaction, since equipment follows previously made guidance lines. This also allows for less time in the field and reduces the environmental impact of the equipment and chemicals.
Precision agriculture produces large quantities of varied sensing data which creates an opportunity to adapt and reuse such data for archaeology and heritage work, enhancing understanding of archaeology in contemporary agricultural landscapes. [ 49 ]
Precision agriculture is an application of breakthrough digital farming technologies. Over $4.6 billion has been invested in agriculture tech companies—sometimes called agtech. [ 18 ]
Self-steering tractors have existed for some time now, as John Deere equipment works like a plane on autopilot . The tractor does most of the work, with the farmer stepping in for emergencies. [ 46 ] Technology is advancing towards driverless machinery programmed by GPS to spread fertilizer or plow land. Autonomy of technology is driven by the demanding need for diagnoses, often difficult to accomplish solely by hands-on farmer-operated machinery. In many instances of high rates of production, manual adjustments cannot be sustained. [ 50 ] Other innovations include, partly solar powered, machines/robots that identify weeds and precisely kill them with a dose of a herbicide or lasers . [ 46 ] [ 51 ] [ 52 ]
Agricultural robots , also known as AgBots, already exist, but advanced harvesting robots are being developed to identify ripe fruits, adjust to their shape and size, and carefully pluck them from branches. [ 53 ]
Drone and satellite technology are used in precision farming. This often occurs when drones take high-quality images while satellites capture the bigger picture. Aerial photography from light aircraft can be combined with data from satellite records to predict future yields based on the current level of field biomass . Aggregated images can create contour maps to track where water flows, determine variable-rate seeding, and create yield maps of areas that were more or less productive. [ 46 ]
The Internet of things is the network of physical objects outfitted with electronics that enable data collection and aggregation. IoT comes into play with the development of sensors [ 54 ] and farm-management software. For example, farmers can spectroscopically measure nitrogen, phosphorus, and potassium in liquid manure , which is notoriously inconsistent. [ 46 ] They can then scan the ground to see where cows have already urinated and apply fertilizer to only the spots that need it. This cuts fertilizer use by up to 30%. [ 53 ] Moisture sensors [ 55 ] in the soil determine the best times to remotely water plants. The irrigation systems can be programmed to switch which side of the tree trunk they water based on the plant's need and rainfall. [ 46 ]
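A minimal sketch of the kind of decision logic such a sensor-driven system might apply is shown below; the sensor names and the 20% volumetric-water-content threshold are illustrative assumptions rather than values from any particular product.

```python
from dataclasses import dataclass

@dataclass
class MoistureReading:
    sensor_id: str
    volumetric_water_content: float  # fraction, 0.0–1.0

def needs_irrigation(reading, threshold=0.20):
    """Decide whether to trigger irrigation for the zone a sensor covers.
    The threshold is an illustrative assumption; real systems would tune it
    per soil type and crop."""
    return reading.volumetric_water_content < threshold

readings = [
    MoistureReading("zone-north", 0.15),
    MoistureReading("zone-south", 0.31),
]
for r in readings:
    action = "irrigate" if needs_irrigation(r) else "hold"
    print(r.sensor_id, action)
```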
Innovations are not just limited to plants—they can be used for the welfare of animals. Cattle can be outfitted with internal sensors to keep track of stomach acidity and digestive problems. External sensors track movement patterns to determine the cow's health and fitness, sense physical injuries, and identify the optimal times for breeding. [ 46 ] All this data from sensors can be aggregated and analyzed to detect trends and patterns.
As another example, monitoring technology can be used to make beekeeping more efficient. Honeybees are of significant economic value and provide a vital service to agriculture by pollinating a variety of crops. Monitoring of a honeybee colony's health via wireless temperature, humidity, and CO 2 sensors helps to improve the productivity of bees, and to read early warnings in the data that might threaten the very survival of an entire hive. [ 56 ]
Smartphone and tablet applications are becoming increasingly popular in precision agriculture. Smartphones come with many useful applications already installed, including the camera, microphone, GPS, and accelerometer. There are also applications made dedicated to various agriculture applications such as field mapping, tracking animals, obtaining weather and crop information, and more. They are easily portable, affordable, and have high computing power. [ 57 ]
Machine learning is commonly used in conjunction with drones, robots, and internet of things devices. It allows for the input of data from each of these sources. The computer then processes this information and sends the appropriate actions back to these devices. This allows for robots to deliver the perfect amount of fertilizer or for IoT devices to provide the perfect quantity of water directly to the soil. [ 58 ] Machine learning may also provide predictions to farmers at the point of need, such as the contents of plant-available nitrogen in soil , to guide fertilization planning. [ 59 ] As more agriculture becomes ever more digital, machine learning will underpin efficient and precise farming with less manual labour.
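The following sketch illustrates the general idea of such a prediction model using synthetic data; the soil features, the invented relationship to plant-available nitrogen, and the choice of a random-forest regressor are all illustrative assumptions, not a description of any published system.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are field samples, columns are illustrative
# soil-sensor features (moisture, pH, electrical conductivity, organic matter).
rng = np.random.default_rng(0)
X = rng.uniform([0.05, 5.0, 0.1, 1.0], [0.40, 8.0, 2.0, 6.0], size=(200, 4))
# Invented relationship standing in for plant-available nitrogen (kg/ha).
y = 40 * X[:, 3] + 15 * X[:, 0] - 3 * (X[:, 1] - 6.5) ** 2 + rng.normal(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")
```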
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 ( license statement/permission ). Text taken from In Brief to The State of Food and Agriculture 2022 – Leveraging automation in agriculture for transforming agrifood systems , FAO, FAO.
| https://en.wikipedia.org/wiki/Precision_agriculture |
Personalized medicine , also referred to as precision medicine , is a medical model that separates people into different groups —with medical decisions , practices , interventions and/or products being tailored to the individual patient based on their predicted response or risk of disease . The terms personalized medicine, precision medicine, stratified medicine and P4 medicine are used interchangeably to describe this concept, though some authors and organizations differentiate between these expressions based on particular nuances. P4 is short for "predictive, preventive, personalized and participatory".
While the tailoring of treatment to patients dates back at least to the time of Hippocrates , the usage of the term has risen in recent years thanks to the development of new diagnostic and informatics approaches that provide an understanding of the molecular basis of disease , particularly genomics . This provides a clear biomarker on which to stratify related patients. [ 1 ] [ 2 ] [ 3 ]
Among the 14 Grand Challenges for Engineering , an initiative sponsored by National Academy of Engineering (NAE), personalized medicine has been identified as a key and prospective approach to "achieve optimal individual health decisions", therefore overcoming the challenge to " engineer better medicines ". [ 4 ] [ 5 ]
In personalised medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the patient's genetics or their other molecular or cellular characteristics. [ citation needed ] The use of genetic information has played a major role in certain aspects of personalized medicine (e.g. pharmacogenomics ), and the term was first coined in the context of genetics, though it has since broadened to encompass all sorts of personalization measures, [ 6 ] including the use of proteomics , [ 7 ] imaging analysis, nanoparticle -based theranostics, [ 8 ] among others.
Precision medicine is a medical model that proposes the customization of healthcare , with medical decisions, treatments, practices, or products being tailored to a subgroup of patients, instead of a one‐drug‐fits‐all model. [ 9 ] [ 10 ] In precision medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis. [ 11 ] Tools employed in precision medicine can include molecular diagnostics , imaging, and analytics. [ 10 ] [ 12 ]
Precision medicine and personalized medicine (also individualized medicine) are analogous, applying a person's genetic profile to guide clinical decisions about the prevention, diagnosis, and treatment of a disease. [ 13 ] Personalized medicine is established on discoveries from the Human Genome Project . [ 13 ]
In explaining the distinction from the similar term of personalized medicine , the United States President's Council of Advisors on Science and Technology writes: [ 14 ]
Precision medicine refers to the tailoring of medical treatment to the individual characteristics of each patient. It does not literally mean the creation of drugs or medical devices that are unique to a patient, but rather the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease, in the biology or prognosis of those diseases they may develop, or in their response to a specific treatment. Preventive or therapeutic interventions can then be concentrated on those who will benefit, sparing expense and side effects for those who will not. [ 14 ]
The use of the term "precision medicine" can extend beyond treatment selection to also cover creating unique medical products for particular individuals—for example, "...patient-specific tissue or organs to tailor treatments for different people." [ 15 ] Hence, the term in practice has so much overlap with "personalized medicine" that they are often used interchangeably, even though the latter is sometimes misinterpreted as involving a unique treatment for each individual. [ 16 ]
Every person has a unique variation of the human genome . [ 17 ] Although most of the variation between individuals has no effect on health, an individual's health stems from genetic variation combined with behaviors and influences from the environment. [ 18 ] [ 11 ]
Modern advances in personalized medicine rely on technology that confirms a patient's fundamental biology, DNA , RNA , or protein , which ultimately leads to confirming disease. For example, personalised techniques such as genome sequencing can reveal mutations in DNA that influence diseases ranging from cystic fibrosis to cancer. Another method, called RNA-seq , can show which RNA molecules are involved with specific diseases. Unlike DNA, levels of RNA can change in response to the environment. Therefore, sequencing RNA can provide a broader understanding of a person's state of health. Recent studies have linked genetic differences between individuals to RNA expression , [ 19 ] translation, [ 20 ] and protein levels. [ 21 ]
The concepts of personalised medicine can be applied to new and transformative approaches to health care. Personalised health care is based on the dynamics of systems biology and uses predictive tools to evaluate health risks and to design personalised health plans to help patients mitigate risks, prevent disease and treat it with precision when it occurs. The concepts of personalised health care are receiving increasing acceptance, with the Veterans Administration committing to personalised, proactive patient-driven care for all veterans. [ 22 ] In some instances personalised health care can be tailored to the makeup of the disease-causing agent instead of the patient's genetic makeup; examples are drug-resistant bacteria or viruses. [ 23 ]
Precision medicine often involves the application of panomic analysis and systems biology to analyze the cause of an individual patient's disease at the molecular level and then to utilize targeted treatments (possibly in combination) to address that individual patient's disease process. The patient's response is then tracked as closely as possible, often using surrogate measures such as tumor load (versus true outcomes, such as five-year survival rate), and the treatment finely adapted to the patient's response. [ 24 ] [ 25 ] The branch of precision medicine that addresses cancer is referred to as "precision oncology". [ 26 ] [ 27 ] The field of precision medicine that is related to psychiatric disorders and mental health is called "precision psychiatry." [ 28 ] [ 29 ]
Inter-personal differences in molecular pathology are diverse, as are inter-personal differences in the exposome , which influences disease processes through the interactome within the tissue microenvironment , differently from person to person. As the theoretical basis of precision medicine, the "unique disease principle" [ 30 ] emerged to embrace the ubiquitous phenomenon of heterogeneity of disease etiology and pathogenesis . The unique disease principle was first described in neoplastic diseases as the unique tumor principle. [ 31 ] As the exposome is a common concept of epidemiology , precision medicine is intertwined with molecular pathological epidemiology , which is capable of identifying potential biomarkers for precision medicine. [ 32 ]
In order for physicians to know if a mutation is connected to a certain disease, researchers often do a study called a " genome-wide association study " (GWAS). A GWAS study will look at one disease, and then sequence the genome of many patients with that particular disease to look for shared mutations in the genome. Mutations that are determined to be related to a disease by a GWAS study can then be used to diagnose that disease in future patients, by looking at their genome sequence to find that same mutation. The first GWAS, conducted in 2005, studied patients with age-related macular degeneration (ARMD). [ 33 ] It found two different mutations, each consisting of a variation in only one nucleotide (called single nucleotide polymorphisms , or SNPs), which were associated with ARMD. GWAS studies like this have been very successful in identifying common genetic variations associated with diseases. As of early 2014, over 1,300 GWAS studies have been completed. [ 34 ]
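At its core, testing whether a SNP is associated with a disease can be illustrated with a simple contingency-table test comparing allele counts in cases and controls. The counts below are invented for illustration; real GWAS apply tests of this kind across hundreds of thousands of SNPs with stringent multiple-testing corrections.

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts at one SNP:
# rows = cases / controls, columns = risk allele / other allele.
table = [
    [720, 1280],   # cases
    [600, 1400],   # controls
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
```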
Multiple genes collectively influence the likelihood of developing many common and complex diseases. [ 18 ] Personalised medicine can also be used to predict a person's risk for a particular disease, based on one or even several genes. This approach uses the same sequencing technology to focus on the evaluation of disease risk, allowing the physician to initiate preventive treatment before the disease presents itself in their patient. For example, if it is found that a DNA mutation increases a person's risk of developing Type 2 Diabetes , this individual can begin lifestyle changes that will lessen their chances of developing Type 2 Diabetes later in life. [ citation needed ]
The ability to provide precision medicine to patients in routine clinical settings depends on the availability of molecular profiling tests, e.g. individual germline DNA sequencing. [ 35 ] While precision medicine currently individualizes treatment mainly on the basis of genomic tests (e.g. Oncotype DX [ 36 ] ), several promising technology modalities are being developed, from techniques combining spectrometry and computational power to real-time imaging of drug effects in the body. [ 37 ] Many different aspects of precision medicine are tested in research settings (e.g., proteome, microbiome), but in routine practice not all available inputs are used. The ability to practice precision medicine is also dependent on the knowledge bases available to assist clinicians in taking action based on test results. [ 38 ] [ 39 ] [ 40 ] Early studies applying omics-based precision medicine to cohorts of individuals with undiagnosed disease have yielded a diagnosis rate of ~35%, with ~1 in 5 of those newly diagnosed receiving recommendations regarding changes in therapy. [ 41 ] It has been suggested that until pharmacogenetics becomes further developed and able to predict individual treatment responses, N-of-1 trials are the best method of identifying patients who respond to treatments. [ 42 ] [ 43 ]
On the treatment side, PM can involve the use of customized medical products such as drug cocktails produced by pharmacy compounding [ 44 ] or customized devices. [ 45 ] It can also prevent harmful drug interactions, increase overall efficiency when prescribing medications, and reduce costs associated with healthcare. [ 46 ]
The question of who benefits from publicly funded genomics is an important public health consideration, and attention is needed to ensure that implementation of genomic medicine does not further entrench social‐equity concerns. [ 47 ]
Artificial intelligence is providing a paradigm shift toward precision medicine. [ 48 ] Machine learning algorithms are used to analyze genomic sequences and to draw inferences from the vast amounts of data that patients and healthcare institutions record at every moment. [ 49 ] AI techniques are used in precision cardiovascular medicine to understand genotypes and phenotypes in existing diseases, improve the quality of patient care, enable cost-effectiveness, and reduce readmission and mortality rates. [ 50 ] A 2021 paper reported that machine learning was able to predict the outcomes of Phase III clinical trials (for treatment of prostate cancer) with 76% accuracy. [ 51 ] This suggests that clinical trial data could provide a practical source for machine learning-based tools for precision medicine. [ citation needed ]
Precision medicine may be susceptible to subtle forms of algorithmic bias . For example, the presence of multiple entry fields with values entered by multiple observers can create distortions in the ways data are understood and interpreted. [ 52 ] A 2020 paper showed that training machine learning models in a population-specific fashion (i.e. training models specifically for Black cancer patients) can yield significantly better performance than population-agnostic models. [ 53 ]
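A minimal sketch of that population-specific training idea is shown below, using scikit-learn on entirely synthetic data; it illustrates only the design choice of fitting one model per patient subgroup versus a single pooled model, and is not the method or data of the cited study.

```python
# Sketch of population-specific vs. population-agnostic model training on synthetic data.
# This only illustrates the design choice, not the method or data of the cited study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_features = 2000, 20
X = rng.normal(size=(n, n_features))
group = rng.integers(0, 2, size=n)                     # two synthetic patient populations
w0, w1 = rng.normal(size=n_features), rng.normal(size=n_features)
logits = np.where(group == 0, X @ w0, X @ w1)          # outcome depends on group-specific effects
y = (logits + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

# Population-agnostic model: a single model fitted on everyone.
pooled = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Population-specific models: one model fitted per group.
per_group = {g: LogisticRegression(max_iter=1000).fit(X_tr[g_tr == g], y_tr[g_tr == g])
             for g in (0, 1)}

correct = 0
for g in (0, 1):
    mask = g_te == g
    correct += (per_group[g].predict(X_te[mask]) == y_te[mask]).sum()

print(f"pooled accuracy:              {pooled.score(X_te, y_te):.2f}")
print(f"population-specific accuracy: {correct / len(y_te):.2f}")
```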
In his 2015 State of the Union address , then- U.S. President Barack Obama stated his intention to give $215 million [ 54 ] of funding to the " Precision Medicine Initiative " of the United States National Institutes of Health . [ 55 ] A short-term goal of this initiative was to expand cancer genomics to develop better prevention and treatment methods. [ 56 ] In the long term, the Precision Medicine Initiative aimed to build a comprehensive scientific knowledge base by creating a national network of scientists and embarking on a national cohort study of one million Americans to expand our understanding of health and disease. [ 57 ] The mission statement of the Precision Medicine Initiative read: "To enable a new era of medicine through research, technology, and policies that empower patients, researchers, and providers to work together toward development of individualized treatments". [ 58 ] In 2016 this initiative was renamed to "All of Us" and by January 2018, 10,000 people had enrolled in its pilot phase . [ 59 ]
Precision medicine helps health care providers better understand the many things—including environment, lifestyle, and heredity—that play a role in a patient's health, disease, or condition. This information lets them more accurately predict which treatments will be most effective and safe, or possibly how to prevent the illness from starting in the first place. In addition, a number of further benefits have been claimed, some of which are discussed below. [ citation needed ]
Advances in personalised medicine will create a more unified treatment approach specific to the individual and their genome. Personalised medicine may provide better diagnoses with earlier intervention, more efficient drug development, and more targeted therapies. [ 60 ]
Having the ability to look at a patient on an individual basis will allow for a more accurate diagnosis and a specific treatment plan. Genotyping is the process of obtaining an individual's DNA sequence by using biological assays . [ 61 ] With a detailed account of an individual's DNA sequence, their genome can be compared to a reference genome, such as that of the Human Genome Project , to assess the existing genetic variations that can account for possible diseases. A number of private companies, such as 23andMe , Navigenics , and Illumina , have created direct-to-consumer genome sequencing accessible to the public. [ 17 ] This information can then be applied to treat individuals effectively. An individual's genetic make-up also plays a large role in how well they respond to a certain treatment, and therefore, knowing their genetic content can change the type of treatment they receive. [ citation needed ]
An aspect of this is pharmacogenomics , which uses an individual's genome to provide a more informed and tailored drug prescription. [ 62 ] Often, drugs are prescribed with the idea that they will work roughly the same way for everyone, but in the application of drugs there are a number of factors that must be considered. A detailed account of genetic information from the individual will help prevent adverse events, allow for appropriate dosages, and create maximum efficacy with drug prescriptions. [ 17 ] For instance, warfarin is an FDA-approved oral anticoagulant commonly prescribed to patients with blood clots. Due to warfarin 's significant interindividual variability in pharmacokinetics and pharmacodynamics , its rate of adverse events is among the highest of all commonly prescribed drugs. [ 4 ] However, with the discovery of polymorphic variants in the CYP2C9 and VKORC1 genotypes, two genes that influence the individual anticoagulant response, [ 63 ] [ 64 ] physicians can use a patient's gene profile to prescribe optimum doses of warfarin, preventing side effects such as major bleeding and achieving therapeutic efficacy sooner and more reliably. [ 4 ] The pharmacogenomic process for discovery of genetic variants that predict adverse events to a specific drug has been termed toxgnostics . [ 65 ]
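How genotype information can feed into such a dosing decision can be sketched as a simple lookup of dose-adjustment factors. The starting dose, the adjustment factors, and the function below are invented for this illustration and are not the clinically validated warfarin-dosing algorithms; the sketch only shows the shape of a genotype-guided calculation.

```python
# Purely illustrative sketch: genotype-guided dose adjustment as a lookup table.
# The starting dose and adjustment factors are invented for this example and are
# NOT the clinically validated warfarin-dosing algorithms.

STANDARD_START_MG_PER_DAY = 5.0   # hypothetical population-average starting dose

CYP2C9_FACTORS = {"*1/*1": 1.0, "*1/*2": 0.8, "*1/*3": 0.6, "*3/*3": 0.4}   # hypothetical
VKORC1_FACTORS = {"GG": 1.0, "AG": 0.8, "AA": 0.6}                          # hypothetical

def illustrative_start_dose(cyp2c9: str, vkorc1: str) -> float:
    """Return a hypothetical genotype-adjusted starting dose in mg/day."""
    return (STANDARD_START_MG_PER_DAY
            * CYP2C9_FACTORS[cyp2c9]
            * VKORC1_FACTORS[vkorc1])

# A patient carrying reduced-function variants of both genes would be started on a
# lower dose to reduce bleeding risk: 5.0 * 0.6 * 0.6 = 1.8 mg/day.
print(illustrative_start_dose("*1/*3", "AA"))
```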
An aspect of a theranostic platform applied to personalized medicine can be the use of diagnostic tests to guide therapy. The tests may involve medical imaging such as MRI contrast agents (T1 and T2 agents), fluorescent markers ( organic dyes and inorganic quantum dots ), and nuclear imaging agents ( PET radiotracers or SPECT agents), [ 8 ] [ 66 ] or in vitro laboratory tests, [ 67 ] including DNA sequencing, [ 68 ] and often involve deep learning algorithms that weigh the results of testing for several biomarkers . [ 69 ]
In addition to specific treatment, personalised medicine can greatly aid the advancement of preventive care. For instance, many women are already being genotyped for certain mutations in the BRCA1 and BRCA2 genes if they are predisposed because of a family history of breast cancer or ovarian cancer. [ 70 ] The more causes of disease are mapped to mutations that exist within the genome, the more easily they can be identified in an individual. Measures can then be taken to prevent a disease from developing. Even if mutations are found within a genome, having the details of one's DNA can reduce the impact or delay the onset of certain diseases. [ 60 ] Having the genetic content of an individual will allow better-guided decisions in determining the source of the disease and thus treating it or preventing its progression. This will be extremely useful for diseases like Alzheimer 's or cancers that are thought to be linked to certain mutations in our DNA. [ 60 ]
A tool that is being used now to test the efficacy and safety of a drug specific to a targeted patient group or sub-group is companion diagnostics . This technology is an assay that is developed during or after a drug is made available on the market and is helpful in enhancing the therapeutic treatment available for the individual. [ 71 ] These companion diagnostics have incorporated the pharmacogenomic information related to the drug into their prescription label in an effort to assist in making the optimal treatment decision for the patient. [ 71 ]
Having an individual's genomic information can be significant in the process of developing drugs as they await approval from the FDA for public use. A detailed account of an individual's genetic make-up can be a major asset in deciding whether a patient should be included in or excluded from the final stages of a clinical trial. [ 60 ] Being able to identify patients who will benefit most from a clinical trial will increase the safety of patients from adverse outcomes caused by the product in testing, and will allow smaller and faster trials that lead to lower overall costs. [ 72 ] In addition, drugs that are deemed ineffective for the larger population can gain FDA approval by using personal genomes to qualify the effectiveness and need for that specific drug or therapy, even though it may only be needed by a small percentage of the population. [ 60 ] [ 73 ]
Physicians commonly use a trial-and-error strategy until they find the treatment therapy that is most effective for their patient. [ 60 ] With personalized medicine, these treatments can be more specifically tailored by predicting how an individual's body will respond and whether the treatment will work, based on their genome. [ 17 ] This has been summarized as "therapy with the right drug at the right dose in the right patient." [ 74 ] Such an approach would also be more cost-effective and accurate. [ 60 ] For instance, Tamoxifen used to be a drug commonly prescribed to women with ER+ breast cancer, but 65% of women initially taking it developed resistance. After research by people such as David Flockhart , it was discovered that women with certain mutations in their CYP2D6 gene, which encodes the metabolizing enzyme, were not able to efficiently break down Tamoxifen, making it an ineffective treatment for them. [ 75 ] Women are now genotyped for these specific mutations so that the most effective treatment can be selected. [ citation needed ]
Screening for these mutations is carried out via high-throughput screening or phenotypic screening . Several drug discovery and pharmaceutical companies are currently utilizing these technologies to not only advance the study of personalised medicine, but also to amplify genetic research . Alternative multi-target approaches to the traditional approach of "forward" transfection library screening can entail reverse transfection or chemogenomics . [ citation needed ]
Pharmacy compounding is another application of personalised medicine. Though not necessarily using genetic information, the customized production of a drug whose various properties (e.g. dose level, ingredient selection, route of administration, etc.) are selected and crafted for an individual patient is accepted as an area of personalised medicine (in contrast to mass-produced unit doses or fixed-dose combinations) . Computational and mathematical approaches for predicting drug interactions are also being developed. For example, phenotypic response surfaces model the relationships between drugs, their interactions, and an individual's biomarkers. [ citation needed ]
One active area of research is efficiently delivering personalized drugs generated from pharmacy compounding to the disease sites of the body. [ 5 ] For instance, researchers are trying to engineer nanocarriers that can precisely target a specific site by using real-time imaging and analyzing the pharmacodynamics of the drug delivery . [ 76 ] Several candidate nanocarriers are being investigated, such as iron oxide nanoparticles , quantum dots , carbon nanotubes , gold nanoparticles , and silica nanoparticles. [ 8 ] Alteration of surface chemistry allows these nanoparticles to be loaded with drugs, as well as to avoid the body's immune response, making nanoparticle-based theranostics possible. [ 5 ] [ 8 ] Nanocarriers' targeting strategies vary according to the disease. For example, if the disease is cancer, a common approach is to identify a biomarker expressed on the surface of cancer cells and to load the associated targeting vector onto the nanocarrier to achieve recognition and binding; the size of the nanocarriers is also engineered to exploit the enhanced permeability and retention (EPR) effect in tumor targeting. [ 8 ] If the disease is localized to a specific organ, such as the kidney, the surface of the nanocarriers can be coated with a ligand that binds to receptors inside that organ to achieve organ-targeted drug delivery and avoid non-specific uptake. [ 77 ] Despite the great potential of this nanoparticle-based drug delivery system, significant progress in the field is yet to be made, and nanocarriers are still being investigated and modified to meet clinical standards. [ 8 ] [ 76 ]
Theranostics is a personalized approach in nuclear medicine , using similar molecules for both imaging (diagnosis) and therapy. [ 78 ] [ 79 ] [ 80 ] The term is a portmanteau of " therapeutics " and " diagnostics ". Its most common applications are attaching radionuclides (either gamma or positron emitters) to molecules for SPECT or PET imaging, or electron emitters for radiotherapy . [ citation needed ] One of the earliest examples is the use of radioactive iodine for treatment of people with thyroid cancer . [ 78 ] Other examples include radio-labelled anti- CD20 antibodies (e.g. Bexxar ) for treating lymphoma , Radium-223 for treating bone metastases , Lutetium-177 DOTATATE for treating neuroendocrine tumors and Lutetium-177 PSMA for treating prostate cancer . [ 78 ] A commonly used reagent is fluorodeoxyglucose , using the isotope fluorine-18 . [ 81 ]
Respiratory diseases affect humanity globally, with chronic lung diseases (e.g., asthma, chronic obstructive pulmonary disease, idiopathic pulmonary fibrosis, among others) and lung cancer causing extensive morbidity and mortality. These conditions are highly heterogeneous and require an early diagnosis. However, initial symptoms are nonspecific, and the clinical diagnosis is frequently made late. Over the last few years, personalized medicine has emerged as a medical-care approach that uses novel technology [ 7 ] aiming to personalize treatments according to the particular patient's medical needs. Specifically, proteomics is used to analyze a series of protein expressions, instead of a single biomarker . [ 82 ] Proteins control the body's biological activities in health and disease, so proteomics is helpful in early diagnosis. In the case of respiratory disease, proteomics analyzes several biological samples including serum, blood cells, bronchoalveolar lavage fluids (BAL), nasal lavage fluids (NLF), and sputum, among others. [ 82 ] The identification and quantification of complete protein expression from these biological samples are conducted by mass spectrometry and advanced analytical techniques. [ 83 ] Respiratory proteomics has made significant progress in the development of personalized medicine for supporting health care in recent years. For example, in a study conducted by Lazzari et al. in 2012, a proteomics-based approach made substantial improvement in identifying multiple biomarkers of lung cancer that can be used to tailor personalized treatments for individual patients. [ 84 ] More and more studies have demonstrated the usefulness of proteomics in providing targeted therapies for respiratory disease. [ 82 ]
Over recent decades cancer research has discovered a great deal about the genetic variety of types of cancer that appear the same in traditional pathology . There has also been increasing awareness of tumor heterogeneity , or genetic diversity within a single tumor. Among other prospects, these discoveries raise the possibility of finding that drugs that have not given good results applied to a general population of cases may yet be successful for a proportion of cases with particular genetic profiles.
Personalized oncogenomics is the application of personalized medicine to cancer genomics. High-throughput sequencing methods are used to characterize genes associated with cancer to better understand disease pathology and improve drug development . Oncogenomics is one of the most promising branches of genomics , particularly because of its implications in drug therapy.
Through the use of genomics ( microarray ), proteomics (tissue array), and imaging ( fMRI , micro-CT ) technologies, molecular-scale information about patients can be easily obtained. These so-called molecular biomarkers have proven powerful in disease prognosis, such as with cancer. [ 89 ] [ 90 ] [ 91 ] The three main areas of cancer prediction are cancer recurrence, cancer susceptibility, and cancer survivability. [ 92 ] Combining molecular-scale information with macro-scale clinical data, such as a patient's tumor type and other risk factors, significantly improves prognosis. [ 92 ] Consequently, given the use of molecular biomarkers, especially genomics, cancer prognosis or prediction has become very effective, especially when screening a large population. [ 93 ] Essentially, population genomics screening can be used to identify people at risk for disease, which can assist in preventative efforts. [ 93 ]
Genetic data can be used to construct polygenic scores , which estimate traits such as disease risk by summing the estimated effects of individual variants discovered through a GWAS. These have been used for a wide variety of conditions, such as cancer, diabetes, and coronary artery disease. [ 94 ] [ 95 ] Many genetic variants are associated with ancestry, and it remains a challenge to both generate accurate estimates and to decouple biologically relevant variants from those that are coincidentally associated. Estimates generated from one population do not usually transfer well to others, requiring sophisticated methods and more diverse and global data. [ 96 ] [ 97 ] Most studies have used data from those with European ancestry, leading to calls for more equitable genomics practices to reduce health disparities. [ 98 ] Additionally, while polygenic scores have some predictive accuracy, their interpretations are limited to estimating an individual's percentile and translational research is needed for clinical use. [ 99 ]
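Conceptually, a polygenic score is a weighted sum of an individual's allele dosages, with weights taken from GWAS effect-size estimates. The sketch below uses made-up variant identifiers, weights, and dosages; practical pipelines additionally handle strand alignment, linkage disequilibrium, and ancestry-specific calibration.

```python
# Minimal sketch of a polygenic score: sum of (allele dosage x estimated effect size).
# Variant identifiers, effect sizes, and dosages are made up for illustration.

gwas_effects = {        # per-allele effect estimates (e.g. log odds ratios) from a GWAS
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

individual_dosages = {  # number of effect alleles carried by one individual (0, 1 or 2)
    "rs0001": 2,
    "rs0002": 1,
    "rs0003": 0,
}

polygenic_score = sum(effect * individual_dosages.get(snp, 0)
                      for snp, effect in gwas_effects.items())
print(polygenic_score)  # 0.12*2 + (-0.05)*1 + 0.30*0 = 0.19

# In practice the raw score is interpreted relative to a reference population,
# for example as a percentile, rather than as an absolute risk.
```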
As personalised medicine is practiced more widely, a number of challenges arise. The current approaches to intellectual property rights, reimbursement policies, patient privacy, data biases and confidentiality as well as regulatory oversight will have to be redefined and restructured to accommodate the changes personalised medicine will bring to healthcare. [ 100 ] For instance, a survey performed in the UK concluded that 63% of UK adults are not comfortable with their personal data being used for the sake of utilizing AI in the medical field. [ 101 ] Furthermore, the analysis of acquired diagnostic data is a recent challenge of personalized medicine and its implementation. [ 38 ] For example, genetic data obtained from next-generation sequencing requires computer-intensive data processing prior to its analysis. [ 102 ] In the future, adequate tools will be required to accelerate the adoption of personalised medicine to further fields of medicine, which requires the interdisciplinary cooperation of experts from specific fields of research, such as medicine , clinical oncology , biology , and artificial intelligence . [ citation needed ]
The U.S. Food and Drug Administration (FDA) has started taking initiatives to integrate personalised medicine into their regulatory policies . In October 2013, the agency published a report entitled " Paving the Way for Personalized Medicine: FDA's role in a New Era of Medical Product Development ," in which they outlined steps they would have to take to integrate genetic and biomarker information for clinical use and drug development. [ 72 ] These included developing specific regulatory standards , research methods and reference materials . [ 72 ] An example of the latter category they were working on is a "genomic reference library", aimed at improving quality and reliability of different sequencing platforms. [ 72 ] A major challenge for those regulating personalized medicine is a way to demonstrate its effectiveness relative to the current standard of care . [ 103 ] The new technology must be assessed for both clinical and cost effectiveness, and as of 2013, regulatory agencies had no standardized method. [ 103 ]
As with any innovation in medicine, investment and interest in personalised medicine are influenced by intellectual property rights. [ 100 ] There has been a lot of controversy regarding patent protection for diagnostic tools, genes, and biomarkers. [ 104 ] In June 2013, the U.S. Supreme Court ruled that naturally occurring genes cannot be patented, while "synthetic DNA" that is edited or artificially created can still be patented. The Patent Office is currently reviewing a number of issues related to patent laws for personalised medicine, such as whether "confirmatory" secondary genetic tests after an initial diagnosis can have full immunity from patent laws. Those who oppose patents argue that patents on DNA sequences are an impediment to ongoing research, while proponents point to the research exemption and stress that patents are necessary to entice and protect the financial investments required for commercial research and the development and advancement of services offered. [ 104 ]
Reimbursement policies will have to be redefined to fit the changes that personalised medicine will bring to the healthcare system. Some of the factors that should be considered are the level of efficacy of various genetic tests in the general population, cost-effectiveness relative to benefits, how to deal with payment systems for extremely rare conditions, and how to redefine the insurance concept of "shared risk" to incorporate the effect of the newer concept of "individual risk factors". [ 100 ] The study Barriers to the Use of Personalized Medicine in Breast Cancer examined two diagnostic tests, BRACAnalysis and Oncotype DX. These tests have turnaround times of over ten days, which can result in the tests failing and in delays to treatment. Patients are not reimbursed for these delays, with the result that the tests are not ordered. Ultimately, this leads to patients having to pay out-of-pocket for treatments because insurance companies do not want to accept the risks involved. [ 105 ]
Perhaps the most critical issue with the commercialization of personalised medicine is the protection of patients. One of the largest issues is the fear of, and potential consequences for, patients who are found through genetic testing to be predisposed to disease or to be non-responsive to certain treatments. This includes the psychological effects on patients of genetic testing results. The rights of family members who do not directly consent are another issue, considering that genetic predispositions and risks are inheritable. The implications for certain ethnic groups and the presence of a common allele would also have to be considered. [ 100 ]
Moreover, privacy issues arise at all layers of personalized medicine, from discovery to treatment. One of the leading issues is patients' consent to have their information used in genetic testing algorithms, primarily AI algorithms. The consent of the institution that provides the data to be used is of prominent concern as well. [ 101 ] In 2008, the Genetic Information Nondiscrimination Act (GINA) was passed in an effort to minimize the fear of patients participating in genetic research by ensuring that their genetic information will not be misused by employers or insurers. [ 100 ] On February 19, 2015, the FDA issued a press release titled "FDA permits marketing of first direct-to-consumer genetic carrier test for Bloom syndrome". [ 6 ]
Data biases also play an integral role in personalized medicine. It is important to ensure that the samples of genes being tested come from diverse populations, so that the data do not reproduce the human biases present in decision making. [ 106 ]
Consequently, if the algorithms designed for personalized medicine are biased, then the outcome of the algorithm will also be biased, because of the lack of genetic testing in certain populations. [ 107 ] For instance, the results of the Framingham Heart Study have led to biased outcomes in predicting the risk of cardiovascular disease. This is because the sample included only white participants, and when the results were applied to the non-white population, they were biased, with both overestimation and underestimation of cardiovascular disease risk. [ 108 ]
Several issues must be addressed before personalized medicine can be implemented. Very little of the human genome has been analyzed, and even if healthcare providers had access to a patient's full genetic information, very little of it could be effectively leveraged into treatment. [ 109 ] Challenges also arise when processing such large amounts of genetic data. Even with error rates as low as 1 per 100 kilobases, processing a human genome could yield roughly 30,000 errors (about 3 billion bases divided by 100,000 bases per error). [ 110 ] This many errors, especially when trying to identify specific markers, can make discoveries and verifiability difficult. There are methods to overcome this, but they are computationally taxing and expensive. There are also issues from an effectiveness standpoint, as after the genome has been processed, the function of the variations among genomes must be analyzed using genome-wide studies. While the impact of the SNPs discovered in these kinds of studies can be predicted, more work must be done to control for the vast amount of variation that can occur because of the size of the genome being studied. [ 110 ] In order to effectively move forward in this area, steps must be taken to ensure the data being analyzed are of good quality, and a wider view must be taken in terms of analyzing multiple SNPs for a phenotype. The most pressing issue for the implementation of personalized medicine is applying the results of genetic mapping to improve the healthcare system. This is not only because of the infrastructure and technology required for a centralized database of genome data, but also because the physicians who would have access to these tools would likely be unable to take full advantage of them. [ 110 ] In order to truly implement a personalized medicine healthcare system, there must be an end-to-end change.
The Copenhagen Institute for Futures Studies and Roche set up FutureProofing Healthcare, [ 111 ] which produces a Personalised Health Index rating different countries' performance against 27 indicators of personalised health across four categories called 'Vital Signs'. They have run conferences in many countries to examine their findings. [ 112 ] [ 113 ] | https://en.wikipedia.org/wiki/Precision_medicine
Preclinical or small-animal Single Photon Emission Computed Tomography ( SPECT ) is a radionuclide-based molecular imaging modality for small laboratory animals [ 1 ] (e.g. mice and rats). Although SPECT is a well-established imaging technique that has been in clinical use for decades, the limited resolution of clinical SPECT (~10 mm) stimulated the development of dedicated small-animal SPECT systems with sub-mm resolution. Unlike in the clinic, preclinical SPECT outperforms preclinical coincidence PET in terms of resolution (best spatial resolution of SPECT ≈ 0.25 mm, [ 2 ] PET ≈ 1 mm [ 3 ] [ 4 ] ) and, at the same time, allows fast dynamic imaging of animals (time frames of less than 15 s [ 5 ] ).
SPECT imaging requires administration of small quantities of γ-emitting radiolabeled molecules (commonly called " tracers ") into the animal prior to the image acquisition. These tracers are biochemically designed in such a way that they accumulate at target locations in the body. The radiation emitted by the tracer molecules (single γ-photons ) can be detected by gamma detectors and, after image reconstruction, results in a 3-dimensional image of the tracer distribution within the animal. Some key radioactive isotopes used in preclinical SPECT are 99m Tc , 123 I , 125 I , 131 I , 111 In , 67 Ga and 201 Tl .
Preclinical SPECT plays an important role in multiple areas of translational research [ 6 ] where SPECT can be used for non-invasive imaging of radiolabeled molecules, including antibodies, peptides, and nanoparticles. Among major areas of its applications are oncology, neurology, psychiatry, cardiology, orthopedics, pharmacology and internal medicine.
Due to the small size of the imaged animals (a mouse is about 3000 times smaller than a human measured by weight and volume), it is essential to have a high spatial resolution and detection efficiency for the preclinical scanner.
Looking at spatial resolution first, to see the same level of detail relative to, e.g., the size of the organs in a mouse as can be seen in a human, the spatial resolution of clinical SPECT needs to be improved by a factor of ∛3000 ≈ 15 or more, i.e. from roughly 10 mm to well below 1 mm. This obstacle forced scientists to look for a new imaging approach for preclinical SPECT, which was found in exploiting the pinhole imaging principle. [ 7 ]
A pinhole collimator consists of a piece of dense material containing only a single hole, which typically has the shape of a double cone. The first attempts to obtain high-resolution SPECT images of rodents were based on the use of pinhole collimators attached to conventional gamma cameras. [ 8 ] [ 9 ] In this way, by placing the object (e.g. a rodent) close to the aperture of the pinhole, one can achieve a high magnification of its projection on the detector surface and effectively compensate for the limited intrinsic resolution of the detector.
The combined effects of the finite aperture size and the limited intrinsic resolution are described by:
R = \sqrt{\left(\frac{d_e}{M}\right)^{2} + \left(\frac{R_i}{M}\right)^{2}} [ 10 ]
where d_e is the effective pinhole diameter, R_i is the intrinsic resolution of the detector, and M is the projection magnification factor.
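For a quick numerical illustration, the expression above can be evaluated for assumed parameter values (the numbers below are arbitrary examples, not the specification of any particular scanner); it shows how increasing the magnification compensates for the detector's limited intrinsic resolution.

```python
# Evaluate the system resolution formula given above for assumed, illustrative values.
import math

def system_resolution(d_e_mm: float, r_i_mm: float, magnification: float) -> float:
    """System resolution R (mm), evaluated exactly as the formula above is written."""
    return math.sqrt((d_e_mm / magnification) ** 2 + (r_i_mm / magnification) ** 2)

# Example: 0.6 mm effective pinhole diameter, 3.5 mm intrinsic detector resolution.
for M in (1, 5, 10):
    print(f"M = {M:2d}: R = {system_resolution(0.6, 3.5, M):.2f} mm")
# Higher magnification (animal closer to the pinhole) yields a finer system resolution.
```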
The resolution of a SPECT system based on the pinhole imaging principle can be improved by decreasing the effective pinhole diameter, by increasing the magnification factor (i.e. placing the animal closer to the pinhole relative to the detector), or by improving the intrinsic resolution of the detector.
The exact size, shape and material of the pinhole are important to obtain good imaging characteristics and is a subject of collimator design optimization studies via e.g. use of Monte Carlo simulations .
Modern preclinical SPECT scanners based on pinhole imaging can reach up to 0.25 mm spatial or 0.015 μL volumetric resolution for in vivo mouse imaging.
The detection efficiency or sensitivity of a preclinical pinhole SPECT system is determined by: [ 10 ] [ 11 ]
S = \frac{N d_e^{2}}{16 r_c^{2}}
where S is the detection efficiency (sensitivity), d_e is the effective pinhole diameter (including penetration), N is the total number of pinholes, and r_c is the collimator radius (i.e. the object-to-pinhole distance).
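As with the resolution formula, the sensitivity expression can be evaluated for assumed design parameters; the values below are arbitrary examples chosen only to show the order of magnitude and the scaling behaviour.

```python
# Evaluate the pinhole sensitivity formula above for assumed, illustrative parameters.

def sensitivity(n_pinholes: int, d_e_mm: float, r_c_mm: float) -> float:
    """Fractional detection efficiency S = N * d_e**2 / (16 * r_c**2)."""
    return n_pinholes * d_e_mm ** 2 / (16.0 * r_c_mm ** 2)

# Example: 75 pinholes of 0.6 mm effective diameter and a 25 mm collimator radius.
s = sensitivity(75, 0.6, 25.0)
# 1 MBq = 1e6 decays/s, so (assuming one emitted photon per decay) S maps to cps/MBq.
print(f"S = {s:.4f} ({s * 100:.2f} %, ~{s * 1e6:.0f} cps/MBq)")
# Larger or more numerous pinholes, or a smaller collimator radius, raise the sensitivity.
```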
The sensitivity can be improved by increasing the effective pinhole diameter, by increasing the number of pinholes, or by bringing the animal closer to the pinholes (discussed in the next paragraph).
Increasing the pinhole diameter has the possible drawback of degrading the spatial resolution.
Increasing the number of pinholes has possible drawbacks related to projection overlap: when multiple pinhole projections are projected onto a single detector surface, they can either overlap each other (multiplexing projections) or be fully separated (non-overlapping projections). Although pinhole collimators with multiplexing projections allow a higher sensitivity to be reached (by allowing a larger number of pinholes) compared to non-overlapping designs, they also suffer from multiple artifacts in reconstructed SPECT images. [ 12 ] [ 13 ] [ 14 ] [ 15 ] The artifacts are caused by ambiguity about the origin of γ-photons detected in the areas of overlap.
Placing the animal close to the pinhole aperture comes at the cost of reducing the size of the area that can be imaged at a given time (the "field-of-view") compared to imaging at a lower magnification. However, when combined with moving the animal (the so-called "scanning-focus method" [ 16 ] ) a larger area of interest can still be imaged with a good resolution and sensitivity.
The typical detection efficiency of preclinical SPECT scanner lies within a 0.1-0.2% (1000-2000 cps/MBq) range, which is more than tenfold higher than the average sensitivity of clinical scanners. [ 17 ] At the same time, dedicated high-sensitivity collimators can allow >1% detection efficiency and maintain sub-mm image resolution. [ 18 ]
Multiple pinhole SPECT system designs have been proposed, including a rotating gamma camera, a stationary detector with a rotating collimator, or a completely stationary camera [ 19 ] [ 20 ] in which a large number of pinholes surround the animal and simultaneously acquire projections from a sufficient number of angles for tomographic image reconstruction. Stationary systems have several advantages over non-stationary systems: the system geometry remains highly stable and reproducible, due to the fixed position of the detector(s) and the collimator, and fast dynamic imaging becomes possible, because all required angular information is acquired simultaneously by the multiple pinholes.
Modern stationary preclinical SPECT systems can perform dynamic SPECT imaging with time frames as short as 15 s during total-body [ 5 ] and as short as 1 s during "focused" (e.g. focusing on the heart) [ 16 ] image acquisitions.
Medical imaging encompasses many different imaging modalities, which can roughly be divided into anatomical and functional imaging. Anatomical modalities (e.g. CT , MRI ) mainly reveal the structure of the tissues and organs, while the functional modalities (SPECT, PET and optical imaging ) mainly visualize the physiology and function of the tissue. Because no single existing imaging modality can provide information on all aspects of structure and function, an obvious approach is either to adapt one imaging modality to the task (e.g. special imaging sequences in MRI) or to image a subject using multiple imaging modalities. Following the multimodality approach, the combined SPECT/CT system has in recent years become a standard molecular imaging combination in both the preclinical and clinical fields, where the structural information of CT complements the functional information from SPECT. Nevertheless, integration of SPECT with other imaging modalities (e.g. SPECT/MR, SPECT/PET/CT [ 6 ] [ 21 ] ) is not uncommon.
A SPECT measurement consists of 2-dimensional projections of the radioactive source distribution that are obtained with collimator(s) and gamma-detector(s). It is the goal of an image reconstruction algorithm to accurately reconstruct the unknown 3-dimensional distribution of the radioactivity. [ 22 ]
The Maximum Likelihood Expectation Maximization algorithm [ 23 ] [ 24 ] ( MLEM ) is an important "gold standard" in iterative image reconstruction of SPECT images, but it is also a computationally costly method. A popular solution to this problem is the use of so-called block-iterative reconstruction methods. With block-iterative methods, every iteration of the algorithm is subdivided into many subsequent sub-iterations, each using a different subset of the projection data. An example of a widely used block-iterative version of MLEM is the Ordered Subsets Expectation Maximization algorithm [ 25 ] (OSEM). The reconstruction speedup of a full OSEM iteration over a single MLEM iteration is approximately equal to the number of subsets.
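The structure of the MLEM update, and of its OSEM variant, can be shown in a short generic sketch: each iteration multiplies the current image estimate by a back-projected ratio of measured to predicted projections. The toy implementation below uses a random system matrix and synthetic data and is meant only to illustrate the update equations, not to serve as production reconstruction code.

```python
# Toy sketch of the MLEM and OSEM updates. A is the system (projection) matrix,
# y the measured projections, x the image estimate. Data are synthetic and random;
# this illustrates the structure of the updates, not a production reconstruction.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bins = 64, 256
A = rng.random((n_bins, n_pixels))              # toy system matrix
x_true = rng.random(n_pixels)
y = rng.poisson(A @ x_true).astype(float)       # noisy measured projections

def mlem(A, y, n_iter=20):
    x = np.ones(A.shape[1])                     # uniform initial estimate
    sens = A.sum(axis=0)                        # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, 1e-12, None) # measured / predicted projections
        x *= (A.T @ ratio) / sens               # multiplicative MLEM update
    return x

def osem(A, y, n_subsets=8, n_iter=3):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):                     # one full iteration visits every subset once
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.clip(As @ x, 1e-12, None)
            x *= (As.T @ ratio) / As.sum(axis=0)
    return x

print("MLEM error:", np.linalg.norm(mlem(A, y) - x_true))
print("OSEM error:", np.linalg.norm(osem(A, y) - x_true))
```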
Preclinical SPECT is a quantitative imaging modality. The uptake of SPECT tracers in organs (regions) of interest can be calculated from reconstructed images. The small size of laboratory animals diminishes photon attenuation in the body of the animal (compared to that in human-sized objects). Nevertheless, depending on the energy of the γ-photons and the size of the animal used for imaging, correction for photon attenuation and scattering might be required to provide good quantification accuracy. A detailed discussion of the effects affecting quantification of SPECT images can be found in Hwang et al. [ 26 ]
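As a rough illustration of why attenuation matters less in mice than in humans, the exponential attenuation law can be evaluated for a 140 keV photon (the emission energy of 99mTc) in soft tissue; the attenuation coefficient and path lengths below are approximate, assumed values.

```python
# Rough illustration of photon attenuation (Beer-Lambert law): I/I0 = exp(-mu * d).
# The attenuation coefficient and path lengths are approximate, assumed values.
import math

MU_SOFT_TISSUE_140KEV = 0.15     # cm^-1, approximate value for soft tissue at 140 keV

def transmitted_fraction(path_cm: float) -> float:
    return math.exp(-MU_SOFT_TISSUE_140KEV * path_cm)

print(f"mouse  (~1.5 cm path): {transmitted_fraction(1.5):.2f}")   # ~0.80
print(f"human (~15 cm path):   {transmitted_fraction(15.0):.2f}")  # ~0.11
# Far more photons escape a mouse unattenuated, so attenuation correction is less
# critical there, though it still improves quantification accuracy.
```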
SPECT tracers emit single γ-photons with the energy of the emitted photon depending on the isotope that was used for radiolabeling of the tracer. Thus, in cases when different tracers are radiolabeled with isotopes of different energies, SPECT provides the ability to probe several molecular pathways simultaneously (multi-isotope imaging). Two examples of common multi-isotope tracer combinations used for SPECT imaging are 123 I- NaI / 99m Tc- pertechnetate (thyroid function [ 27 ] ) or 99m Tc-MAG3/ 111 In- DTPA (assessment of renal filtration ).
The time a tracer can be followed in vivo strongly depends on the half-life of the isotope used for radiolabeling of the compound. The wide range of relatively long-lived isotopes (compared to the isotopes typically used in PET) that can be used for SPECT imaging provides a unique possibility to image slow kinetic processes (days to weeks).
Another important characteristic of SPECT is the simplicity of the tracer radiolabeling procedure, which can be performed with a wide range of commercially available labelling kits.
Preclinical SPECT and PET are two very similar molecular imaging modalities used for noninvasive visualization of the biodistribution of radiolabeled tracers injected into an animal. The major difference between SPECT and PET lies in the nature of the radioactive decay of their tracers. A SPECT tracer emits single γ-photons whose energy depends on the isotope used for radiolabeling. In PET, the tracer emits positrons that, after annihilation with electrons in the subject, produce a pair of 511 keV annihilation photons emitted in opposite directions. Coincident detection of these annihilation photons is used for image formation in PET. As a result, different detection principles have been developed for SPECT and PET tracers, which has led to separate SPECT and PET scanners.
A comparison of preclinical SPECT and PET is provided in the table below.
Manufacturers of preclinical SPECT systems include MILabs, Siemens , Bruker , Mediso and MOLECUBES. [ 29 ] [ 30 ] Systems are available combining SPECT with multiple other modalities including MR , PET and CT . [ 31 ] [ 32 ] They can achieve up to 0.25 mm spatial resolution (0.015 μL volumetric resolution) and up to 1 second-frame dynamic noninvasive SPECT imaging of rodents. [ 33 ]
SPECT can be used for diagnostic or therapeutic imaging. When a radioactive tracer is labeled with primary gamma-emitting isotopes (e.g. 99m Tc, 123 I, 111 In, 125 I), the acquired images provide functional information about the bio-distribution of the compound that can be used for multiple diagnostic purposes. Examples of diagnostic applications: metabolism and perfusion imaging, cardiology, orthopedics.
When a SPECT tracer is labeled with an isotope that emits gamma rays together with α- or β-particles (e.g. 213 Bi or 131 I), it is possible to combine radioisotope cancer therapy, delivered by the α- or β-particles, with noninvasive SPECT imaging of the response to that therapy. | https://en.wikipedia.org/wiki/Preclinical_SPECT
In drug development , preclinical development (also termed preclinical studies or nonclinical studies ) is a stage of research that begins before clinical trials (testing in humans) and during which important feasibility, iterative testing and drug safety data are collected, typically in laboratory animals.
The main goals of preclinical studies are to determine a starting, safe dose for first-in-human study and assess potential toxicity of the product, which typically include new medical devices , prescription drugs , and diagnostics .
Companies use stylized statistics to illustrate the risks in preclinical research, such as the estimate that, on average, only one in every 5,000 compounds that enter drug discovery and reach the stage of preclinical development becomes an approved drug . [ 1 ] [ 2 ]
Each class of product may undergo different types of preclinical research. For instance, drugs may undergo pharmacodynamics (what the drug does to the body) (PD), pharmacokinetics (what the body does to the drug) (PK), ADME , and toxicology testing . These data allow researchers to allometrically estimate a safe starting dose of the drug for clinical trials in humans. Medical devices that do not have a drug attached will not undergo these additional tests and may go directly to good laboratory practices (GLP) testing for the safety of the device and its components. Some medical devices will also undergo biocompatibility testing, which helps to show whether a component of the device, or all components, can be sustained in a living model. Most preclinical studies must adhere to GLPs in ICH Guidelines to be acceptable for submission to regulatory agencies such as the Food & Drug Administration in the United States.
Typically, both in vitro and in vivo tests will be performed. Studies of drug toxicity include which organs are targeted by that drug, as well as if there are any long-term carcinogenic effects or toxic effects causing illness.
The information collected from these studies is vital so that safe human testing can begin. Typically, in drug development studies animal testing involves two species. The most commonly used models are murine and canine , although primate and porcine are also used.
The choice of species is based on which will give the best correlation to human trials. Differences in the gut , enzyme activity , circulatory system , or other considerations make certain models more appropriate based on the dosage form , site of activity, or noxious metabolites . For example, canines may not be good models for solid oral dosage forms because the characteristic carnivore intestine is underdeveloped compared to the omnivore's, and gastric emptying rates are increased. Also, rodents can not act as models for antibiotic drugs because the resulting alteration to their intestinal flora causes significant adverse effects . Depending on a drug's functional groups, it may be metabolized in similar or different ways between species, which will affect both efficacy and toxicology.
Medical device studies also use this basic premise. Most studies are performed in larger species such as dogs, pigs and sheep which allow for testing in a similar sized model as that of a human. In addition, some species are used for similarity in specific organs or organ system physiology (swine for dermatological and coronary stent studies; goats for mammary implant studies; dogs for gastric and cancer studies; etc.).
Importantly, the regulatory guidelines of FDA , EMA , and other similar international and regional authorities usually require safety testing in at least two mammalian species, including one non-rodent species, prior to human trials authorization. [ 3 ]
Animal testing in the research-based pharmaceutical industry has been reduced in recent years both for ethical and cost reasons. However, most research still involves animal-based testing because of the need for the similarity in anatomy and physiology that is required for diverse product development.
Based on preclinical trials, no-observed-adverse-effect levels (NOAELs) on drugs are established, which are used to determine initial phase 1 clinical trial dosage levels on a mass API per mass patient basis. Generally a 1/100 uncertainty factor or "safety margin" is included to account for interspecies (1/10) and inter-individual (1/10) differences. | https://en.wikipedia.org/wiki/Preclinical_development |
Precocial species in birds and mammals are those in which the young are relatively mature and mobile from the moment of birth or hatching. They are normally nidifugous , meaning that they leave the nest shortly after birth or hatching. Altricial species are those in which the young are underdeveloped at the time of birth, but with the aid of their parents mature after birth. These categories form a continuum, without distinct gaps between them.
In fish , this often refers to the presence or absence of a stomach : precocial larvae have one at the onset of first feeding whereas altricial fish do not. [ 1 ] Depending on the species, the larvae may develop a functional stomach during metamorphosis (gastric) or remain stomachless (agastric).
Precocial young have open eyes, hair or down, large brains, and are immediately mobile and somewhat able to flee from or defend themselves against predators. For example, with ground-nesting birds such as ducks or turkeys , the young are ready to leave the nest in one or two days. Among mammals, most ungulates are precocial, being able to walk almost immediately after birth.
The word "precocial" is derived from the Latin root praecox, the same root as in precocious , meaning early maturity. [ 2 ]
Extremely precocial species are called "superprecocial". Examples are the megapode birds, which have full-flight feathers at hatching and which, in some species, can fly on the same day. [ 3 ] Enantiornithes [ 4 ] and pterosaurs [ citation needed ] were also capable of flight soon after hatching.
Another example is the blue wildebeest , the calves of which can stand within an average of six minutes from birth and walk within thirty minutes; [ 5 ] [ 6 ] they can outrun a hyena within a day. [ 7 ] [ 8 ] [ 9 ] Such behavior gives them an advantage over other herbivore species and they are 100 times more abundant in the Serengeti ecosystem than hartebeests , their closest taxonomic relative. Hartebeest calves are not as precocial as wildebeest calves and take up to thirty minutes or more before they stand, and as long as forty-five minutes before they can follow their mothers for short distances. They are unable to keep up with their mothers until they are more than a week old. [ 9 ]
Black mambas are highly precocial; as hatchlings, they are fully independent, and are capable of hunting prey the size of a small rat . [ 10 ]
Precociality is thought to be ancestral in birds. Thus, altricial birds tend to be found in the most derived groups. There is some evidence for precociality in protobirds [ 11 ] and troodontids . [ 12 ] Enantiornithes at least were superprecocial in a way similar to that of megapodes, being able to fly soon after birth. [ 4 ] It has been speculated that superprecociality prevented enantiornithines from acquiring specialized toe anatomy seen in modern altricial birds. [ 13 ]
In birds and mammals , altricial species are those whose newly hatched or born young are relatively immobile, lack hair or down , are not able to obtain food on their own, and must be cared for by adults; closed eyes are common, though not ubiquitous. Altricial young are born helpless and require care for a length of time. Altricial birds include hawks , herons , woodpeckers , owls , cuckoos and most passerines . Among mammals, marsupials and most rodents are altricial. Domestic cats , dogs , and primates , such as humans , are some of the best-known altricial organisms. [ 14 ] For example, newborn domestic cats cannot see, hear, maintain their own body temperature, or gag , and require external stimulation in order to defecate and urinate. [ 15 ] The giant panda is notably the largest placental mammal to have altricial, hairless young upon birth. The larval stage of insect development is considered by some to be a form of altricial development, but it more accurately depicts, especially amongst eusocial animals, an independent phase of development, as the larvae of bees, ants, and many arachnids are completely physically different from their developed forms, and the pre-pupal stages of insect life might be regarded as equivalent to vertebrate embryonic development.
The word “altriciality” is derived from the Latin root alere , meaning "to nurse, to rear, or to nourish", and indicates the need for young to be fed and taken care of for a long duration. [ 16 ]
The span between precocial and altricial species is particularly broad in the biology of birds . Precocial birds hatch with their eyes open and are covered with downy feathers that are soon replaced by adult-type feathers. [ 17 ] Birds of this kind can also swim and run much sooner after hatching than altricial young, such as songbirds. [ 17 ] Very precocial birds can be ready to leave the nest in a short period of time following hatching (e.g. 24 hours). Many precocial chicks are not independent in thermoregulation (the ability to regulate their body temperatures), and they depend on the attending parent(s) to brood them with body heat for a short time. Precocial birds find their own food, sometimes with help or instruction from their parents. Examples of precocial birds include the domestic chicken , many species of ducks and geese , waders , rails , and the hoatzin .
Precocial birds can provide protein-rich eggs and thus their young hatch in the fledgling stage – able to protect themselves from predators and the females have less post-natal involvement. Altricial birds are less able to contribute nutrients in the pre-natal stage; their eggs are smaller and their young are still in need of much attention and protection from predators. This may be related to r/K selection ; however, this association fails in some cases. [ 18 ]
In birds, altricial young usually grow faster than precocial young. This is hypothesized to occur so that exposure to predators during the nestling stage of development can be minimized. [ 19 ]
In the case of mammals, it has been suggested that large, hearty adult body sizes favor the production of large, precocious young, which develop with a longer gestation period. Large young may be associated with migratory behavior, extended reproductive period, and reduced litter size. It may be that altricial strategies in mammals, in contrast, develop in species with less migratory and more territorial lifestyles, such as Carnivorans , the mothers of which are capable of bearing a fetus in the early stages of development and focusing closely and personally upon its raising, as opposed to precocial animals which provide their youths with a bare minimum of aid and otherwise leave them to instinct. [ 20 ]
Human children, and those of other primates, exemplify a unique combination of altricial and precocial development. Infants are born with minimal eyesight, compact and fleshy bodies, and "fresh" features (thinner skin, small noses and ears, and scarce hair if any). However, this stage is brief amongst primates alone; their offspring soon develop stronger bones, grow in spurts, and quickly mature in features. This unique growth pattern allows for the hasty adaptivity of most simians, as anything learned by children in between their infancy and adolescence is memorized as instinct; this pattern is also in contrast to more prominently altricial mammals, such as many rodents , which remain largely immobile and undeveloped until grown to near the stature of their parents. [ citation needed ]
In birds, the terms Aves altrices and Aves precoces were introduced by Carl Jakob Sundevall (1836), and the terms nidifugous and nidicolous by Lorenz Oken in 1816. The two classifications were considered identical in early times, but the meanings are slightly different, in that "altricial" and "precocial" refer to developmental stages, while "nidifugous" and "nidicolous" refer to leaving or staying at the nest, respectively. [ 18 ] | https://en.wikipedia.org/wiki/Precociality_and_altriciality |
Precordial concordance , also known as QRS concordance, occurs when all precordial leads on an electrocardiogram are either positive (positive concordance) or negative (negative concordance). [ 1 ] Negative concordance almost always represents a life-threatening condition called ventricular tachycardia , because no other condition produces such abnormal conduction from the apex of the heart to the upper parts. In positive concordance, however, other rare conditions, such as left-sided accessory pathways or blocks, are also possible. [ 2 ]
| https://en.wikipedia.org/wiki/Precordial_concordance
In anatomy , the precordium or praecordium is the portion of the body over the heart and lower chest . [ 1 ]
Defined anatomically, it is the area of the anterior chest wall over the heart . It is therefore usually on the left side, except in conditions like dextrocardia , where the individual's heart is on the right side. In such a case, the precordium is on the right side as well.
The precordium is naturally a cardiac area of dullness. During examination of the chest, the percussion note will therefore be dull. In fact, this area only gives a resonant percussion note in hyperinflation , emphysema or tension pneumothorax .
Precordial chest pain can be an indication of a variety of illnesses, including costochondritis and viral pericarditis .
| https://en.wikipedia.org/wiki/Precordium
In chemistry , a precursor is a compound that participates in a chemical reaction that produces another compound.
In biochemistry , the term "precursor" often refers more specifically to a chemical compound preceding another in a metabolic pathway , such as a protein precursor .
In 1988, the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances introduced detailed provisions and requirements relating the control of precursors used to produce drugs of abuse.
In Europe the Regulation (EC) No. 273/2004 of the European Parliament and of the Council on drug precursors was adopted on 11 February 2004. ( European law on drug precursors )
On January 15, 2013, the Regulation (EU) No. 98/2013 of the European Parliament and of the Council on the marketing and use of explosives precursors was adopted.
The Regulation harmonises rules across Europe on the making available, introduction, possession and use, of certain substances or mixtures that could be misused for the illicit manufacture of explosives. [ 1 ]
A portable, advanced sensor based on infrared spectroscopy in a hollow fiber matched to a silicon-micromachined fast gas chromatography column can analyze illegal stimulants and precursors with nanogram-level sensitivity. [ 2 ]
Raman spectroscopy has been successfully tested to detect explosives and their precursors. [ 3 ]
Technologies able to detect precursors in the environment could contribute to an early location of sites where illegal substances (both explosives and drugs of abuse) are produced. [ 4 ] [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Precursor_(chemistry) |
In cell biology , precursor cells, also called blast cells , are partially differentiated, or intermediate, and are sometimes referred to as progenitor cells . A precursor cell is a stem cell with the capacity to differentiate into only one cell type, meaning it is a unipotent stem cell . In embryology , precursor cells are a group of cells that later differentiate into one organ. Progenitor cells, however, are considered multipotent . [ 1 ]
Due to their contribution to the development of various organs and cancers, precursor and progenitor cells have many potential uses in medicine. There is ongoing research on using these cells to build heart valves, blood vessels, and other tissues by using blood and muscle precursor cells. [ 2 ]
The prospect of regenerative medicine has become increasingly popular in recent years. Stem cell research has been gaining traction as a possible method of treatment for various human diseases.
One large subcategory of progenitor cells are neural precursor cells ( NPCs ), which consist of oligodendrocyte, astrocyte, and neuronal precursor cells. Once differentiation into these precursor cells occurs, fate restriction happens and the cells are unlikely to become another type. Some current research is exploring the ability to reverse fate restriction—allowing for precursor cells to become other types of precursor cells. [ 3 ] NPCs have a variety of applications in medicine, with research focusing on all subsets. Glial precursor cells, namely oligodendrocyte precursor cells , are being explored for application in treating leukodystrophies—including lysosomal storage disorders and hypomyelination disorders. [ 4 ]
Another group of precursor cells called endothelial precursor cells ( EPCs ), or angioblasts in embryos, are involved in vascular development. There are two developmental methods of the vascular system— vasculogenesis and angiogenesis . Vasculogenesis involves the differentiation of endothelial precursor cells into endothelial cells, which is mostly seen in embryonic development. Originally thought to play no role in adult vascular development, EPCs have demonstrated involvement in pathological neovascularization such as cancer, wound healing, and ischemia . [ 5 ]
Although relatively new, neutrophil precursor cells ( NePs ) have been studied to determine the role of neutrophil progenitor cells in cancer. Neutrophil precursor and progenitor cells are present in bone marrow . According to one study, they are also present in the blood of those diagnosed with melanoma—suggesting the release of NePs into the bloodstream from bone marrow in response to cancer. Additionally, they exhibited tumor-promoting behavior in both mice and humans. [ 6 ]
Another category of precursor cells are retinal progenitor cells . Retinal degeneration (RD) is one of the most common causes of blindness in humans—with a variety of diseases falling under the broad category. Some research is looking into the efficacy of using retinal precursor cells as a regenerative treatment for RD. [ 7 ] A variety of trials have already been conducted, most demonstrating no rejection of the transplant. [ 8 ] | https://en.wikipedia.org/wiki/Precursor_cell |
A primary transcript is the single-stranded ribonucleic acid ( RNA ) product synthesized by transcription of DNA , and processed to yield various mature RNA products such as mRNAs , tRNAs , and rRNAs . The primary transcripts designated to be mRNAs are modified in preparation for translation . For example, a precursor mRNA (pre-mRNA) is a type of primary transcript that becomes a messenger RNA (mRNA) after processing .
Pre-mRNA is synthesized from a DNA template in the cell nucleus by transcription . Pre-mRNA comprises the bulk of heterogeneous nuclear RNA (hnRNA). Once pre-mRNA has been completely processed , it is termed " mature messenger RNA ", or simply " messenger RNA ". The term hnRNA is often used as a synonym for pre-mRNA, although, in the strict sense, hnRNA may include nuclear RNA transcripts that do not end up as cytoplasmic mRNA.
There are several steps contributing to the production of primary transcripts. All these steps involve a series of interactions to initiate and complete the transcription of DNA in the nucleus of eukaryotes . Certain factors play key roles in the activation and inhibition of transcription, where they regulate primary transcript production. Transcription produces primary transcripts that are further modified by several processes. These processes include the 5' cap , 3'-polyadenylation , and alternative splicing . In particular, alternative splicing directly contributes to the diversity of mRNA found in cells. The modifications of primary transcripts have been further studied in research seeking greater knowledge of the role and significance of these transcripts. Experimental studies based on molecular changes to primary transcripts and the processes before and after transcription have led to greater understanding of diseases involving primary transcripts.
The steps contributing to the production of primary transcripts involve a series of molecular interactions that initiate transcription of DNA within a cell's nucleus. Based on the needs of a given cell, certain DNA sequences are transcribed to produce a variety of RNA products to be translated into functional proteins for cellular use. To initiate the transcription process in a cell's nucleus, DNA double helices are unwound and hydrogen bonds connecting compatible nucleic acids of DNA are broken to produce two unconnected single DNA strands. [ 1 ] One strand of the DNA template is used for transcription of the single-stranded primary transcript mRNA. This DNA strand is bound by an RNA polymerase at the promoter region of the DNA. [ 2 ]
In eukaryotes, three kinds of RNA— rRNA , tRNA , and mRNA—are produced based on the activity of three distinct RNA polymerases, whereas, in prokaryotes , only one RNA polymerase exists to create all kinds of RNA molecules. [ 3 ] RNA polymerase II of eukaryotes transcribes the primary transcript, a transcript destined to be processed into mRNA, from the antisense DNA template in the 5' to 3' direction, and this newly synthesized primary transcript is complementary to the antisense strand of DNA. [ 1 ] RNA polymerase II constructs the primary transcript using a set of four specific ribonucleoside monophosphate residues ( adenosine monophosphate (AMP), cytidine monophosphate (CMP), guanosine monophosphate (GMP), and uridine monophosphate (UMP)) that are added continuously to the 3' hydroxyl group on the 3' end of the growing mRNA. [ 1 ]
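As a toy illustration of the complementarity just described, the short Python sketch below pairs each base of a template strand, read 3' to 5', with its RNA counterpart (A–U, T–A, G–C, C–G); the sequence and function name are invented for the example and are not drawn from the article.

```python
# Toy illustration of template-directed RNA synthesis: each DNA base on the
# template strand pairs with its complementary ribonucleotide (A-U, T-A, G-C, C-G).
# The example sequence is invented.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Return the RNA transcript (written 5'->3') for a template strand read 3'->5'."""
    return "".join(PAIRING[base] for base in template_3_to_5)

print(transcribe("TACGGAT"))  # 'AUGCCUA'
```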
Studies of primary transcripts produced by RNA polymerase II reveal that an average primary transcript is 7,000 nucleotides in length, with some growing as long as 20,000 nucleotides in length. [ 2 ] The inclusion of both exon and intron sequences within primary transcripts explains the size difference between larger primary transcripts and smaller, mature mRNA ready for translation into protein. [ citation needed ]
A number of factors contribute to the activation and inhibition of transcription and therefore regulate the production of primary transcripts from a given DNA template. [ citation needed ]
Activation of RNA polymerase activity to produce primary transcripts is often controlled by sequences of DNA called enhancers . Transcription factors , proteins that bind to DNA elements to either activate or repress transcription, bind to enhancers and recruit enzymes that alter nucleosome components, causing DNA to be either more or less accessible to RNA polymerase. The unique combinations of either activating or inhibiting transcription factors that bind to enhancer DNA regions determine whether or not the gene that enhancer interacts with is activated for transcription or not. [ 4 ] Activation of transcription depends on whether or not the transcription elongation complex, itself consisting of a variety of transcription factors, can induce RNA polymerase to dissociate from the Mediator complex that connects an enhancer region to the promoter. [ 4 ]
Inhibition of RNA polymerase activity can also be regulated by DNA sequences called silencers . Like enhancers, silencers may be located at locations farther up or downstream from the genes they regulate. These DNA sequences bind to factors that contribute to the destabilization of the initiation complex required to activate RNA polymerase, and therefore inhibit transcription. [ 5 ]
Histone modification by transcription factors is another key regulatory factor for transcription by RNA polymerase. In general, factors that lead to histone acetylation activate transcription while factors that lead to histone deacetylation inhibit transcription. [ 6 ] Acetylation of histones induces repulsion between negative components within nucleosomes, allowing for RNA polymerase access. Deacetylation of histones stabilizes tightly coiled nucleosomes, inhibiting RNA polymerase access. In addition to acetylation patterns of histones, methylation patterns at promoter regions of DNA can regulate RNA polymerase access to a given template. RNA polymerase is often incapable of synthesizing a primary transcript if the targeted gene's promoter region contains specific methylated cytosines— residues that hinder binding of transcription-activating factors and recruit other enzymes to stabilize a tightly bound nucleosome structure, excluding access to RNA polymerase and preventing the production of primary transcripts. [ 4 ]
R-loops are formed during transcription. An R-loop is a three-stranded nucleic acid structure containing a DNA-RNA hybrid region and an associated non-template single-stranded DNA. Actively transcribed regions of DNA often form R-loops that are vulnerable to DNA damage . Introns reduce R-loop formation and DNA damage in highly expressed yeast genes. [ 7 ]
DNA damages arise in each cell, every day, with the number of damages in each cell reaching tens to hundreds of thousands, and such DNA damages can impede primary transcription. [ 8 ] The process of gene expression itself is a source of endogenous DNA damages resulting from the susceptibility of single-stranded DNA to damage. [ 8 ] Other sources of DNA damage are conflicts of the primary transcription machinery with the DNA replication machinery, and the activity of certain enzymes such as topoisomerases and base excision repair enzymes. Even though these processes are tightly regulated and are usually accurate, occasionally they can make mistakes and leave behind DNA breaks that drive chromosomal rearrangements or cell death . [ 8 ]
Transcription, a highly regulated phase in gene expression, produces primary transcripts. However, transcription is only the first step which should be followed by many modifications that yield functional forms of RNAs. [ 9 ] Otherwise stated, the newly synthesized primary transcripts are modified in several ways to be converted to their mature, functional forms to produce different proteins and RNAs such as mRNA, tRNA, and rRNA. [ citation needed ]
The basic primary transcript modification process is similar for tRNA and rRNA in both eukaryotic and prokaryotic cells. On the other hand, primary transcript processing varies in mRNAs of prokaryotic and eukaryotic cells. [ 9 ] For example, some prokaryotic bacterial mRNAs serve as templates for synthesis of proteins at the same time they are being produced via transcription. Alternatively, pre-mRNA of eukaryotic cells undergo a wide range of modifications prior to their transport from the nucleus to cytoplasm where their mature forms are translated. [ 9 ] These modifications are responsible for the different types of encoded messages that lead to translation of various types of products. Furthermore, primary transcript processing provides a control for gene expression as well as a regulatory mechanism for the degradation rates of mRNAs. The processing of pre-mRNA in eukaryotic cells includes 5' capping , 3' polyadenylation , and alternative splicing . [ citation needed ]
Shortly after transcription is initiated in eukaryotes, a pre-mRNA's 5' end is modified by the addition of a 7-methylguanosine cap , also known as a 5' cap. [ 9 ] The 5' capping modification is initiated by the addition of a GTP to the 5' terminal nucleotide of the pre-mRNA in reverse orientation followed by the addition of methyl groups to the G residue. [ 9 ] 5' capping is essential for the production of functional mRNAs since the 5' cap is responsible for aligning the mRNA with the ribosome during translation. [ 9 ]
In eukaryotes, polyadenylation further modifies pre-mRNAs during which a structure called the poly-A tail is added. [ 9 ] Signals for polyadenylation, which include several RNA sequence elements, are detected by a group of proteins which signal the addition of the poly-A tail (approximately 200 nucleotides in length). The polyadenylation reaction provides a signal for the end of transcription and this reaction ends approximately a few hundred nucleotides downstream from the poly-A tail location. [ 9 ]
Eukaryotic pre-mRNAs have their introns spliced out by spliceosomes made up of small nuclear ribonucleoproteins . [ 10 ] [ 11 ]
In complex eukaryotic cells, a single primary transcript can give rise to many different mature mRNAs through alternative splicing. Because alternative splicing is regulated, one gene may thereby encode a multiplicity of proteins.
The effect of alternative splicing in gene expression can be seen in complex eukaryotes which have a fixed number of genes in their genome yet produce much larger numbers of different gene products. [ 9 ] Most eukaryotic pre-mRNA transcripts contain multiple introns and exons. The various possible combinations of 5' and 3' splice sites in a pre-mRNA can lead to different excision and combination of exons while the introns are eliminated from the mature mRNA. Thus, various kinds of mature mRNAs are generated. [ 9 ] Alternative splicing takes place in a large protein complex called the spliceosome . Alternative splicing is crucial for tissue-specific and developmental regulation in gene expression. [ 9 ] Alternative splicing can be affected by various factors, including mutations such as chromosomal translocation .
In prokaryotes, splicing is done by autocatalytic cleavage or by endolytic cleavage. Autocatalytic cleavages, in which no proteins are involved, are usually reserved for sections that code for rRNA, whereas endolytic cleavage corresponds to tRNA precursors.
5- Fluorouracil (FUra) exposure in methotrexate -resistant KB cells led to a two-fold reduction in total dihydrofolate reductase (DHFR) mRNA levels, while the level of DHFR pre-mRNA with certain introns remained unaffected. The half-life of DHFR mRNA or pre-mRNA did not change significantly, but the transition rate of DHFR RNA from the nucleus to the cytoplasm decreased, suggesting that FUra may influence mRNA processing and/or nuclear DHFR mRNA stability. [ 12 ]
In Drosophila and Aedes , hnRNA (pre-mRNA) size was larger in Aedes due to its larger genome, despite both species producing mature mRNA of similar size and sequence complexity. This indicates that hnRNA size increases with genome size. [ 13 ]
In HeLa cells , spliceosome groups on pre-mRNA were found to form within nuclear speckles , with this formation being temperature-dependent and influenced by specific RNA sequences. Pre-mRNA targeting and splicing factor loading in speckles were critical for spliceosome group formation, resulting in a speckled pattern. [ 14 ]
Recruiting pre-mRNA to nuclear speckles significantly increased splicing efficiency and protein levels, indicating that proximity to speckles enhances splicing efficiency. [ 15 ]
Research has also led to greater knowledge about certain diseases related to changes within primary transcripts. One study involved estrogen receptors and differential splicing. The article entitled, "Alternative splicing of the human estrogen receptor alpha primary transcript: mechanisms of exon skipping" by Paola Ferro, Alessandra Forlani, Marco Muselli and Ulrich Pfeffer from the laboratory of Molecular Oncology at National Cancer Research Institute in Genoa, Italy, explains that 1785 nucleotides of the region in the DNA that codes for the estrogen receptor alpha (ER-alpha) are spread over a region that holds more than 300,000 nucleotides in the primary transcript. Splicing of this pre-mRNA frequently leads to variants or different kinds of the mRNA lacking one or more exons or regions necessary for coding proteins. These variants have been associated with breast cancer progression. [ 16 ] In the life cycle of retroviruses , proviral DNA is incorporated in transcription of the DNA of the cell being infected. Since retroviruses need to change their pre-mRNA into DNA so that this DNA can be integrated within the DNA of the host it is affecting, the formation of that DNA template is a vital step for retrovirus replication. Cell type, the differentiation or changed state of the cell, and the physiological state of the cell, result in a significant change in the availability and activity of certain factors necessary for transcription. These variables create a wide range of viral gene expression. For example, tissue culture cells actively producing infectious virions of avian or murine leukemia viruses (ASLV or MLV) contain such high levels of viral RNA that 5–10% of the mRNA in a cell can be of viral origin. This shows that the primary transcripts produced by these retroviruses do not always follow the normal path to protein production and convert back into DNA in order to multiply and expand. [ 17 ] | https://en.wikipedia.org/wiki/Precursor_mRNA |
The term predation rate refers to the frequency with which an organism captures and consumes its prey in an ecosystem. Coupled with the kill rate , the predation rate drives the population dynamics of predation. [ 1 ] This statistic is related to Predator–prey dynamics and may be influenced by several factors.
In order for predation to occur, a predator and its prey must encounter one another. A low concentration of prey decreases the likelihood of such encounters. The prey encounter rate is determined by the abundance of organisms and a predator’s ability to locate its prey. [ 2 ] Covering more territory increases the likelihood that a predator will meet its prey. In areas of low prey density, predators are adapted to be more motile, engage in filter feeding , or use attractants such as chemical lures. [ 3 ]
If predation increased simply with prey concentration, the relationship would be linear until a limit is reached. This scenario is represented by Holling's type I functional response , which is rarely observed in nature. [ 4 ] Several factors affect this relationship, including handling time (the time required for a predator to consume its prey), selective feeding behaviors, and learning. [ 5 ] In contrast, Holling's type II and type III functional responses account for the time predators spend handling prey and the reduced efficiency in locating prey at low densities. [ 6 ]
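To make the contrast between these response types concrete, the sketch below implements the standard Holling type I, II and III expressions in Python; the parameter values are arbitrary illustrations, not data from any study.

```python
# Illustrative sketch of Holling's type I, II and III functional responses.
# Parameter values are arbitrary examples, not measurements from any study.

def holling_type_i(prey_density, attack_rate, max_rate):
    """Capture rate rises linearly with prey density until a ceiling is reached."""
    return min(attack_rate * prey_density, max_rate)

def holling_type_ii(prey_density, attack_rate, handling_time):
    """Handling time limits intake at high prey density (decelerating curve)."""
    return (attack_rate * prey_density) / (1 + attack_rate * handling_time * prey_density)

def holling_type_iii(prey_density, attack_rate, handling_time):
    """Sigmoid response: capture is also inefficient at low prey density."""
    n_squared = prey_density ** 2
    return (attack_rate * n_squared) / (1 + attack_rate * handling_time * n_squared)

for n in (1, 10, 100, 1000):
    print(n,
          round(holling_type_i(n, 0.1, 20.0), 2),
          round(holling_type_ii(n, 0.1, 0.05), 2),
          round(holling_type_iii(n, 0.1, 0.05), 2))
```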
Predation rate is also influenced by spatial and temporal mismatch. An extreme example occurred in the Arctic in May of 2021 and 2022, when large blooms of phytoplankton were observed alongside low concentrations of grazers. [ 7 ] As the phytoplankton bloomed and died, the energy was not transferred into the food web . Although primary production was high, the food web experienced an energy deficit. Spatial mismatch is particularly concerning under climate change , as changing environmental parameters, such as rising sea surface temperature and alterations in terrestrial habitats (e.g., loss of tundra and melting sea ice ), can create conditions that are no longer conducive to the populations they once supported. [ 8 ] | https://en.wikipedia.org/wiki/Predation_rates |
Predatory dinoflagellates are predatory heterotrophic or mixotrophic alveolates that derive some or most of their nutrients from digesting other organisms. About one half of dinoflagellates lack photosynthetic pigments and specialize in consuming other eukaryotic cells, and even photosynthetic forms are often predatory. [ 1 ] [ 2 ]
Organisms that derive their nutrition in this manner include Oxyrrhis marina , which feeds phagocytically on phytoplankton , [ 3 ] Polykrikos kofoidii , which feeds on several species of red-tide and/or toxic dinoflagellates, [ 4 ] Ceratium furca , which is primarily photosynthetic but also capable of ingesting other protists such as ciliates, [ 5 ] Cochlodinium polykrikoides , which feeds on phytoplankton, [ 6 ] Gambierdiscus toxicus , which feeds on algae and produces a toxin that causes ciguatera fish poisoning when ingested, [ 7 ] and Pfiesteria and related species such as Luciella masanensis , which feed on diverse prey including fish skin and human blood cells. [ 8 ] [ 9 ] Predatory dinoflagellates can kill their prey by releasing toxins or phagocytize small prey directly. [ 10 ]
Some predatory algae have evolved extreme survival strategies. For example, Oxyrrhis marina can turn cannibalistic, consuming members of its own species when no suitable non-self prey is available, [ 11 ] and Pfiesteria and related species have been discovered to kill and feed on fish, and have since been (mistakenly) referred to as carnivorous "algae" by the media.
The media has applied the term carnivorous or predatory algae mainly to Pfiesteria piscicida , Pfiesteria shumwayae and other Pfiesteria -like dinoflagellates implicated in harmful algal blooms and fish kills . [ 12 ] [ 13 ] Pfiesteria is named after the American protistologist Lois Ann Pfiester . It is an ambush predator that utilizes a hit and run feeding strategy by releasing a toxin that paralyzes the respiratory systems of susceptible fish, such as menhaden , thus causing death by suffocation . It then consumes the tissue sloughed off its dead prey. [ 14 ] Pfiesteria piscicida ( Latin : fish killer ) has been blamed for killing more than one billion fish in the Neuse and Pamlico river estuaries in North Carolina and causing skin lesions in humans in the 1990s. [ 13 ] It has been described as "skinning fish alive to feed on their flesh" [ 13 ] or chemically sensing fish and producing lethal toxins to kill their prey and feed off the decaying remains. [ 12 ] Its deadly nature has led to Pfiesteria being referred to as "killer algae" [ 15 ] [ 16 ] and has earned the organism the reputation as the " T. rex of the dinoflagellate world" [ 17 ] or "the Cell from Hell." [ 18 ]
The prominent and exaggerated media coverage of Pfiesteria as carnivorous algae attacking fish and humans has been implicated in causing " Pfiesteria hysteria" in the Chesapeake Bay in 1997, resulting in an apparent outbreak of human illness in the Pocomoke region in Maryland . [ 19 ] However, a study published the following year concluded the symptoms were unlikely to be caused by mass hysteria . [ 20 ]
During the media coverage of the 1990s, Pfiesteria was referred to as a "super villain" [ 16 ] and has subsequently been used as such in several fictional works. A Pfiesteria subspecies that kills humans features in James Powlik 's 1999 environmental thriller Sea Change . In Frank Schätzing 's 2004 science fiction novel The Swarm , lobsters and crabs spread the killer alga Pfiesteria homicida to humans.
In Yann Martel 's 2001 novel Life of Pi , the protagonist encounters a floating island of carnivorous algae inhabited by meerkats while shipwrecked in the Pacific Ocean . At a book reading in Calgary, Alberta , Canada , Martel explained that the carnivorous algae island had the purpose of representing the more fantastical of two competing stories in his novel and challenge the reader to a "leap of faith." [ 21 ]
In the 2005 National Geographic TV show Extraterrestrial , the alien organism termed Hysteria combines characteristics of Pfiesteria with those of cellular slime molds . Like Pfiesteria , Hysteria is a unicellular, microscopic predator capable of producing a paralytic toxin. Like cellular slime molds, it can release chemical stress signals that cause the cells to aggregate into a swarm which allows the newly formed superorganism to feed on much larger animals and produce a fruiting body that releases spores for reproduction. [ 22 ] | https://en.wikipedia.org/wiki/Predatory_dinoflagellate |
Predicable (Lat. praedicabilis, that which may be stated or affirmed, sometimes called quinque voces or five words ) is, in scholastic logic , a term applied to a classification of the possible relations in which a predicate may stand to its subject . It is not to be confused with ' praedicamenta ', the scholastics' term for Aristotle 's ten Categories .
The list given by the scholastics and generally adopted by modern logicians is based on development of the original fourfold classification given by Aristotle ( Topics , a iv. 101 b 17-25): definition ( horos ), genus ( genos ), property ( idioma ), and accident ( symbebekos ). The scholastic classification, obtained from Boethius 's Latin version of Porphyry 's Isagoge , modified Aristotle's by substituting species ( eidos ) and difference ( diaphora ) for definition. Both classifications are of universals , concepts or general terms , proper names of course being excluded. There is, however, a radical difference between the two systems. The standpoint of the Aristotelian classification is the predication of one universal concerning another. The Porphyrian, by introducing species, deals with the predication of universals concerning individuals (for species is necessarily predicated of the individual), and thus created difficulties from which the Aristotelian is free (see below).
The Aristotelian treatment considered:
This classification, though it is of high value in the clearing up of our conceptions of the essential contrasted with the accidental, the relation of the genus, differentia and definition and so forth, is of more significance in connection with abstract sciences, especially mathematics, than for the physical sciences. It is superior on the whole to the Porphyrian scheme, which has grave defects. As has been said, the Porphyrian scheme classifies universals as predicates of individuals and thus involves the difficulties which gave rise to the controversy between realism and nominalism . How are we to distinguish species from the genus? Napoleon was a Frenchman, a man, an animal. In the second place, how do we distinguish property and accident? Many so-called accidents are predicable necessarily of any particular persons. This difficulty gave rise to the distinction of separable and inseparable accidents, which is one of considerable difficulty.
Some Aristotelian examples may be briefly mentioned. In the true statement “Man is a rational animal,” the predicate is convertible with the subject and states its essence; therefore, “rational animal” is the definition of a man. The statements “Man is an animal” and “Man is rational,” while true, are not convertible; their predicate terms, however, are parts of the definition and hence are the genus and differentia of man. On the other hand, the statement “Man is capable of learning grammar” is true and convertible; but “capable of learning grammar” does not state the essence of man and is, therefore, a property of man. The true statement “Man is featherless” offers an example of an accident. Its predicate is not convertible with its subject, nor is it part of the definition; accordingly, it expresses only an accidental characteristic of man. [ 1 ] | https://en.wikipedia.org/wiki/Predicable |
In logic , a predicate is a symbol that represents a property or a relation. For instance, in the first-order formula P ( a ), the symbol P is a predicate that applies to the individual constant a . Similarly, in the formula R ( a , b ), the symbol R is a predicate that applies to the individual constants a and b .
According to Gottlob Frege , the meaning of a predicate is exactly a function from the domain of objects to the truth values "true" and "false".
In the semantics of logic , predicates are interpreted as relations . For instance, in a standard semantics for first-order logic, the formula R ( a , b ) {\displaystyle R(a,b)} would be true on an interpretation if the entities denoted by a {\displaystyle a} and b {\displaystyle b} stand in the relation denoted by R {\displaystyle R} . Since predicates are non-logical symbols , they can denote different relations depending on the interpretation given to them. While first-order logic only includes predicates that apply to individual objects, other logics may allow predicates that apply to collections of objects defined by other predicates.
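Frege's reading of a predicate as a function from objects to truth values can be mirrored directly in code. The following Python sketch is purely illustrative; the domain and the relations are invented for the example.

```python
# A predicate, in Frege's sense, as a function from a domain of objects to truth
# values. The domain and relations are invented for illustration.
domain = {"Socrates", "Plato", "Aristotle", "Bucephalus"}

def is_philosopher(x):      # unary predicate P(x)
    return x in {"Socrates", "Plato", "Aristotle"}

def taught(x, y):           # binary predicate R(x, y): "x taught y"
    return (x, y) in {("Socrates", "Plato"), ("Plato", "Aristotle")}

print(is_philosopher("Bucephalus"))  # False
print(taught("Socrates", "Plato"))   # True
```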
A predicate is a statement or mathematical assertion that contains variables, sometimes referred to as predicate variables, and may be true or false depending on those variables’ value or values. | https://en.wikipedia.org/wiki/Predicate_(logic) |
In logic , predicate abstraction is the result of creating a predicate from a formula . If Q is any formula, then the predicate abstract formed from that formula is (λx.Q), where λ is an abstraction operator and in which every occurrence of x that is free in Q is bound by λ in (λx.Q). The resultant predicate (λx.Q(x)) is a monadic predicate capable of taking a term t as argument, as in (λx.Q(x))(t), which says that the object denoted by 't' has the property of being such that Q.
The law of abstraction states ( λx.Q(x) )(t) ≡ Q(t/x) where Q(t/x) is the result of replacing all free occurrences of x in Q by t. This law is shown to fail in general in at least two cases: (i) when t is irreferential and (ii) when Q contains modal operators .
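For the ordinary case, where t is referential and no modal operators are involved, the law of abstraction behaves like function application in the λ-calculus. The Python analogy below is only an illustration of that ordinary case; the formula Q is stood in for by an arbitrary boolean-valued function.

```python
# Analogy only: in the ordinary case, (λx.Q(x))(t) reduces to Q(t/x), mirrored
# here by applying an anonymous function. Q is a stand-in boolean-valued formula.
def Q(x):
    return x > 0   # an arbitrary condition playing the role of the formula Q

predicate_abstract = lambda x: Q(x)   # corresponds to (λx.Q(x))

t = 5
assert predicate_abstract(t) == Q(t)  # the law of abstraction holds in this case
```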
In modal logic the " de re / de dicto distinction" is stated as
1. (DE DICTO): □ A ( t )
2. (DE RE): (λ x .□ A ( x ))( t ).
In (1) the modal operator applies to the formula A(t) and the term t is within the scope of the modal operator. In (2) t is not within the scope of the modal operator.
For the semantics and further philosophical developments of predicate abstraction see Fitting and Mendelsohn, First-order Modal Logic , Springer , 1999.
| https://en.wikipedia.org/wiki/Predicate_abstraction |
First-order logic , also called predicate logic , predicate calculus , or quantificational logic , is a collection of formal systems used in mathematics , philosophy , linguistics , and computer science . First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for all x , if x is a man, then x is mortal"; where "for all x" is a quantifier, x is a variable, and "... is a man " and "... is mortal " are predicates. [ 1 ] This distinguishes it from propositional logic , which does not use quantifiers or relations ; [ 2 ] : 161 in this sense, propositional logic is the foundation of first-order logic.
A theory about a topic, such as set theory , a theory for groups, [ 3 ] or a formal theory of arithmetic , is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic.
The term "first-order" distinguishes first-order logic from higher-order logic , in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, are permitted. [ 4 ] : 56 In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound , i.e. all provable statements are true in all models; and complete , i.e. all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable , much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory , such as the Löwenheim–Skolem theorem and the compactness theorem .
First-order logic is the standard for the formalization of mathematics into axioms , and is studied in the foundations of mathematics . Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line . Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic .
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce . [ 5 ] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification . A predicate evaluates to true or false for an entity or entities in the domain of discourse .
Consider the two sentences " Socrates is a philosopher" and " Plato is a philosopher". In propositional logic , these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q . They are not viewed as an application of a predicate, such as isPhil {\displaystyle {\text{isPhil}}} , to any particular objects in the domain of discourse, instead viewing them as purely an utterance which is either true or false. [ 6 ] However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form isPhil ( x ) {\displaystyle {\text{isPhil}}(x)} for some individual x {\displaystyle x} , in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic. [ 7 ] : 29–30
The truth of a formula such as " x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, " x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. [ 8 ] Relationships between predicates can be stated using logical connectives . For example, the first-order formula "if x is a philosopher, then x is a scholar", is a conditional statement with " x is a philosopher" as its hypothesis, and " x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value.
Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x , if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x .
The negation of the sentence "For every x , if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim " x is a philosopher and x is not a scholar" holds for some choice of x .
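Over a finite domain, the universal and existential quantifiers amount to an iterated conjunction and disjunction, which Python's all() and any() make explicit; the domain and predicates below are invented for illustration.

```python
# Quantifiers read over a small, invented finite domain.
domain = {"Socrates", "Plato", "Aristotle", "Alexander"}
philosophers = {"Socrates", "Plato", "Aristotle"}
scholars = {"Socrates", "Plato", "Aristotle"}

def is_philosopher(x): return x in philosophers
def is_scholar(x):     return x in scholars

# "For every x, if x is a philosopher, then x is a scholar."
universal = all((not is_philosopher(x)) or is_scholar(x) for x in domain)

# Its negation: "There exists x such that x is a philosopher and x is not a scholar."
existential = any(is_philosopher(x) and not is_scholar(x) for x in domain)

print(universal, existential)  # True False -- each is the negation of the other
```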
The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables.
An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of the Republic ." Under that interpretation the sentence is true, with Plato, the author of the Republic , serving as the witness.
There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions.
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed . There are two key types of well-formed expressions: terms , which intuitively represent objects, and formulas , which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols , where all the symbols together form the alphabet of the language.
As with all formal languages , the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols , which always have the same meaning, and non-logical symbols , whose meaning varies by interpretation. [ 9 ] For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil( x ) could be interpreted to mean " x is a philosopher", " x is a man named Philip", or any other unary predicate depending on the interpretation at hand.
Logical symbols are a set of characters that vary by author, but usually include the following: [ 10 ]
Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices.
Other logical symbols include the following:
Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: for every integer n ≥ 0, a supply of n -ary predicate symbols P^n_0, P^n_1, P^n_2, ... and n -ary function symbols f^n_0, f^n_1, f^n_2, ..., together with an infinite supply of constant symbols c_0, c_1, c_2, ....
When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted.
In this traditional approach, there is only one language of first-order logic. [ 13 ] This approach is still common, especially in philosophically oriented books.
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature . [ 14 ]
Typical signatures in mathematics are {1, ×} or just {×} for groups , [ 3 ] or {0, 1, +, ×, <} for ordered fields . There are no restrictions on the number of non-logical symbols. The signature can be empty , finite, or infinite, even uncountable . Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem .
Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics.
In this approach, every non-logical symbol is of one of the following types: a predicate symbol (or relation symbol ) of some fixed arity (valence), a function symbol of some fixed arity, or a constant symbol (which may be regarded as a function symbol of arity 0).
The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.
The formation rules define the terms and formulas of first-order logic. [ 16 ] When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms .
The set of terms is inductively defined by the following rules: [ 17 ] (1) Variables: any variable symbol is a term. (2) Functions: if f is a function symbol of arity n and t 1 , ..., t n are terms, then f ( t 1 , ..., t n ) is a term; in particular, constant symbols, being function symbols of arity 0, are terms.
Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.
The set of formulas (also called well-formed formulas [ 18 ] or WFFs ) is inductively defined by the following rules: (1) Predicate symbols: if P is a predicate symbol of arity n and t 1 , ..., t n are terms, then P ( t 1 , ..., t n ) is a formula. (2) Equality: if t 1 and t 2 are terms, then t 1 = t 2 is a formula. (3) Negation: if φ is a formula, then ¬φ is a formula. (4) Binary connectives: if φ and ψ are formulas, then (φ → ψ) is a formula, and similarly for the other binary connectives (φ ∧ ψ), (φ ∨ ψ), and (φ ↔ ψ). (5) Quantifiers: if φ is a formula and x is a variable, then ∀ x φ and ∃ x φ are formulas.
Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas .
For example, ∀ x ∀ y ( P ( f ( x )) → ¬( P ( x ) → Q ( f ( y ), x , z ))) is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀ x x → is not a formula, although it is a string of symbols from the alphabet.
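One way to make the inductive definitions of terms and formulas concrete is as a small abstract syntax tree. The Python sketch below assumes a minimal connective set (¬, →, ∀) and invented class names; it is an illustration, not a standard library.

```python
# Minimal, illustrative AST for first-order terms and formulas
# (connectives restricted to ¬, →, ∀ to keep the sketch short).
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:                      # constant symbols are 0-ary function symbols
    name: str
    args: Tuple["Term", ...] = ()

Term = Union[Var, Func]

@dataclass(frozen=True)
class Pred:                      # atomic formula P(t1, ..., tn)
    name: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class Implies:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class ForAll:
    var: str
    body: "Formula"

Formula = Union[Pred, Not, Implies, ForAll]

# Example: ∀x (P(f(x)) → P(x))
example = ForAll("x", Implies(Pred("P", (Func("f", (Var("x"),)),)),
                              Pred("P", (Var("x"),))))
```

The classes correspond one-to-one with formation rules of this kind; equality, the remaining binary connectives, and ∃ could be added in the same style.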
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: ¬ is evaluated first; ∧ and ∨ are evaluated next; quantifiers are evaluated next; and → is evaluated last.
Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula ¬∀ x P ( x ) → ∃ x ¬ P ( x ) might be written as (¬[∀ x P ( x )]) → ∃ x [¬ P ( x )].
In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine, first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x , each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears. [ 8 ] p.297 Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃ x {\displaystyle \exists x} or ∀ x {\displaystyle \forall x} . Finally, x is bound in φ if all occurrences of x in φ are bound. [ 8 ] pp.142--143
Intuitively, a variable symbol is free in a formula if at no point is it quantified: [ 8 ] pp.142--143 in ∀ y P ( x , y ) , the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.
For example, in ∀ x ∀ y ( P ( x ) → Q ( x , f ( x ), z )) , x and y occur only bound, [ 19 ] z occurs only free, and w is neither because it does not occur in the formula.
Free and bound variables of a formula need not be disjoint sets: in the formula P ( x ) → ∀ x Q ( x ) , the first occurrence of x , as argument of P , is free while the second one, as argument of Q , is bound.
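The inductive characterization of free and bound occurrences translates directly into a recursive function. The sketch below reuses the illustrative Var/Func/Pred/Not/Implies/ForAll classes defined above and reproduces the example just given.

```python
# Free variables computed recursively, reusing the illustrative Var/Func/Pred/
# Not/Implies/ForAll classes from the sketch above.
def free_vars(phi):
    if isinstance(phi, Var):
        return {phi.name}
    if isinstance(phi, (Func, Pred)):
        out = set()
        for arg in phi.args:
            out |= free_vars(arg)
        return out
    if isinstance(phi, Not):
        return free_vars(phi.sub)
    if isinstance(phi, Implies):
        return free_vars(phi.left) | free_vars(phi.right)
    if isinstance(phi, ForAll):
        return free_vars(phi.body) - {phi.var}   # the quantifier binds its variable
    raise TypeError(f"unexpected node: {phi!r}")

# P(x) → ∀x Q(x): the x in P(x) is free, the x in Q(x) is bound.
phi = Implies(Pred("P", (Var("x"),)), ForAll("x", Pred("Q", (Var("x"),))))
print(free_vars(phi))  # {'x'} -- x occurs both free and bound in this formula
```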
A formula in first-order logic with no free variable occurrences is called a first-order sentence . These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil( x ) is true must depend on what x represents. But the sentence ∃ x Phil( x ) will be either true or false in a given interpretation.
In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:
The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written (∀ x )(∀ y )[ x + y = y + x ].
An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics . What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic , but aside from requiring the axiom of choice , game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)
The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model ; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants.
The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃ x P ( x ) {\displaystyle \exists xP(x)} states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers .
Non-logical symbols are interpreted as follows:
A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x {\displaystyle y=x} . The truth value of this formula changes depending on the values that x and y denote.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:
Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema .
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′.
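Over a finite domain of discourse, the T-schema can be turned into a direct recursive evaluator. The sketch below reuses the illustrative AST classes defined earlier; the interpretation (domain, predicate and function tables) is entirely made up for the example.

```python
# Recursive evaluator over a finite domain, reusing the illustrative AST classes
# above. The interpretation (domain, predicate and function tables) is made up.
interpretation = {
    "domain": {0, 1, 2, 3},
    "funcs": {"succ": lambda n: min(n + 1, 3)},      # an invented unary function
    "preds": {"Even": lambda n: n % 2 == 0,
              "Less": lambda a, b: a < b},
}

def eval_term(t, I, mu):
    if isinstance(t, Var):
        return mu[t.name]                 # the variable assignment supplies the value
    return I["funcs"][t.name](*(eval_term(a, I, mu) for a in t.args))

def eval_formula(phi, I, mu):
    if isinstance(phi, Pred):
        return I["preds"][phi.name](*(eval_term(a, I, mu) for a in phi.args))
    if isinstance(phi, Not):
        return not eval_formula(phi.sub, I, mu)
    if isinstance(phi, Implies):
        return (not eval_formula(phi.left, I, mu)) or eval_formula(phi.right, I, mu)
    if isinstance(phi, ForAll):
        return all(eval_formula(phi.body, I, {**mu, phi.var: d}) for d in I["domain"])
    raise TypeError(f"unexpected node: {phi!r}")

# A sentence (no free variables), so the initial assignment does not matter:
# ∀x (Even(x) → Less(x, succ(x)))
sentence = ForAll("x", Implies(Pred("Even", (Var("x"),)),
                               Pred("Less", (Var("x"), Func("succ", (Var("x"),))))))
print(eval_formula(sentence, interpretation, {}))  # True
```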
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M , one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M ; say that for each d in the domain the constant symbol c d is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:
This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
If a sentence φ evaluates to true under a given interpretation M , one says that M satisfies φ; this is denoted [ 20 ] M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ from model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M 's domain to variable symbols of φ". [ 21 ]
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x 1 , ..., x n is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables x 1 , ..., x n . This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀ x 1 ... ∀ x n φ( x 1 , ..., x n ) is satisfied.
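Continuing the finite-interpretation sketch above, the equivalence between satisfying a formula with a free variable and satisfying its universal closure can be checked directly; the formula and interpretation are the invented ones used earlier.

```python
# Reusing the evaluator and interpretation above: a formula with the free
# variable x is satisfied (under this convention) exactly when its universal
# closure is true.
open_formula = Implies(Pred("Even", (Var("x"),)),
                       Pred("Less", (Var("x"), Func("succ", (Var("x"),)))))

satisfied = all(eval_formula(open_formula, interpretation, {"x": d})
                for d in interpretation["domain"])
closure = ForAll("x", open_formula)

assert satisfied == eval_formula(closure, interpretation, {})
print(satisfied)  # True
```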
A formula is logically valid (or simply valid ) if it is true in every interpretation. [ 22 ] These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.
An alternate approach to the semantics of first-order logic proceeds via abstract algebra . This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: cylindric algebra (due to Alfred Tarski and his colleagues), polyadic algebra (due to Paul Halmos ), and predicate functor logic (mainly due to Willard Quine ).
These algebras are all lattices that properly extend the two-element Boolean algebra .
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra . [ 23 ] : 32–33 This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory , including the canonical Zermelo–Fraenkel set theory (ZFC). They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions . [ 24 ] : 803
A first-order theory of a particular signature is a set of axioms , which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable , in which case the theory is called effective . Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory .
Many theories have an intended interpretation , a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models .
A theory is consistent (within a deductive system ) if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.
The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic , where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset ), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class.
There are several difficulties with empty domains, however:
Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems , natural deduction , the sequent calculus , the tableaux method , and resolution . These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but are completely formalized unlike natural-language mathematical proofs .
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective .
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable : if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.
A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution . If t is a term and φ is a formula possibly containing the variable x , then φ[ t / x ] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t , one can conclude φ[ t / x ] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t .)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃ x ( x = y ), in the signature (0,1,+,×,=) of arithmetic. If t is the term " x + 1", the formula φ[ t / y ] is ∃ x ( x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z , so that the formula after substitution is ∃ z ( z = x + 1), which is again logically valid.
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
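A capture-avoiding substitution operation can be sketched on the illustrative AST used earlier, reusing the free_vars function above. The renaming scheme (fresh variables z0, z1, ...) is one simple choice among many, and the example adapts the capture problem just discussed, using ∀ in place of ∃ and a term f(x) in place of x + 1.

```python
# Capture-avoiding substitution phi[t/x] on the illustrative AST used above,
# with fresh variables z0, z1, ... chosen when renaming is needed.
import itertools

def substitute(phi, x, t):
    """Replace free occurrences of variable x in phi by term t, renaming bound
    variables that would otherwise capture a free variable of t."""
    if isinstance(phi, (Var, Func, Pred)):
        return subst_atomic(phi, x, t)
    if isinstance(phi, Not):
        return Not(substitute(phi.sub, x, t))
    if isinstance(phi, Implies):
        return Implies(substitute(phi.left, x, t), substitute(phi.right, x, t))
    if isinstance(phi, ForAll):
        if phi.var == x:                       # x is bound here; nothing to replace
            return phi
        if phi.var in free_vars(t):            # capture would occur: rename first
            used = free_vars(phi.body) | free_vars(t)
            fresh = next(f"z{i}" for i in itertools.count() if f"z{i}" not in used)
            renamed = substitute(phi.body, phi.var, Var(fresh))
            return ForAll(fresh, substitute(renamed, x, t))
        return ForAll(phi.var, substitute(phi.body, x, t))
    raise TypeError(f"unexpected node: {phi!r}")

def subst_atomic(node, x, t):
    if isinstance(node, Var):
        return t if node.name == x else node
    return type(node)(node.name, tuple(subst_atomic(a, x, t) for a in node.args))

# Substituting the term f(x) for y in ∀x P(x, y) forces the bound x to be renamed:
phi = ForAll("x", Pred("P", (Var("x"), Var("y"))))
print(substitute(phi, "y", Func("f", (Var("x"),))))
# ForAll(var='z0', body=Pred(name='P', args=(Var(name='z0'), Func(name='f', args=(Var(name='x'),)))))
```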
A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom , a hypothesis that has been assumed for the derivation at hand or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.
The sequent calculus was developed to study the properties of natural deduction systems. [ 25 ] Instead of working with one formula at a time, it uses sequents , which are expressions of the form:
A 1 , … , A n ⊢ B 1 , … , B k {\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k}}
where A 1 , ..., A n , B 1 , ..., B k are formulas and the turnstile symbol ⊢ {\displaystyle \vdash } is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that ( A 1 ∧ ⋯ ∧ A n ) {\displaystyle (A_{1}\land \cdots \land A_{n})} implies ( B 1 ∨ ⋯ ∨ B k ) {\displaystyle (B_{1}\lor \cdots \lor B_{k})} .
Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬ A {\displaystyle \lnot A} at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D {\displaystyle C\lor D} is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D {\displaystyle C\lor D} and children C and D.
The resolution rule is a single rule of inference that, together with unification , is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving.
The resolution method works only with formulas that are disjunctions of atomic formulas and negated atomic formulas (literals); arbitrary formulas must first be converted to this clausal form through Skolemization and related syntactic transformations. The resolution rule states that from the hypotheses A 1 ∨ ⋯ ∨ A k ∨ C {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor C} and B 1 ∨ ⋯ ∨ B l ∨ ¬ C {\displaystyle B_{1}\lor \cdots \lor B_{l}\lor \lnot C} , the conclusion A 1 ∨ ⋯ ∨ A k ∨ B 1 ∨ ⋯ ∨ B l {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor B_{1}\lor \cdots \lor B_{l}\lor B_{1}\lor \cdots \lor B_{l}} can be obtained.
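The rule itself is easy to state operationally. Below is a minimal sketch in Python of a single resolution step on ground (variable-free) clauses represented as sets of literals; unification, which the full first-order method requires, is deliberately omitted, and the data representation is invented for this illustration.

```python
# A literal is a pair (name, positive); a clause is a frozenset of literals.
# Ground (propositional) resolution only: no variables, no unification.

def resolve(clause_a, clause_b):
    """Return all resolvents of two clauses on any complementary pair of literals."""
    resolvents = []
    for (name, positive) in clause_a:
        complement = (name, not positive)
        if complement in clause_b:
            resolvent = (clause_a - {(name, positive)}) | (clause_b - {complement})
            resolvents.append(frozenset(resolvent))
    return resolvents

# Example: from  A1 v C  and  B1 v not-C , derive  A1 v B1.
c1 = frozenset({("A1", True), ("C", True)})
c2 = frozenset({("B1", True), ("C", False)})
print(resolve(c1, c2))   # [frozenset({('A1', True), ('B1', True)})]
```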
Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form . Some provable identities include:
¬ ∀ x P ( x ) ⇔ ∃ x ¬ P ( x ) {\displaystyle \lnot \forall x\,P(x)\Leftrightarrow \exists x\,\lnot P(x)}
¬ ∃ x P ( x ) ⇔ ∀ x ¬ P ( x ) {\displaystyle \lnot \exists x\,P(x)\Leftrightarrow \forall x\,\lnot P(x)}
∀ x ∀ y P ( x , y ) ⇔ ∀ y ∀ x P ( x , y ) {\displaystyle \forall x\,\forall y\,P(x,y)\Leftrightarrow \forall y\,\forall x\,P(x,y)}
P ∧ ∃ x Q ( x ) ⇔ ∃ x ( P ∧ Q ( x ) ) {\displaystyle P\land \exists x\,Q(x)\Leftrightarrow \exists x\,(P\land Q(x))} , where x is not free in P
P ∨ ∀ x Q ( x ) ⇔ ∀ x ( P ∨ Q ( x ) ) {\displaystyle P\lor \forall x\,Q(x)\Leftrightarrow \forall x\,(P\lor Q(x))} , where x is not free in P
There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality , includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are: [ 26 ] : 198–200
Reflexivity : for each variable x , x = x .
Substitution for functions: for all variables x and y , and any function symbol f , x = y → f (..., x , ...) = f (..., y , ...).
Substitution for formulas: for any variables x and y and any formula φ, if φ′ is obtained by replacing any number of free occurrences of x in φ with y , such that these remain free occurrences of y , then x = y → (φ → φ′).
These are axiom schemas , each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law , "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f , is (equivalent to) a special case of the third schema, using the formula
x = y → ( f (..., x , ...) = f (..., x , ...) → f (..., x , ...) = f (..., y , ...)).
Since x = y is given, and f (..., x , ...) = f (..., x , ...) is true by reflexivity, we have f (..., x , ...) = f (..., y , ...).
Many other properties of equality are consequences of the axioms above, for example symmetry (if x = y then y = x ) and transitivity (if x = y and y = z then x = z ).
An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality . If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b . In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.
If a theory has a binary formula A ( x , y ) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality. For example, in a theory of partial orders with one relation symbol ≤, one could define s = t as an abbreviation for s ≤ t ∧ t ≤ s ; in a set theory with one relation ∈, one may define s = t as an abbreviation for ∀ z ( z ∈ s ⇔ z ∈ t ).
One motivation for the use of first-order logic, rather than higher-order logic , is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.
Gödel's completeness theorem , proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable : it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ.
Unlike propositional logic , first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem .
There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic , which is first-order logic restricted to unary predicate symbols and no function symbols. Other decidable logics with no function symbols include the guarded fragment of first-order logic and two-variable logic . The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics .
The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory , it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ( x ) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models . When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox .
The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. [ 29 ] This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures).
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ( x , y ) of first-order logic, in the logic of graphs , that expresses the idea that there is a path from x to y . Connectedness can, however, be expressed in second-order logic , although not with only existential set quantifiers, since Σ 1 1 {\displaystyle \Sigma _{1}^{1}} also enjoys compactness.
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, together with a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type, roughly: first-order logic is the strongest logic in the class (up to equivalence) that satisfies both the compactness property and the downward Löwenheim–Skolem property; and it is the strongest logic in the class that satisfies the downward Löwenheim–Skolem property and has an effectively enumerable set of valid sentences.
Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C 2 : first-order logic with two variables and the counting quantifiers ∃ ≥ n {\displaystyle \exists ^{\geq n}} and ∃ ≤ n {\displaystyle \exists ^{\leq n}} . [ 30 ]
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical . Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem , the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages , such as FO(.) .
Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic". [ 31 ]
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.
First-order logic can be studied in languages with fewer logical symbols than were described above. For example, since ∃ x φ {\displaystyle \exists x\,\varphi } can be expressed as ¬ ∀ x ¬ φ {\displaystyle \lnot \forall x\,\lnot \varphi } , one of the two quantifiers can be taken as primitive and the other defined from it. Similarly, because the connectives ∨, → and ↔ are definable from ¬ and ∧ (or from other small combinations, or even from a single connective such as the Sheffer stroke ), the set of primitive connectives can be reduced.
Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function . This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts , which have different domains. This is also called typed first-order logic , and the sorts called types (as in data type ), but it is not the same as first-order type theory . Many-sorted first-order logic is often used in the study of second-order arithmetic . [ 33 ]
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic. [ 34 ] : 296–299 One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P 1 ( x ) {\displaystyle P_{1}(x)} and P 2 ( x ) {\displaystyle P_{2}(x)} and the axiom:
∀ x ( P 1 ( x ) ∨ P 2 ( x ) ) ∧ ¬ ∃ x ( P 1 ( x ) ∧ P 2 ( x ) ) {\displaystyle \forall x\,(P_{1}(x)\lor P_{2}(x))\land \lnot \exists x\,(P_{1}(x)\land P_{2}(x))}
Then the elements satisfying P 1 {\displaystyle P_{1}} are thought of as elements of the first sort, and elements satisfying P 2 {\displaystyle P_{2}} as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ ( x ) {\displaystyle \varphi (x)} , one writes:
∃ x ( P 1 ( x ) ∧ φ ( x ) ) {\displaystyle \exists x\,(P_{1}(x)\land \varphi (x))}
Additional quantifiers can be added to first-order logic.
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory .
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted L αβ , where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is L ωω .
In the logic L ∞ω , arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with fewer than κ constituents is known as L κω . For example, L ω 1 ω permits countable conjunctions and disjunctions.
The set of free variables in a formula of L κω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. [ 35 ] In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in L κ∞ , a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic L κλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators. [ 36 ]
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus
∃ x ( P ( x ) ) {\displaystyle \exists x\,(P(x))}
is a legal first-order formula, but
∃ P ( P ( x ) ) {\displaystyle \exists P\,(P(x))}
is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics . The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.
Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. [ 37 ] Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. [ 38 ]
The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct.
Some proof verifiers, such as Metamath , insist on having a complete derivation as input. Others, such as Mizar and Isabelle , take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants . They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, [ 39 ] results are often formalized as a series of lemmas, for which derivations can be constructed separately.
Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification . Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences.
For the problem of model checking , efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see Model checking § First-order logic . | https://en.wikipedia.org/wiki/Predicate_calculus |
Predictability is the degree to which a correct prediction or forecast of a system 's state can be made, either qualitatively or quantitatively.
Causal determinism has a strong relationship with predictability. Perfect predictability implies strict determinism, but lack of predictability does not necessarily imply lack of determinism. Limitations on predictability could be caused by factors such as a lack of information or excessive complexity.
In experimental physics, there are always observational errors in determining variables such as positions and velocities, so perfect prediction is practically impossible. Moreover, in modern quantum mechanics , Werner Heisenberg 's indeterminacy principle puts limits on the accuracy with which such quantities can be known, so perfect predictability is also theoretically impossible.
Laplace's demon is a supreme intelligence who could completely predict the one possible future given the Newtonian dynamical laws of classical physics and perfect knowledge of the positions and velocities of all the particles in the world. In other words, if it were possible to have every piece of data on every atom in the universe from the beginning of time, it would be possible to predict the behavior of every atom into the future. Laplace's determinism is usually thought to be based on his mechanics, but he could not prove mathematically that mechanics is deterministic. Rather, his determinism is based on general philosophical principles, specifically on the principle of sufficient reason and the law of continuity. [ 1 ]
Although the second law of thermodynamics can determine the equilibrium state that a system will evolve to, and steady states in dissipative systems can sometimes be predicted, there exists no general rule to predict the time evolution of systems distanced from equilibrium, e.g. chaotic systems , if they do not approach an equilibrium state. Their predictability usually deteriorates with time and to quantify predictability, the rate of divergence of system trajectories in phase space can be measured ( Kolmogorov–Sinai entropy , Lyapunov exponents ).
In stochastic analysis a random process is a predictable process if it is possible to know the next state from the present time.
The branch of mathematics known as chaos theory focuses on the behavior of systems that are highly sensitive to initial conditions. It suggests that a small change in an initial condition can completely alter the progression of a system. This phenomenon is known as the butterfly effect , the idea that a butterfly flapping its wings in Brazil could set off a tornado in Texas. The nature of chaos theory suggests that the predictability of any system is limited because it is impossible to know all of the minutiae of a system at the present time. In principle, the deterministic systems that chaos theory analyzes can be predicted, but uncertainty in a forecast increases exponentially with elapsed time. [ 2 ]
As documented in, [ 3 ] three major kinds of butterfly effects within Lorenz studies include: the sensitive dependence on initial conditions, [ 4 ] [ 5 ] the ability of a tiny perturbation to create an organized circulation at large distances, [ 6 ] and the hypothetical role of small-scale processes in contributing to finite predictability. [ 7 ] [ 8 ] [ 9 ] The three kinds of butterfly effects are not exactly the same.
In the study of human–computer interaction , predictability is the ability to forecast the consequences of a user action given the current state of the system.
A contemporary example of human-computer interaction manifests in the development of computer vision algorithms for collision-avoidance software in self-driving cars. Researchers at NVIDIA Corporation, [ 10 ] Princeton University, [ 11 ] and other institutions are leveraging deep learning to teach computers to anticipate subsequent road scenarios based on visual information about current and previous states.
Another example of human-computer interaction is computer simulation meant to predict human behavior based on algorithms. For example, researchers at MIT have developed an algorithm intended to predict human behavior; when tested against television shows, it was able to predict the subsequent actions of characters with considerable accuracy. Algorithms and computer simulations like these show promise for the future of artificial intelligence. [ 12 ]
Linguistic prediction is a phenomenon in psycholinguistics occurring whenever information about a word or other linguistic unit is activated before that unit is actually encountered. Evidence from eyetracking , event-related potentials , and other experimental methods indicates that in addition to integrating each subsequent word into the context formed by previously encountered words, language users may, under certain conditions, try to predict upcoming words. Predictability has been shown to affect both text and speech processing, as well as speech production. Further, predictability has been shown to have an effect on syntactic, semantic and pragmatic comprehension.
In the study of biology – particularly genetics and neuroscience – predictability relates to the prediction of biological developments and behaviors based on inherited genes and past experiences.
Significant debate exists in the scientific community over whether or not a person's behavior is completely predictable based on their genetics. For example, a study in Israel showed that judges were more likely to give a lighter sentence if they had eaten more recently. [ 13 ] In addition, studies have shown that individuals smell better to someone with complementary immunity genes, leading to more physical attraction. [ 14 ] Genetics can be examined to determine whether an individual is predisposed to any diseases, and behavioral disorders can often be explained by analyzing defects in genetic code. Scientists who focus on examples like these argue that human behavior is entirely predictable. Those on the other side of the debate argue that genetics can only provide a predisposition to act a certain way and that, ultimately, humans possess the free will to choose whether or not to act.
Animals have significantly more predictable behavior than humans. Driven by natural selection, animals develop mating calls, predator warnings, and communicative dances. One example of these ingrained behaviors is the Belding's ground squirrel, which has developed a specific set of calls that warn nearby squirrels about predators. If a ground squirrel sees a predator on land, it will elicit a trill after it gets to safety, which signals to nearby squirrels that they should stand up on their hind legs and attempt to locate the predator. When a predator is seen in the air, a ground squirrel will immediately call out a long whistle, putting itself in danger but signaling nearby squirrels to run for cover. Through experimentation and examination, scientists have been able to chart behaviors like this and very accurately predict how animals behave in certain situations. [ 15 ]
The study of predictability often sparks debate between those who believe humans maintain complete control over their free-will and those who believe our actions are predetermined. However, it is likely that neither Newton nor Laplace saw the study of predictability as relating to determinism. [ 16 ]
As climate change and extreme weather phenomena become more common, the predictability of climate systems becomes more important. The IPCC notes that predicting detailed future climate interactions is difficult; however, long-term climate forecasts are possible. [ 17 ] [ 18 ]
Over 50 years since Lorenz's 1963 study and a follow-up presentation in 1972, the statement “weather is chaotic” has been well accepted. [ 4 ] [ 5 ] Such a view turns our attention from regularity associated with Laplace's view of determinism to irregularity associated with chaos. In contrast to single-type chaotic solutions, recent studies using a generalized Lorenz model [ 19 ] have focused on the coexistence of chaotic and regular solutions that appear within the same model using the same modeling configurations but different initial conditions. [ 20 ] [ 21 ] The results, with attractor coexistence, suggest that the entirety of weather possesses a dual nature of chaos and order with distinct predictability. [ 22 ]
Using a slowly varying, periodic heating parameter within a generalized Lorenz model, Shen and his co-authors suggested a revised view: “The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons”. [ 23 ]
The spring predictability barrier refers to a period early in the year when making summer predictions about the El Niño–Southern Oscillation (ENSO) is difficult. The reason for this difficulty is unknown, although many theories have been proposed; one suggestion is that it is related to the ENSO transition, when conditions shift more rapidly. [ 24 ]
Predictability in macroeconomics refers most frequently to the degree to which an economic model accurately reflects quarterly data and the degree to which one might successfully identify the internal propagation mechanisms of models. Examples of US macroeconomic series of interest include but are not limited to Consumption, Investment, Real GNP, and Capital Stock. Factors that are involved in the predictability of an economic system include the range of the forecast (is the forecast two years "out" or twenty) and the variability of estimates. Mathematical processes for assessing the predictability of macroeconomic trends are still in development. [ 25 ] | https://en.wikipedia.org/wiki/Predictability |
A predictable serial number attack is a form of security exploit in which the algorithm for generating serial numbers for a particular purpose is guessed, discovered, or reverse engineered , a new serial number is predicted using the algorithm, and the newly generated serial number is then used for a fraudulent purpose, either to obtain an undeserved benefit or to deny service to the legitimate holder of the serial number.
Suppose there is a phone card available for sale that offers telephone service by entering the serial number printed on the card. Alice legitimately purchases a phone card in order to call Bob , and her card has the serial number 0003. The attacker, Mallory , also purchases two phone cards, and notices that the serial numbers printed on her phone cards are 0001 and 0002. After consuming the value on cards 0001 and 0002, Mallory guesses the algorithm used for generating these serial numbers is a simple sequence and predicts that 0003 is a valid serial number, enters 0003 when prompted, and gets additional phone service. When Alice tries to use her card she discovers the value has been stolen from it and it is now worthless.
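The contrast between guessable and hardened serial numbers can be sketched in a few lines of Python; the hash-based scheme shown here anticipates the countermeasure described in the next paragraph, and the salt handling and truncation length are illustrative assumptions rather than a prescribed standard.

```python
import hashlib
import secrets

# Naive scheme: serials are issued sequentially, so an attacker who has seen
# "0001" and "0002" can simply guess that "0003" is also valid.
def naive_serial(counter: int) -> str:
    return f"{counter:04d}"

# Hash-based scheme: the issuer keeps a secret random salt and derives each
# printed serial from SHA-256(salt + internal counter), as described below.
SALT = secrets.token_bytes(32)          # kept secret by the issuer

def hashed_serial(counter: int) -> str:
    digest = hashlib.sha256(SALT + counter.to_bytes(8, "big")).hexdigest()
    return digest[:16].upper()          # truncation length is an illustrative choice

issued = set()
for counter in range(1, 4):
    serial = hashed_serial(counter)
    assert serial not in issued, "collision detected: skip to the next counter value"
    issued.add(serial)
    print(naive_serial(counter), "->", serial)
```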
A common approach to prevent predictable serial number attacks is to use a cryptographic hash function such as SHA-2 to generate the actual serial numbers. Internally, the issuing organization creates a (pseudo-) random nonce as a salt for generating the serial numbers, and keeps it secret. The issuer increments their internal serial number and appends it to the salt, and the computed message digest is used to create the actual serial number. The issuer does have to take care to prevent collisions between existing values so as not to wrongly issue two identical serial numbers. | https://en.wikipedia.org/wiki/Predictable_serial_number_attack |
The Predicted Aligned Error ( PAE ) is a quantitative output produced by AlphaFold , a protein structure prediction system developed by DeepMind . [ 1 ] PAE estimates the expected positional error for each residue in a predicted protein structure if it were aligned to a corresponding residue in the true protein structure. This measurement helps scientists assess the confidence in the relative positions and orientations of different parts of the predicted protein model. [ 2 ]
PAE is presented as a two-dimensional (2D) interactive plot where the color at coordinates (x, y) represents the predicted position error at residue x if the predicted and true structures were aligned on residue y . [ 3 ] Lower PAE values for residue pairs from different domains suggest well-defined relative positions and orientations in the prediction, while higher PAE values indicate uncertainty in the relative positions or orientations.
Users can download the raw PAE data for all residue pairs in a custom JSON format for further analysis or visualization using a programming language such as Python.
In the JSON file, the field predicted_aligned_error provides the PAE value for each residue pair (rounded to the nearest integer), and the field max_predicted_aligned_error gives the maximum possible PAE value, which is capped at 31.75 Å. The PAE is measured in Ångströms.
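As a minimal illustration, the following Python snippet reads a PAE file containing the two fields named above and plots the matrix as a heatmap. The exact layout of downloaded files can vary (some AlphaFold Database downloads wrap the object in a one-element list), and the file name here is hypothetical, so this is a sketch rather than a definitive parser.

```python
import json

import matplotlib.pyplot as plt
import numpy as np

with open("predicted_aligned_error.json") as fh:    # hypothetical file name
    data = json.load(fh)
if isinstance(data, list):                          # some downloads wrap the object in a list
    data = data[0]

pae = np.array(data["predicted_aligned_error"], dtype=float)   # N x N matrix, in Angstroms
max_pae = data.get("max_predicted_aligned_error", 31.75)

plt.imshow(pae, vmin=0.0, vmax=max_pae, cmap="Greens_r")
plt.colorbar(label="Expected position error (Å)")
plt.xlabel("Residue")
plt.ylabel("Residue")
plt.show()
```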
A separately developed 3D viewer of PAE allows for more intuitive visualization. [ 4 ]
Interpretation of PAE values allows scientists to understand the level of confidence in the predicted structure of a protein:
Lower PAE values between residue pairs from different domains indicate that the model predicts well-defined relative positions and orientations for those domains.
Higher PAE values for such residue pairs suggest that the relative positions and/or orientations of these domains in the 3D structure are uncertain and should not be interpreted. [ 5 ]
Although PAE provides valuable information, users should note that it is asymmetric; the PAE value for (x, y) may differ from the value for (y, x), particularly between loop regions with highly uncertain orientations. [ 6 ] Moreover, while AlphaFold can make useful inter-domain predictions, intra-domain prediction accuracy is expected to be more reliable based on CASP14 validation. | https://en.wikipedia.org/wiki/Predicted_Aligned_Error |
The predicted environmental concentration (PEC) is the calculated concentration of a chemical in the environment, estimated on the basis of exposure models such as the European Union System for the Evaluation of Substances (EUSES). It is used in the context of Chemical Safety Assessments (CSA) and referenced in Chemical Safety Reports (CSR).
PECs may be compared with Measured Environmental Concentrations (MEC) if available.
| https://en.wikipedia.org/wiki/Predicted_environmental_concentration |
The predicted no-effect concentration ( PNEC ) is the concentration of a chemical below which no adverse effects of exposure are expected to occur in an ecosystem. PNEC values are intended to be conservative and predict the concentration at which a chemical will likely have no toxic effect. They are not intended to predict the upper limit of concentration of a chemical that has a toxic effect. [ 1 ] [ 2 ] [ 3 ] PNEC values are often used in environmental risk assessment as a tool in ecotoxicology . [ 1 ] [ 3 ] [ 4 ] A PNEC for a chemical can be calculated with acute toxicity or chronic toxicity single-species data, Species Sensitivity Distribution (SSD) multi-species data, field data or model ecosystems data. Depending on the type of data used, an assessment factor is used to account for the confidence of the toxicity data being extrapolated to an entire ecosystem. [ 3 ] [ 5 ]
The use of assessment factors allows for laboratory, single-species and short-term toxicity data to be extrapolated to conservatively predict ecosystem effects and accounts for the uncertainty in the extrapolation. The value of the assessment factor is dependent on the uncertainty of the available data and ranges from 1 to 1000. [ 1 ] [ 6 ] [ 7 ]
Acute toxicity data includes LC50 and EC50 data. This data is frequently screened for quality, relevancy and ideally contains data for species in multiple trophic levels and/or taxonomic groups. [ 1 ] [ 6 ] The lowest LC50 in the compiled database is then divided by the assessment factor to calculate the PNEC for that data. The assessment factor applied to acute toxicity data is typically 1000. [ 1 ] [ 6 ] [ 7 ]
Chronic toxicity data includes NOEC data. The lowest NOEC value in the test dataset is divided by an assessment factor between 10 and 100 dependent on the diversity of test organisms and the amount of data available. If there are more species or data, the assessment factor is lower. [ 1 ] [ 7 ]
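As a small illustration of the assessment-factor arithmetic described above, the following Python sketch derives PNEC values from invented acute and chronic single-species datasets; the concentrations and the chosen factors are purely illustrative.

```python
# Hypothetical single-species toxicity data for one chemical (mg/L).
acute_lc50_ec50 = {"fish": 12.0, "daphnia": 4.5, "algae": 8.0}   # acute endpoints
chronic_noec    = {"fish": 1.2, "daphnia": 0.6, "algae": 0.9}    # chronic endpoints

def pnec_from_acute(data, assessment_factor=1000):
    """PNEC from the lowest acute LC50/EC50 divided by the assessment factor."""
    return min(data.values()) / assessment_factor

def pnec_from_chronic(data, assessment_factor=10):
    """PNEC from the lowest NOEC; the factor (10-100) depends on data coverage."""
    return min(data.values()) / assessment_factor

print(pnec_from_acute(acute_lc50_ec50))   # 4.5 / 1000 = 0.0045 mg/L
print(pnec_from_chronic(chronic_noec))    # 0.6 / 10   = 0.06 mg/L
```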
A PNEC may also be statistically derived from a SSD which is a model of the variability in the sensitivity of multiple species to a single toxicant or other stressor. [ 1 ] [ 8 ] [ 9 ] The hazardous concentration for five percent of the species (HC5) in the SSD is used to derive the PNEC. The HC5 is the concentration at which five percent of the species in the SSD exhibit an effect. [ 10 ] The HC5 is typically divided by an assessment factor of 1-5. [ 6 ] In many cases, SSDs may not exist due to the lack of data on a large number of species. In these cases, the assessment factor approach to derivation of a PNEC should be used. [ 1 ] [ 6 ]
Field data or model ecosystems data includes field toxicity data and mesocosm toxicity. The magnitude of the assessment factor is study-specific in these types of studies. [ 1 ] [ 7 ]
PNEC is used extensively in Europe by the European Chemicals Agency , the Registration, Evaluation, Authorisation and Restriction of Chemicals program and other toxicology agencies to assess environmental risk. [ 1 ] [ 6 ] [ 7 ] [ 11 ] [ 12 ] PNEC values can be used in conjunction with predicted environmental concentration values to calculate a risk characterization ratio (RCR), also called a Risk Quotient (RQ). RCR is equal to the PEC divided by the PNEC for a specific chemical and is a deterministic approach to estimating environmental risk at local or regional scales. [ 13 ] If the PNEC exceeds the PEC, the conclusion is that the chemical poses no environmental risk. [ 14 ]
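A minimal sketch of the risk characterisation ratio described above, with all concentrations invented purely for illustration:

```python
def risk_characterisation_ratio(pec, pnec):
    """RCR (risk quotient) = PEC / PNEC; values below 1 indicate no identified risk."""
    return pec / pnec

pnec = 0.06                      # mg/L, e.g. from a chronic derivation as sketched above
for site, pec in {"river A": 0.012, "river B": 0.30}.items():
    rcr = risk_characterisation_ratio(pec, pnec)
    print(site, round(rcr, 2), "no identified risk" if rcr < 1 else "potential risk")
```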
Derivation of PNEC for use in environmental risk lacks some scientific validity because the assessment factors are derived empirically. [ 7 ] Additionally, PNECs derived from single-species toxicity data also assume that ecosystems are as sensitive as the most sensitive species and that the ecosystem function is dependent on the ecosystem structure. [ 1 ] | https://en.wikipedia.org/wiki/Predicted_no-effect_concentration |
The prediction of crystal properties by numerical simulation has become commonplace in the last 20 years as computers have grown more powerful and theoretical techniques more sophisticated. High accuracy prediction of elastic, electronic, transport and phase properties is possible with modern methods.
Ab initio or first principles calculations are any of a number of software packages making use of density functional theory to solve for the quantum mechanical state of a system. Perfect crystals are an ideal subject for such calculations because of their high periodicity. Since every simulation package will vary in the details of its algorithms and implementations, this page will focus on a methodological overview.
Density functional theory seeks to solve for an approximate form of the electronic density of a system. In general, atoms are split into ionic cores and valence electrons. The ionic cores (nuclei plus non-bonding electrons) are assumed to be stable and are treated as a single object. Each valence electron is treated separately. Thus, for example, a lithium atom is treated as two bodies – an Li + core and one valence electron – while oxygen, with six valence electrons, is treated as seven bodies, namely an O 6+ core and 6e − (the exact core/valence split depends on the pseudopotential chosen).
The “true” ground state of a crystal system is generally unsolvable. However, the variational theorem assures us that any guess as to the electronic state function of a system will overestimate the ground state energy. Thus, by beginning with a suitably parametrized guess and minimizing the energy with respect to each of those parameters, an extremely accurate prediction may be made. The question as to what one's initial guess should be is a topic of active research. [ 1 ]
In the large majority of crystal systems, electronic relaxation times are orders of magnitude shorter than ionic relaxation times. Thus, an iterative scheme is adopted. First, the ions are considered fixed and the electronic state is relaxed by considering the ionic and electron-electron pair potentials. Next, the electronic states are considered fixed and the ions are allowed to move under the influence of the electronic and ion-ion pair potentials. When the decrease in energy between two iterative steps is sufficiently small, the structure of the crystal is considered solved.
A key choice that must be made is how many atoms to explicitly include in one's calculation. In Big-O notation , calculations generally scale as O(N 3 ) where N is the number of combined ions and valence electrons. [ 2 ] For structure calculations, it is generally desirable to choose the smallest number of ions that can represent the structure. For example, NaCl crystallizes in the rock-salt structure, two interpenetrating face-centred cubic lattices. At a first guess, one might construct the conventional cubic cell – 4 Na and 4 Cl – as one's unit cell. This will give the correct answer but is computationally wasteful. By choosing appropriate coordinates, one might simulate it with just two atoms: 1 Na and 1 Cl.
Crystal structure calculations rely on periodic boundary conditions . That is, the chosen cell is assumed to sit in the midst of an infinite lattice of identical cells. By taking the 1 Na, 1 Cl cell and copying it many times along each of the crystal axes, the same superstructure is simulated as with the larger conventional cell, but at much reduced computational cost.
Only a few lists of information will be output from a calculation, in general. For the ions, the position, velocity and net force on each ion are recorded at each step. For electrons, the guess as to the electronic state function may be recorded as well. Finally, the total energy of the system is recorded. From these three types of information, we may deduce a number of properties.
Unit cell parameters (a,b,c,α,β,γ) can be computed from the final relaxed positions of the ions. [ 3 ] In a NaCl calculation, the final position of the Na ion might be (0,0,0) in picometer Cartesian coordinates and the final position of the Cl ion might be (282,282,282). From this, we see that the lattice constant would be 564 pm. For non-orthorhombic systems, the determination of cell parameters might be more complicated, but many ab-initio numerical packages have utilities to make this calculation simpler.
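A minimal sketch in Python with NumPy of recovering the six cell parameters from the three lattice vectors produced by a relaxation; the vectors below are illustrative values for a cubic rock-salt cell, not the output of any particular package.

```python
import numpy as np

def cell_parameters(lattice):
    """Return (a, b, c, alpha, beta, gamma) from a 3x3 matrix of row lattice vectors."""
    a_vec, b_vec, c_vec = lattice
    a, b, c = (np.linalg.norm(v) for v in (a_vec, b_vec, c_vec))
    alpha = np.degrees(np.arccos(np.dot(b_vec, c_vec) / (b * c)))   # angle between b and c
    beta  = np.degrees(np.arccos(np.dot(a_vec, c_vec) / (a * c)))   # angle between a and c
    gamma = np.degrees(np.arccos(np.dot(a_vec, b_vec) / (a * b)))   # angle between a and b
    return a, b, c, alpha, beta, gamma

# Illustrative relaxed conventional cell for rock-salt NaCl (picometres).
lattice = np.array([[564.0, 0.0, 0.0],
                    [0.0, 564.0, 0.0],
                    [0.0, 0.0, 564.0]])
print(cell_parameters(lattice))   # (564.0, 564.0, 564.0, 90.0, 90.0, 90.0)
```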
Once the lattice cell parameters are known, patterns for single crystal or powder diffraction can be readily predicted via Bragg's Law . [ 4 ]
The temperature of the system can be estimated by use of the Equipartition Theorem , with three degrees of freedom for each ion. Since ionic velocities are generally recorded at each step in the numerical simulation, the average kinetic energy of each ion is easy to calculate. There exist schemes which attempt to control the temperature of the simulation by, e.g. enforcing each ion to have exactly the kinetic energy predicted by the Equipartition Theorem ( Berendsen thermostat ) or by allowing the system to exchange energy and momentum with a (more massive) fictitious enclosing system ( Nose-Hoover thermostat ).
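A minimal sketch of the equipartition estimate in Python; the masses and velocities are illustrative stand-ins for the quantities a simulation package records at each step.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def instantaneous_temperature(masses_kg, velocities_m_s):
    """Equipartition estimate: (1/2) * sum(m v^2) = (3/2) * N * k_B * T for N ions."""
    kinetic = 0.5 * np.sum(masses_kg * np.sum(velocities_m_s**2, axis=1))
    n_ions = len(masses_kg)
    return 2.0 * kinetic / (3.0 * n_ions * K_B)

# Two ions with illustrative masses (kg, roughly Na and Cl) and velocities (m/s).
masses = np.array([3.82e-26, 5.89e-26])
velocities = np.array([[450.0, -120.0, 300.0],
                       [-290.0, 80.0, -210.0]])
print(instantaneous_temperature(masses, velocities))
```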
The net force on each ion is generally calculated explicitly at each numerical step. From this, the stress tensor of the system can be calculated and usually is calculated by the numerical package. By varying the convergence criteria, one can either seek a lowest energy structure or a structure that produces a desired stress tensor. Thus, high pressures can be simulated as easily as ambient pressures. [ 5 ]
The Young's modulus of a mineral can be predicted by varying one cell parameter at a time and observing the evolution of the stress tensor. [ 6 ] Because the raw output of a simulation includes energy and volume, the integrated version of the Birch-Murnaghan equation of state is often used to determine bulk modulus .
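For reference, the third-order Birch–Murnaghan energy–volume relation commonly fitted to the computed (energy, volume) pairs is E ( V ) = E 0 + 9 V 0 B 0 16 { [ ( V 0 / V ) 2 / 3 − 1 ] 3 B 0 ′ + [ ( V 0 / V ) 2 / 3 − 1 ] 2 [ 6 − 4 ( V 0 / V ) 2 / 3 ] } {\displaystyle E(V)=E_{0}+{\frac {9V_{0}B_{0}}{16}}\left\{\left[(V_{0}/V)^{2/3}-1\right]^{3}B_{0}'+\left[(V_{0}/V)^{2/3}-1\right]^{2}\left[6-4(V_{0}/V)^{2/3}\right]\right\}} , where E 0 and V 0 are the equilibrium energy and volume, and the bulk modulus B 0 and its pressure derivative B 0 ′ are obtained from the fit.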
The electronic density functional is explicitly used in the calculation of the electronic ground state. Packages such as VASP have an option to calculate the electronic density of states per eV to facilitate the prediction of conduction bands and band gaps . [ 7 ]
The Green-Kubo relations can be used to calculate the thermal transport properties of a mineral. Since positions, velocities and forces are stored at each numerical step, the microscopic heat flux can be constructed and its time autocorrelation computed. The integral of this autocorrelation is related to the thermal conductivity, the coefficient appearing in Fourier's law of heat conduction.
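For reference, a commonly used Green–Kubo expression for the lattice thermal conductivity (conventions for defining the microscopic heat flux J vary between packages) is κ = 1 3 V k B T 2 ∫ 0 ∞ ⟨ J ( t ) ⋅ J ( 0 ) ⟩ d t {\displaystyle \kappa ={\frac {1}{3Vk_{B}T^{2}}}\int _{0}^{\infty }\langle \mathbf {J} (t)\cdot \mathbf {J} (0)\rangle \,dt} , where V is the simulation cell volume and T the temperature.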
By recording the ionic positions at each time step, one can observe how far, on average, each ion has moved from its original position. [ 8 ] The mean squared displacement of each ion type is related to the diffusion coefficient for a particle undergoing Brownian motion . | https://en.wikipedia.org/wiki/Prediction_of_crystal_properties_by_numerical_simulation |
A predictive adaptive response (PAR) is a developmental trajectory taken by an organism during a period of developmental plasticity in response to perceived environmental cues. [ 1 ] This PAR does not confer an immediate advantage to the developing organism; however, if the PAR correctly anticipates the postnatal environment it will be advantageous in later life, if the environment the organism is born into differs from that anticipated by the PAR it will result in a mismatch. [ 2 ] PAR mechanisms were first recognized in research done on human fetuses that investigated whether poor nutrition results in the inevitable diagnosis of Type 2 diabetes in later life. [ 3 ] PARs are thought to occur through epigenetic mechanisms that alter gene expression, such as DNA methylation and histone modification , and do not involve changes to the DNA sequence of the developing organism. [ 4 ] Examples of PARs include greater helmet development in Daphnia cucullata in response to maternal exposure to predator pheromones, [ 5 ] rats exposed to glucocorticoid during late gestation led to an intolerance to glucose as adults, [ 6 ] and coat thickness determination in vole pups by the photoperiod length experienced by the mother. [ 7 ] Two hypotheses to explain PAR are the " thrifty phenotype " hypothesis and the developmental plasticity hypothesis.
The thrifty phenotype hypothesis is the idea that if an organism suffers from inadequate nutrition in fetal development it will subsequently be predisposed to certain phenotypic outcomes as an adult. A study done examining glucose tolerance of individuals born during a famine in the Netherlands in 1944-1945 favors the “thrifty phenotype” hypothesis. [ 8 ] The results of the experiment showed that exposure to famine, particularly in late gestation, led to a decrease in the glucose tolerance of the adults. [ 8 ] Other studies on humans have shown cardiovascular and diabetes mortality has been shown to correspond to the nutrition uptake of the parents and grandparents of an offspring during their years before puberty, [ 9 ] hypertension in both sexes is the highest in individuals that had been small babies with large placentas, [ 10 ] and larger female babies have decreased ovarian suppression compared to smaller babies after intermediate levels of activity in adulthood. [ 11 ] All these studies support the thrifty phenotype hypothesis because the prenatal environment determined the phenotype that would be expressed later in life.
Another proposed hypothesis for the presence of PARs is the developmental plasticity hypothesis. A longitudinal study performed in Helsinki, Finland investigated whether catch-up growth of smaller children increased the risk of coronary heart disease later in life. [ 12 ] The results of this study coincide with the developmental plasticity hypothesis because as the nutrition of the small participants improved after birth, these undernourished small individuals grew at a quicker rate and had an increased chance of coronary heart disease. [ 12 ] Another study further confirms the longitudinal study performed in Finland by showing that low weight children develop visceral fat during the catch-up growth period which can potentially result in diabetes later in life. [ 13 ] Infants that have a low birth weight have been shown to have a reduction in functioning cells, which would instantly have a negative effect on their adult life. [ 14 ] Additionally, a study testing drastic changes in childhood body-mass index showed that after two years of age, thin infants who have a comparatively large body-mass index from their birth weight are associated with disorders such as diabetes. [ 15 ] The developmental plasticity hypothesis is apparent in each of these findings because the post birth development determines the health of the individual during adulthood.
Continued research into predictive adaptive responses has the potential to gain crucial insight into the reasons diseases such as diabetes are so widespread. [ 16 ] | https://en.wikipedia.org/wiki/Predictive_adaptive_response |
Predictive engineering analytics ( PEA ) is a development approach for the manufacturing industry that helps with the design of complex products (for example, products that include smart systems ). It concerns the introduction of new software tools, the integration between those, and a refinement of simulation and testing processes to improve collaboration between analysis teams that handle different applications. This is combined with intelligent reporting and data analytics. The objective is to let simulation drive the design, to predict product behavior rather than to react on issues which may arise, and to install a process that lets design continue after product delivery.
In a classic development approach, manufacturers deliver discrete product generations. Before bringing those to market, they use extensive verification and validation processes, usually by combining several simulation and testing technologies. But this approach has several shortcomings when looking at how products are evolving. Manufacturers in the automotive industry , the aerospace industry , the marine industry or any other mechanical industry all share similar challenges: they have to re-invent the way they design to be able to deliver what their customers want and buy today. [ 1 ]
Products include, besides the mechanics, ever more electronics, software and control systems . Those help to increase performance for several characteristics, such as safety, comfort, fuel economy and many more. Designing such products using a classic approach is usually ineffective. A modern development process should be able to predict the behavior of the complete system for all functional requirements, including physical aspects, from the very beginning of the design cycle. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
To reduce costs or improve fuel economy, manufacturers need to continually consider adopting new materials and corresponding manufacturing methods. [ 10 ] [ 11 ] That makes product development more complex, as engineers cannot rely on decades of experience as they did when working with traditional materials, such as steel and aluminium, and traditional manufacturing methods, such as casting . New materials, such as composites , behave differently when it comes to structural behavior, thermal behavior, fatigue behavior or noise insulation for example, and require dedicated modeling.
On top of that, as design engineers do not always know all manufacturing complexities that come with using these new materials, it is possible that the "product as manufactured" is different from the "product as designed". Of course all changes need to be tracked, and possibly even an extra validation iteration needs to be done after manufacturing. [ 12 ] [ 13 ]
Today's products include many sensors that allow them to communicate with each other, and to send feedback to the manufacturer. Based on this information, manufacturers can send software updates to continue optimizing behavior, or to adapt to a changing operational environment. Products will create the internet of things , and manufacturers should be part of it. A product "as designed" is never finished, so development should continue when the product is in use. This evolution is also referred to as Industry 4.0 , [ 14 ] or the fourth industrial revolution. It challenges design teams, as they need to react quickly and make behavioral predictions based on an enormous amount of data. [ 15 ]
The ultimate intelligence a product can have, is that it remembers the individual behavior of its operator, and takes that into consideration. In this way, it can for example anticipate certain actions, predict failure or maintenance, or optimize energy consumption in a self-regulating manner. That requires a predictive model inside the product itself, or accessible via cloud. This one should run very fast and should behave exactly the same as the actual product. It requires the creation of a digital twin : a replica of the product that remains in-sync over its entire product lifecycle . [ 16 ] [ 17 ]
Consumers today can get easy access to products designed in any part of the world. That puts enormous pressure on time-to-market , cost and product quality. This trend has been going on for decades, but with people making ever more buying decisions online, it has become more relevant than ever. Products can easily be compared in terms of price and features on a global scale, and reactions on forums and social media can be very grim when product quality is not optimal. This comes on top of the fact that in different parts of the world, consumers have different preferences, or even different standards and regulations apply.
As a result, modern development processes should be able to convert very local requirements into a global product definition, which then has to be rolled out locally again, potentially with part of the work being done by engineers in local affiliates. That calls for a solid, globally operating product lifecycle management system that starts with requirements definition. And the design process should have the flexibility to effectively predict product behavior and quality for various market needs. [ 18 ]
Dealing with these challenges is exactly the aim of a predictive engineering analytics approach to product development. It refers to a combination of deployed tools and well-aligned processes. Manufacturers gradually deploy the following methods and technologies, to the extent that their organization allows it and their products require it:
In this multi-disciplinary, simulation-based approach, the global design is considered as a collection of mutually interacting subsystems from the very beginning. From the earliest stages on, the chosen architecture is virtually tested for all critical functional performance aspects simultaneously. These simulations use scalable modeling techniques, so that components can be refined as data becomes available. Closing the loop happens on two levels:
Closed-loop, systems-driven product development aims at reducing test-and-repair. Manufacturers implement this approach in pursuit of designing right the first time. [ 19 ] [ 20 ]
1D system simulation, also referred to as 1D CAE or mechatronics system simulation, allows scalable modeling of multi-domain systems. The full system is presented in a schematic way, by connecting validated analytical modeling blocks of electrical, hydraulic, pneumatic and mechanical subsystems (including control systems). It helps engineers predict the behavior of concept designs of complex mechatronics, either transient or steady-state .
Manufacturers often have validated libraries available that contain predefined components for different physical domains; if not, specialized software suppliers can provide them. Using those, engineers can make concept predictions very early, even before any computer-aided design (CAD) geometry is available. During later stages, parameters can then be adapted.
1D system simulation calculations are very efficient. The components are analytically defined and have input and output ports. Causality is created by connecting the inputs of one component to the outputs of another (and vice versa). Models can have various degrees of complexity, and can reach very high accuracy as they evolve. Some model versions may allow real-time simulation , which is particularly useful during control systems development or as part of built-in predictive functionality. [ 21 ]
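As a rough illustration of how such analytically defined, port-connected blocks are evaluated, the sketch below lumps a DC motor (electrical domain) and a rotating load (mechanical domain) into two coupled first-order equations and integrates them over time. It is a minimal sketch with made-up parameter values, not the modeling approach of any particular 1D CAE product.

```python
# Minimal 1D (lumped-parameter) multi-domain sketch: a DC motor (electrical domain)
# driving a rotating inertia (mechanical domain). Parameter values are illustrative only.
from scipy.integrate import solve_ivp

R, L = 1.0, 0.5e-3        # armature resistance [ohm] and inductance [H]
Kt = Ke = 0.05            # torque and back-EMF constants
J, b = 2e-4, 1e-4         # load inertia [kg m^2] and viscous friction [N m s]
V = 12.0                  # supply voltage [V]

def motor_load(t, x):
    i, omega = x                          # state: armature current, angular speed
    di = (V - R * i - Ke * omega) / L     # electrical equation
    domega = (Kt * i - b * omega) / J     # mechanical equation
    return [di, domega]

sol = solve_ivp(motor_load, (0.0, 0.5), [0.0, 0.0], max_step=1e-3)
print(f"speed approached after 0.5 s: {sol.y[1, -1]:.1f} rad/s")
```

A real 1D system simulation tool assembles many such blocks automatically from a schematic, but the transient or steady-state result is obtained in essentially this way.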
3D simulation or 3D CAE is usually applied at a more advanced stage of product development than 1D system simulation, and can account for phenomena that cannot be captured in 1D models. The models can evolve into highly detailed representations that are very application-specific and can be very computationally intensive.
3D simulation or 3D CAE technologies were already essential in classic development processes for verification and validation, often proving their value by speeding up development and avoiding late-stage changes. They remain indispensable in the context of predictive engineering analytics, where they become a driving force in product development. Software suppliers put great effort into enhancements, adding new capabilities and increasing performance on the modeling, process and solver side. While such tools are generally based on a single common platform, solution bundles are often provided to cater for certain functional or performance aspects, while industry knowledge and best practices are provided to users in application verticals. These improvements should allow 3D simulation to keep pace with ever shorter product design cycles. [ 22 ] [ 23 ] [ 24 ]
As the closed-loop, systems-driven product development approach requires concurrent development of the mechanical system and controls, strong links must exist between 1D simulation, 3D simulation and control algorithm development. Software suppliers achieve this by offering co-simulation capabilities for Model-in-the-Loop (MiL), Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) processes. [ 25 ] [ 26 ]
Already when evaluating potential architectures, 1D simulation should be combined with models of the control software, as the electronic control unit (ECU) will play a crucial role in achieving and maintaining the right balance between functional performance aspects when the product operates. During this phase, engineers cascade the design objectives down to precise targets for subsystems and components, using multi-domain optimization and design trade-off techniques. The controls need to be included in this process. By combining them with the system models in MiL simulations, potential algorithms can be validated and selected.
In practice, MiL involves co-simulation between virtual controls from dedicated controller modeling software and scalable 1D models of the multi-physical system. This provides the right combination of accuracy and calculation speed for investigation of concepts and strategies, as well as controllability assessment. [ 27 ] [ 28 ]
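A toy sketch of such a MiL loop is given below: a virtual PI speed controller (standing in for the controller model) is co-simulated step by step with the same illustrative 1D motor/load equations used in the sketch above. The gains, set-point and fixed co-simulation step are assumptions chosen for illustration only.

```python
# Toy Model-in-the-Loop sketch: a virtual PI controller co-simulated with a 1D plant model.
import numpy as np

dt, t_end = 1e-4, 1.0                 # co-simulation step and duration [s]
Kp, Ki = 0.1, 2.0                     # assumed PI gains (controller model)
target = 150.0                        # speed set-point [rad/s]
R, L, Kt, Ke, J, b = 1.0, 0.5e-3, 0.05, 0.05, 2e-4, 1e-4

i = omega = integ = 0.0
for _ in range(int(t_end / dt)):
    # controller model: PI law computing the supply voltage from the speed error
    err = target - omega
    u = Kp * err + Ki * integ
    V = float(np.clip(u, 0.0, 12.0))
    if V == u:                        # simple anti-windup: integrate only when unsaturated
        integ += err * dt
    # 1D plant model: one explicit-Euler step of the electrical and mechanical equations
    di = (V - R * i - Ke * omega) / L
    domega = (Kt * i - b * omega) / J
    i, omega = i + di * dt, omega + domega * dt

print(f"speed after {t_end} s: {omega:.1f} rad/s (set-point {target} rad/s)")
```

In an actual MiL setup, the controller block would come from dedicated controller modeling software and the plant from a validated 1D library, exchanging signals through a co-simulation interface rather than sharing one script.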
After the conceptual control strategy has been decided, the control software is further developed while constantly taking the overall system functionality into consideration. The controller modeling software can generate new embedded C code and integrate it with any legacy C code for further testing and refinement.
Using SiL validation on a global, full-system multi-domain model helps anticipate the conversion from floating point to fixed point once the code is integrated in the hardware, and helps refine gain scheduling when the controller action needs to be adjusted to operating conditions.
SiL is a closed-loop simulation process to virtually verify, refine and validate the controller in its operational environment, and includes detailed 1D and/or 3D simulation models. [ 29 ] [ 30 ]
During the final stages of controls development, when the production code is integrated in the ECU hardware, engineers further verify and validate using extensive and automated HiL simulation. The real ECU hardware is combined with a downsized version of the multi-domain global system model, running in real time. This HiL approach allows engineers to complete upfront system and software troubleshooting to limit the total testing and calibration time and cost on the actual product prototype.
During HiL simulation, engineers verify whether regulation, security and failure tests on the final product can be carried out without risk. They investigate the interaction between several ECUs if required, and they make sure that the software is robust and provides quality functionality under every circumstance. By replacing the global system model running in real time with a more detailed version, engineers can also include pre-calibration in the process. These detailed models are usually available anyway, since controls development happens in parallel to global system development. [ 31 ] [ 32 ] [ 33 ]
Evolving from verification and validation to predictive engineering analytics means that the design process has to become more simulation-driven. Physical testing remains a crucial part of that process, both for validating simulation results and for testing final prototypes, which will always be required prior to product sign-off. The scale of this task becomes even bigger than before, as more conditions and parameter combinations need to be tested, in a more integrated and complex measurement system that can combine multiple physical aspects as well as control systems.
In other development stages as well, combining test and simulation in a well-aligned process is essential for successful predictive engineering analytics. [ 34 ]
Modal testing or experimental modal analysis (EMA) was already essential in verification and validation of pure mechanical systems. It is a well-established technology that has been used for many applications, such as structural dynamics , vibro-acoustics , vibration fatigue analysis, and more, often to improve finite element models through correlation analysis and model updating . The context was however very often trouble-shooting.
As part of predictive engineering analytics, modal testing has to evolve, delivering results that increase simulation realism and handle the multi-physical nature of modern, complex products. Testing has to help define realistic model parameters, boundary conditions and loads. Besides mechanical parameters, different quantities need to be measured, and testing also needs to be capable of validating multi-body models and 1D multi-physical simulation models. In general, a whole new range of testing capabilities (some modal-based, some not) in support of simulation becomes important, and much earlier in the development cycle than before. [ 35 ] [ 36 ] [ 37 ]
As the number of parameters and their mutual interaction explodes in complex products, testing efficiency is crucial, both in terms of instrumentation and definition of critical test cases. A good alignment between test and simulation can greatly reduce the total test effort and boost productivity.
Simulation can help to analyze upfront which locations and parameters are most effective to measure for a certain objective. It also allows engineers to investigate the coupling between certain parameters, so that the number of sensors and test conditions can be minimized. [ 38 ]
On top of that, simulation can be used to derive certain parameters that cannot be measured directly. Here again, a close alignment between simulation and testing activities is a must. Especially 1D simulation models can open the door to a large number of new parameters that cannot be directly accessed with sensors. [ 39 ]
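A small sketch of this kind of virtual sensing is shown below: the load torque on a shaft is not measured directly, but it can be reconstructed from measured current and speed signals through the model equations. The signal shapes and parameter values are placeholders chosen for illustration, not a prescribed workflow.

```python
# Virtual sensing sketch: derive an unmeasured quantity (shaft load torque) from
# measured signals using 1D model parameters.
import numpy as np

Kt, J, b = 0.05, 2e-4, 1e-4            # model parameters (illustrative)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)

# 'Measured' signals would normally come from the data acquisition system;
# here they are synthetic placeholders.
i_meas = np.clip(5.0 * t, 0.0, 2.0)               # armature current [A]
omega_meas = 180.0 * (1.0 - np.exp(-t / 0.08))    # shaft speed [rad/s]

torque_airgap = Kt * i_meas                                     # from the electrical side
torque_load = torque_airgap - b * omega_meas - J * np.gradient(omega_meas, dt)

print(f"estimated mean load torque: {torque_load.mean():.4f} N m")
```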
As complex products are in fact combinations of subsystems which are not necessarily developed concurrently, systems and subsystems development increasingly requires setups that combine hardware, simulation models and measurement input. These hybrid modeling techniques allow realistic real-time evaluation of system behavior very early in the development cycle. Obviously this requires dedicated technologies as well as a very good alignment between simulation (both 1D and 3D) and physical testing. [ 40 ] [ 41 ] [ 42 ]
Tomorrow's products will live a life after delivery. They will include predictive functionalities based on system models, adapt to their environment, feed information back to design, and more. From this perspective, design and engineering are more than turning an idea into a product. They are an essential part of the digital thread through the entire product value chain , from requirements definition to product in use.
Closing the loop between design and engineering on one hand, and product in use on the other, requires that all steps are tightly integrated in a product lifecycle management software environment. Only this can enable traceability between requirements, functional analysis and performance verification, as well as analytics of use data in support of design. It will allow models to become digital twins of the actual product. They remain in-sync, undergoing the same parameter changes and adapting to the real operational environment. [ 43 ] [ 44 ] [ 45 ] | https://en.wikipedia.org/wiki/Predictive_engineering_analytics |
Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance , because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item. [ 1 ] [ 2 ]
The main appeal of predictive maintenance is to allow convenient scheduling of corrective maintenance , and to prevent unexpected equipment failures. By taking into account measurements of the state of the equipment, maintenance work can be better planned (spare parts, people, etc.) and what would have been "unplanned stops" are transformed to shorter and fewer "planned stops", thus increasing plant availability. Other potential advantages include increased equipment lifetime, increased plant safety, fewer accidents with negative impact on environment, and optimized spare parts handling.
Predictive maintenance differs from preventive maintenance because it takes into account the current condition of equipment (with measurements), instead of average or expected life statistics, to predict when maintenance will be required. Machine learning approaches are adopted to forecast the equipment's future states. [ 3 ]
Some of the main components that are necessary for implementing predictive maintenance are data collection and preprocessing , early fault detection , fault detection, time to failure prediction, and maintenance scheduling and resource optimization. [ 4 ] Predictive maintenance has been considered to be one of the driving forces for improving productivity and one of the ways to achieve " just-in-time " in manufacturing. [ 5 ]
Predictive maintenance evaluates the condition of equipment by performing periodic (offline) or continuous (online) equipment condition monitoring . The ultimate goal of the approach is to perform maintenance at a scheduled point in time when the maintenance activity is most cost-effective and before the equipment's performance drops below a threshold. This results in a reduction in unplanned downtime costs due to failure, where costs can run to hundreds of thousands per day depending on the industry. [ 6 ] In energy production, in addition to loss of revenue and component costs, fines can be levied for non-delivery, increasing costs even further. This is in contrast to time- and/or operation count-based maintenance, where a piece of equipment gets maintained whether it needs it or not. Time-based maintenance is labor intensive, ineffective in identifying problems that develop between scheduled inspections, and therefore not cost-effective.
The "predictive" component of predictive maintenance stems from the goal of predicting the future trend of the equipment's condition. This approach uses principles of statistical process control to determine at what point in the future maintenance activities will be appropriate.
Most predictive inspections are performed while equipment is in service, thereby minimizing disruption of normal system operations. Adoption of predictive maintenance can result in substantial cost savings and higher system reliability . Prolonged repairs remain a significant challenge: extended downtime, increased mean time to repair (MTTR) and production losses affect profitability, disrupt service continuity and reduce customer satisfaction, and the problem grows as equipment ages and maintenance requirements intensify.
Reliability-centered maintenance emphasizes the use of predictive maintenance techniques in addition to traditional preventive measures. When properly implemented, it provides companies with a tool for achieving lowest asset net present costs for a given level of performance and risk. [ 7 ]
One goal is to transfer the predictive maintenance data to a computerized maintenance management system so that the equipment condition data is sent to the right equipment object to trigger maintenance planning, work order execution, and reporting. [ 8 ] Unless this is achieved, the predictive maintenance solution is of limited value, at least if it is implemented in a medium to large plant with tens of thousands of pieces of equipment. In 2010, the mining company Boliden implemented a combined distributed control system and predictive maintenance solution integrated with the plant's computerized maintenance management system on an object-to-object level, transferring equipment data using protocols such as Highway Addressable Remote Transducer Protocol , IEC 61850 and OLE for Process Control .
To evaluate equipment condition, predictive maintenance utilizes nondestructive testing technologies such as infrared , acoustic (partial discharge and airborne ultrasonic), corona detection, vibration analysis , sound level measurements, oil analysis , and other specific online tests. A new approach in this area is to utilize measurements on the actual equipment in combination with measurement of process performance, measured by other devices, to trigger equipment maintenance. This is primarily available in collaborative process automation systems (CPAS). Site measurements are often supported by wireless sensor networks to reduce the wiring cost.
Vibration analysis is most productive on high-speed rotating equipment and can be the most expensive component of a PdM program to get up and running. Vibration analysis, when properly done, allows the user to evaluate the condition of equipment and avoid failures. The latest generation of vibration analyzers comprises more capabilities and automated functions than its predecessors. Many units display the full vibration spectrum of three axes simultaneously, providing a snapshot of what is going on with a particular machine. But despite such capabilities, not even the most sophisticated equipment successfully predicts developing problems unless the operator understands and applies the basics of vibration analysis. [ 9 ]
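The core step behind most vibration analysis is a frequency-domain view of the measured signal, in which fault-related tones (imbalance at shaft speed, bearing defect frequencies, gear-mesh harmonics) appear as spectral peaks. The sketch below, using made-up shaft-speed and sampling values and a synthetic signal, shows only that basic step of turning a time record into a spectrum and listing its dominant lines; it is not a substitute for a full analyzer or for trained interpretation.

```python
# Basic vibration-spectrum sketch: synthesize a signal with an imbalance tone and a
# hypothetical defect tone, compute its amplitude spectrum, and report the largest peaks.
import numpy as np

fs = 5000.0                              # sampling rate [Hz] (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
shaft_hz, fault_hz = 25.0, 162.0         # 1x shaft speed and a hypothetical defect frequency
signal = (1.0 * np.sin(2 * np.pi * shaft_hz * t)
          + 0.3 * np.sin(2 * np.pi * fault_hz * t)
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for k in np.argsort(spectrum)[-3:][::-1]:   # three largest spectral lines
    print(f"{freqs[k]:7.1f} Hz  amplitude {spectrum[k]:.3f}")
```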
In certain situations, strong background noise interferences from several competing sources may mask the signal of interest and hinder the industrial applicability of vibration sensors . Consequently, motor current signature analysis (MCSA) is a non-intrusive alternative to vibration measurement which has the potential to monitor faults from both electrical and mechanical systems.
Remote visual inspection is the first non-destructive testing technique. It provides a cost-efficient primary assessment. Essential information and defects can be deduced from the external appearance of the piece, such as folds, breaks, cracks, and corrosion. Remote visual inspection has to be carried out in good conditions with sufficient lighting (at least 350 lux). When the part of the piece to be inspected is not directly accessible, an instrument made of mirrors and lenses called an endoscope is used. Hidden defects with external irregularities may indicate a more serious defect inside. [ citation needed ]
Acoustical analysis can be done on a sonic or ultrasonic level. New ultrasonic techniques for condition monitoring make it possible to "hear" friction and stress in rotating machinery, which can predict deterioration earlier than conventional techniques. [ 10 ] Ultrasonic technology is sensitive to high-frequency sounds that are inaudible to the human ear and distinguishes them from lower-frequency sounds and mechanical vibration. Machine friction and stress waves produce distinctive sounds in the upper ultrasonic range.
Changes in these friction and stress waves can suggest deteriorating conditions much earlier than technologies such as vibration or oil analysis. With proper ultrasonic measurement and analysis, it is possible to differentiate normal wear from abnormal wear, physical damage, imbalance conditions, and lubrication problems, based on a direct relationship between the asset and its operating conditions.
Sonic monitoring equipment is less expensive, but it also has fewer uses than ultrasonic technologies. Sonic technology is useful only on mechanical equipment, while ultrasonic equipment can detect electrical problems and is more flexible and reliable in detecting mechanical problems.
Infrared monitoring and analysis has the widest range of application (from high- to low-speed equipment), and it can be effective for spotting both mechanical and electrical failures; some consider it to currently be the most cost-effective technology.
Oil analysis is a long-term program that, where relevant, can eventually be more predictive than any of the other technologies. It can take years for a plant's oil program to reach this level of sophistication and effectiveness.
Analytical techniques performed on oil samples can be classified into two categories: used oil analysis and wear particle analysis. Used oil analysis determines the condition of the lubricant itself, determines the quality of the lubricant, and checks its suitability for continued use. Wear particle analysis determines the mechanical condition of machine components that are lubricated. Through wear particle analysis, analysts can identify the composition of the solid material present and evaluate particle type, size, concentration, distribution, and morphology. [ 11 ]
The use of model-based condition monitoring for predictive maintenance programs is becoming increasingly popular. This method involves spectral analysis of the motor's current and voltage signals, and then compares the measured parameters to a known and learned model of the motor to diagnose various electrical and mechanical anomalies. This process of "model-based" condition monitoring was originally designed and used on NASA's Space Shuttle to monitor and detect developing faults in the Space Shuttle's main engine. [ 12 ] It allows for the automation of data collection and analysis tasks, providing round-the-clock condition monitoring and warnings about faults as they develop. Other predictive maintenance methods are related to smart testing strategies. [ 13 ] | https://en.wikipedia.org/wiki/Predictive_maintenance
Predictive microbiology is the area of food microbiology where controlling factors in foods and responses of pathogenic and spoilage microorganisms are quantified and modelled by mathematical equations. [ 1 ]
It is based on the thesis that microorganisms' growth and environment are reproducible, and can be modeled. [ 2 ] [ 3 ] Temperature , pH and water activity impact bacterial behavior. These factors can be changed to control food spoilage . [ 4 ]
Models can be used to predict pathogen growth in foods. Models are developed in several steps including design, development, validation, and production of an interface to display results. [ 4 ] Models can be classified according to their objective as primary models (describing bacterial growth), secondary models (describing factors affecting bacterial growth) or tertiary models (computer software programs). [ 5 ]
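As an illustration of a primary model, the sketch below evaluates a reparameterized (modified) Gompertz curve, one of the sigmoidal functions commonly used to describe a bacterial growth curve. The lag time, maximum specific growth rate and asymptote used here are illustrative assumptions, not fitted data for any organism or food.

```python
# Primary-model sketch: a modified (Zwietering-style) Gompertz curve for the increase
# of bacterial count with time. Parameter values are illustrative only.
import numpy as np

def gompertz_log_increase(t, A, mu_max, lag):
    """Increase of ln(N/N0) at time t (h) for asymptote A, growth rate mu_max, lag time."""
    return A * np.exp(-np.exp((mu_max * np.e / A) * (lag - t) + 1.0))

t = np.linspace(0.0, 48.0, 7)      # hours
A, mu_max, lag = 15.0, 0.8, 6.0    # asymptotic ln-increase, max specific rate [1/h], lag [h]

log10_N0 = 2.0                     # assumed initial count, log10 CFU/g
log10_N = log10_N0 + gompertz_log_increase(t, A, mu_max, lag) / np.log(10.0)
for ti, ni in zip(t, log10_N):
    print(f"t = {ti:5.1f} h   log10 N = {ni:.2f}")
```

A secondary model would then describe how the growth rate and lag time depend on temperature, pH and water activity, and a tertiary model would package both levels behind a software interface.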
Predictive biology is an emerging interdisciplinary field that integrates systems biology, computational modeling, and large-scale data analysis to forecast biological behaviors and outcomes. Drawing inspiration from fields such as meteorology, predictive biology aims to transition biology from a primarily descriptive science to one that can anticipate and manipulate biological systems with accuracy. The approach holds potential across medicine, biotechnology, and environmental sciences.
Predictive biology seeks to understand and forecast the behavior of complex biological systems by integrating experimental data with mathematical and computational models. This discipline is grounded in systems biology, which views biological entities as dynamic networks rather than isolated parts. As a result, predictive biology aims not only to describe existing biological phenomena but also to anticipate future states or responses under varying conditions.
Biology, like meteorology, can advance through structured methodologies such as iterative modeling and interdisciplinary collaboration. Lessons from forecasting weather have shown that improvements in data quality, model accuracy, and communication networks can drastically enhance predictive capacity, which is now being applied to biological systems to improve long-term forecasts and interventions.
The transition from descriptive to predictive science requires a foundational shift in approach. By focusing on the interactions between genes, proteins, and cellular mechanisms, researchers can model whole biological systems. This systems-based perspective supports the development of more accurate simulations and theoretical frameworks, allowing scientists to better anticipate biological outcomes.
In microbial research, predictive models are being used to understand complex behaviors such as antibiotic resistance and gene expression variability. These models help identify patterns in microbial responses and support efforts to control or harness microbial systems in clinical and industrial contexts. The integration of experimental data with predictive modeling provides new avenues for intervention and bioengineering.
Predictive biology represents a significant step toward understanding and anticipating the behavior of living systems. By combining systems thinking, computational modeling, and data-driven analysis, researchers are beginning to forecast biological outcomes with greater precision. As the field continues to evolve, its integration into areas such as healthcare, biotechnology, and environmental science holds the promise of more informed decisions and targeted interventions, moving biology from observation to prediction. | https://en.wikipedia.org/wiki/Predictive_microbiology |
In computer science , a predictive state representation ( PSR ) is a way to model the state of a controlled dynamical system from a history of actions taken and resulting observations. A PSR captures the state of a system as a vector of predictions for future tests (experiments) that can be done on the system. [ 1 ] A test is a sequence of action-observation pairs, and its prediction is the probability of the test's observation sequence occurring if the test's action sequence were executed on the system. One of the advantages of using PSRs is that the predictions are directly related to observable quantities. This is in contrast to other models of dynamical systems, such as partially observable Markov decision processes (POMDPs), where the state of the system is represented as a probability distribution over unobserved nominal states. [ 2 ]
Consider a dynamical system with a discrete set $\mathcal{A}$ of actions and a discrete set $\mathcal{O}$ of observations. [ 3 ] A history $h$ is a sequence $a_1 o_1 \dots a_\ell o_\ell$ where $a_1, \dots, a_\ell$ are the actions taken by the agent, in that order, and $o_1, \dots, o_\ell$ are the observations made by the agent. Let us write $P(a_1 o_1 \dots a_\ell o_\ell)$ for the conditional probability of observing $o_1, \dots, o_\ell$ given that the actions taken are $a_1, \dots, a_\ell$.
We now want to characterize the hidden state reached after some history $h$. To do that, we introduce the notion of a test. A test $t$ is of the same type as a history: it is a sequence of action-observation pairs. The idea is to consider a set of tests $\{t_1, \dots, t_n\}$ that fully characterizes a hidden state. To do that, we first define the probability of a test $t$ conditional on a history $h$: $P(t \mid h) := P(ht)/P(h)$.
We now define the prediction vector $p(h) = [P(t_1 \mid h), \dots, P(t_n \mid h)]$. We say that $p(h)$ is a predictive state representation (PSR) if and only if it forms a sufficient statistic for the system; in other words, $p(h)$ is a PSR if and only if for every possible test $t$ there exists a function $f_t$ such that for all histories $h$, $P(t \mid h) = f_t(p(h))$.
The functions $f_t$ are called projection functions . The PSR is said to be linear when $f_t$ is linear for every possible test $t$. The main theorem proved in [ 3 ] is stated as follows.
Theorem. Consider a finite POMDP with $k$ states. Then there exists a linear PSR with a number of tests $n$ no larger than $k$.
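For a linear PSR the prediction vector can be maintained incrementally: after taking action $a$ and observing $o$, the new vector is a linear map of the old one followed by a normalization by the probability of that observation. The sketch below shows this update for a made-up two-test example; the vectors and matrices are illustrative placeholders, not parameters learned from any real system.

```python
# Linear PSR state-update sketch: p(h a o) = (p(h) @ M_ao) / (p(h) @ m_ao).
# The core tests, M_ao and m_ao below are illustrative placeholders.
import numpy as np

p = np.array([0.6, 0.3])            # prediction vector over two hypothetical core tests

# for one action-observation pair (a, o):
#   p @ m_ao        gives P(o | h, a)
#   p @ M_ao[:, j]  gives P(a o t_j | h) for each core test t_j
m_ao = np.array([0.7, 0.4])
M_ao = np.array([[0.5, 0.2],
                 [0.3, 0.1]])

denom = p @ m_ao                    # probability of observing o after taking action a
p_next = (p @ M_ao) / denom         # updated prediction vector after the pair (a, o)
print("P(o | h, a) =", round(float(denom), 3))
print("updated prediction vector:", np.round(p_next, 3))
```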
| https://en.wikipedia.org/wiki/Predictive_state_representation
Predictive text is an input technology used where one key or button represents many letters, such as on the physical numeric keypads of mobile phones and in accessibility technologies. Each key press results in a prediction rather than repeatedly sequencing through the same group of "letters" it represents in the same, invariable order. Predictive text can allow an entire word to be input with a single keypress. Predictive text makes efficient use of fewer device keys to input writing into a text message , an e-mail , an address book , a calendar , and the like.
The most widely used, general predictive text systems are T9 , iTap , eZiText , and LetterWise /WordWise. There are many ways to build a device that predicts text, but all predictive text systems have initial linguistic settings that offer predictions that are re-prioritized to adapt to each user. This learning adapts, by way of the device memory, to the user's disambiguating feedback, which results in corrective key presses, such as pressing a "next" key to reach the intended word. Most predictive text systems have a user database to facilitate this process.
Theoretically, the number of keystrokes required per desired character in the finished writing is, on average, comparable to using a keyboard . This is approximately true provided that all words used are in its database, punctuation is ignored, and no input mistakes are made in typing or spelling. [ 1 ] The theoretical keystrokes per character, KSPC, of a keyboard is KSPC = 1.00, and of multi-tap is KSPC = 2.03. Eatoni's LetterWise is a predictive multi-tap hybrid, which when operating on a standard telephone keypad achieves KSPC = 1.15 for English.
The choice of which predictive text system is best involves matching the user's preferred interface style , the user's level of learned ability to operate predictive text software, and the user's efficiency goal. There are various levels of risk in predictive text systems compared with multi-tap systems, because the automatically written predicted text that provides the speed and mechanical efficiency benefit could, if the user does not review it carefully, result in transmitting misinformation. Predictive text systems take time to learn to use well, so a device generally has user options to set up the choice of multi-tap or of any one of several schools of predictive text methods.
Short message service (SMS) permits a mobile phone user to send text messages (also called messages, SMSes, texts, and txts) as a short message. The most common system of SMS text input is referred to as " multi-tap ". Using multi-tap, a key is pressed multiple times to access the list of letters on that key. For instance, pressing the "2" key once displays an "a", twice displays a "b" and three times displays a "c". To enter two successive letters that are on the same key, the user must either pause or hit a "next" button. A user can type by pressing an alphanumeric keypad without looking at the electronic equipment display. Thus, multi-tap is easy to understand, and can be used without any visual feedback. However, multi-tap is not very efficient, requiring potentially many keystrokes to enter a single letter.
In ideal predictive text entry, all words used are in the dictionary, punctuation is ignored, no spelling mistakes are made, and no typing mistakes are made. The ideal dictionary would include all slang, proper nouns , abbreviations , URLs , foreign-language words and other user-unique words. Under these ideal circumstances, predictive text software reduces the number of key strokes a user needs to enter a word. The user presses the number corresponding to each letter and, as long as the word exists in the predictive text dictionary, or is correctly disambiguated by non-dictionary systems, it will appear. For instance, pressing "4663" will typically be interpreted as the word good , provided that a linguistic database in English is currently in use, though alternatives such as home , hood and hoof are also valid interpretations of the sequence of key strokes.
The most widely used systems of predictive text are Tegic's T9 , Motorola's iTap , and Eatoni Ergonomics ' LetterWise and WordWise. T9 and iTap use dictionaries, but Eatoni Ergonomics' products use a disambiguation process, a set of statistical rules to recreate words from keystroke sequences. All predictive text systems require a linguistic database for every supported input language.
Traditional disambiguation works by referencing a dictionary of commonly used words, though Eatoni offers a dictionaryless disambiguation system.
In dictionary-based systems, as the user presses the number buttons, an algorithm searches the dictionary for a list of possible words that match the keypress combination, and offers up the most probable choice. The user can then confirm the selection and move on, or use a key to cycle through the possible combinations.
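A minimal sketch of such a dictionary lookup is shown below, using a tiny illustrative word list with made-up frequencies; a real system works from a full linguistic database and re-prioritizes candidates based on the user database.

```python
# Toy dictionary-based disambiguation: map words to their keypad digit sequences and
# return candidate words for a typed sequence, most frequent first.
KEYMAP = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

# tiny illustrative dictionary: word -> relative frequency (made-up numbers)
DICTIONARY = {"good": 120, "home": 100, "gone": 60, "hood": 20, "hoof": 5, "the": 500}

def to_digits(word):
    return "".join(KEYMAP[c] for c in word.lower())

def candidates(digits):
    matches = [w for w in DICTIONARY if to_digits(w) == digits]
    return sorted(matches, key=DICTIONARY.get, reverse=True)

print(candidates("4663"))   # ['good', 'home', 'gone', 'hood', 'hoof']
print(candidates("843"))    # ['the']
```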
A non-dictionary system constructs words and other sequences of letters from the statistics of word parts. To attempt predictions of the intended result of keystrokes not yet entered, disambiguation may be combined with a word completion facility.
Either system (disambiguation or predictive) may include a user database, which can be further classified as a "learning" system when words or phrases are entered into the user database without direct user intervention. The user database is for storing words or phrases which are not well disambiguated by the pre-supplied database. Some disambiguation systems further attempt to correct spelling, format text or perform other automatic rewrites, with the risky effect of either enhancing or frustrating user efforts to enter text.
The predictive text and autocomplete technology was invented out of necessity by Chinese scientists and linguists in the 1950s to solve the input inefficiency of the Chinese typewriter , [ 2 ] as the typing process involved finding and selecting thousands of logographic characters on a tray, [ 3 ] drastically slowing down word processing speed. [ 4 ] [ 5 ]
The actuating keys of the Chinese typewriter created by Lin Yutang in the 1940s included suggestions for the characters following the one selected. In 1951, the Chinese typesetter Zhang Jiying arranged Chinese characters in associative clusters, a precursor of modern predictive text entry, and broke speed records by doing so. [ 6 ] Predictive entry of text from a telephone keypad has been known at least since the 1970s (Smith and Goodwin, 1971). Predictive text was mainly used to look up names in directories over the phone, until mobile phone text messaging came into widespread use.
On a typical phone keypad, if users wished to type the word "the" in a "multi-tap" keypad entry system, they would need to press 8 once (for "t"), 4 twice (for "h"), and 3 twice (for "e"), five key presses in total.
Meanwhile, on a phone with predictive text, they need only press 8, 4, 3 (one key press per letter) and let the system predict the intended word.
The system updates the display as each keypress is entered, to show the most probable entry. In this example, prediction reduced the number of button presses from five to three. The effect is even greater with longer words and those composed of letters later in each key's sequence.
A dictionary-based predictive system relies on the hope that the desired word is in the dictionary. That hope may be misplaced if the word differs in any way from common usage; in particular, if the word is not spelled or typed correctly, is slang, or is a proper noun . In these cases, some other mechanism must be used to enter the word. Furthermore, the simple dictionary approach fails with agglutinative languages , where a single word does not necessarily represent a single semantic entity.
Predictive text is developed and marketed in a variety of competing products, such as Nuance Communications 's T9 . Other products include Motorola 's iTap ; Eatoni Ergonomic 's LetterWise (character, rather than word-based prediction); WordWise (word-based prediction without a dictionary); EQ3 (a QWERTY -like layout compatible with regular telephone keypads); Prevalent Devices 's Phraze-It ; Xrgomics ' TenGO (a six-key reduced QWERTY keyboard system); Adaptxt (considers language, context, grammar and semantics); Lightkey (a predictive typing software for Windows); Clevertexting (statistical nature of the language, dictionaryless, dynamic key allocation); and Oizea Type (temporal ambiguity); Intelab's Tauto; WordLogic's Intelligent Input Platform™ (patented, layer-based advanced text prediction, includes multi-language dictionary, spell-check, built-in Web search); Google's Gboard .
Words produced by the same combination of keypresses have been called "textonyms"; [ 7 ] also "txtonyms"; [ 8 ] or "T9onyms" (pronounced "tynonyms" / ˈ t aɪ n ə n ɪ m z / [ 7 ] ), though they are not specific to T9. Selecting the wrong textonym can occur with no misspelling or typo, when the wrong textonym is chosen by default or through user error. As mentioned above, the key sequence 4663 on a telephone keypad, provided with a linguistic database in English, will generally be disambiguated as the word good . However, the same key sequence also corresponds to other words, such as home , gone , hoof , hood and so on. For example, "Are you home?" could be rendered as "Are you good?" if the user neglects to alter the default 4663 word. This can lead to misunderstandings; for example, the sequence 735328 might correspond to either select or its antonym reject . A 2010 brawl that led to manslaughter was sparked by a textonym error. [ 9 ] Predictive text choosing a default different from that which the user expects has similarities with the Cupertino effect , by which spell-check software changes a spelling to that of an unintended word.
Textonyms were used as Millennial slang; for example, the use of the word book to mean cool , since book was the default in predictive text systems that assumed it was more frequent than cool . [ 10 ] This is related to cacography .
Textonyms are not the only issue limiting the effectiveness of predictive text implementations. Another significant problem are words for which the disambiguation produces a single, incorrect response. The system may, for example, respond with Blairf upon input of 252473, when the intended word was Blaise or Claire , both of which correspond to the keystroke sequence, but are not, in this example, found by the predictive text system. When typos or misspellings occur, they are very unlikely to be recognized correctly by a disambiguation system, though error correction mechanisms may mitigate that effect. | https://en.wikipedia.org/wiki/Predictive_text |
Predispositioning theory , in the field of decision theory and systems theory , is a theory focusing on the stages between a complete order and a complete disorder.
Predispositioning theory was founded by Aron Katsenelinboigen (1927–2005), a professor in the Wharton School who dealt with indeterministic systems such as chess , business , economics , and other fields of knowledge and also made an essential step forward in elaboration of styles and methods of decision-making .
Predispositioning theory is focused on the intermediate stage between a complete order and a complete disorder. According to Katsenelinboigen, the system develops gradually, going through several stages, starting with incomplete and inconsistent linkages between its elements and ending with complete and consistent ones.
" Mess . The zero phase can be called a mess because it contains no linkages between the system's elements. Such a definition of mess as ‘a disorderly, un-tidy, or dirty state of things’ we find in Webster's New World Dictionary. (...) Chaos . Mess should not be confused with the next phase, chaos, as this term is understood today. Arguably, chaos is the first phase of indeterminism that displays sufficient order to talk of the general problem of system development.
The chaos phase is characterized by some ordering of accumulated statistical data and the emergence of the basic rules of interactions of inputs and outputs (not counting boundary conditions). Even such a seemingly limited ordering makes it possible to fix systemic regularities of the sort shown by Feigenbaum numbers and strange attractors . (...) Different types of orderings in the chaos phase may be brought together under the notion of directing, for they point to a possible general direction of system development and even its extreme states.
But even if a general path is known, enormous difficulties remain in linking algorithmically the present state with the final one and in operationalizing the algorithms. These objectives are realized in the next two large phases that I call predispositioning and programming. (...) Programming . When linkages between states are established through reactive procedures, either by table functions or analytically, it is often assumed that each state is represented only by essentials. For instance, the production function in economics ties together inputs and outputs in physical terms. When a system is represented as an equilibrium or an optimization model, the original and conjugated parameters are stated explicitly; in economics, they are products (resources) and prices, respectively.9 Deterministic economic models have been extensively formalized; they assume full knowledge of inputs, outputs, and existing technologies.
(...) Predispositioning (...) exhibits less complete linkages between system's elements than programming but more complete than chaos." [ 1 ] : 19–20
Methods like programming and randomness are well known and developed, while the methodology for the intermediate stages lying between complete chaos and complete order, as well as their philosophical conceptualization, has never been discussed explicitly, and no methods for their measurement have been elaborated.
According to Katsenelinboigen, operative sub-methods of dealing with the system are programming, predispositioning, and randomness. They correspond to three stages of systems development.
Programming is a formation of complete and consistent linkages between all the stages of the systems' development.
Predispositioning is a formation of semi-efficient linkages between the stages of the system's development. In other words, predispositioning is a method responsible for creation of a predisposition.
Randomness is a formation of inconsistent linkages between the stages of the system's development. In this context, for instance, Darwinism emphasizes the exclusive role of chance occurrences in the system's development since it gives top priority to randomness as a method. Conversely, creationism states that the system develops in a comprehensive fashion, i.e. that programming is the only method involved in the development of the system. As Aron Katsenelinboigen notices, both schools neglect the fact that the process of the system's development includes a variety of methods which govern different stages, depending on the systems’ goals and conditions.
Predispositioning as a method, as well as a predisposition as an intermediate stage, has received little scholarly attention, though there have been some interesting intuitive attempts to deal with the formation of a predisposition. The game of chess has been one of the most productive fields for the study of predispositioning as a method. Owing to chess's focus on the positional style, it has produced a host of innovative strategies and tactics, which Katsenelinboigen analyzed, systematized and made the basis for his theory.
To sum up, the main focus of predispositioning theory is on the intermediate stage of systems development, the stage that Katsenelinboigen proposed to call a "predisposition". This stage is distinguished by semi-complete and semi-consistent linkages between its elements.
The most vital question when dealing with semi-complete and semi-consistent stages of the system is the question of its evaluation. To this end, Katsenelinboigen elaborated his structure of values, using the game of chess as a model.
According to Katsenelinboigen's predispositioning theory, in the chess game pieces are evaluated from two basic points of view – their weight in a given position on the chessboard and their weight independent to any particular position.
Based on the degree of conditionality, the values range from fully unconditional, through semi-unconditional and semi-conditional, to fully conditional.
According to Katsenelinboigen, game pieces in chess are evaluated from two basic points of view: their weight with regard to a certain situation on the chessboard and their weight without regard to any particular situation, only to the position of the pieces. The latter are defined by Katsenelinboigen as semi-unconditional values, formed by the sole condition of the rules of piece interaction. The semiunconditional values of the pieces (such as queen 9, rook 5, bishop 3, knight 3, and pawn 1) appear as a result of the rules of interaction of a piece with the opponent's king. All other conditions, such as starting conditions, final goal, and a program that links the initial condition to the final state, are not taken into account. The degree of conditionality is increased by applying preconditions, and the presence of all four preconditions fully forms conditional values.
Katsenelinboigen outlines two extreme cases of the spectrum of values—fully conditional and fully unconditional—and says that, in actuality, they are ineffectual in evaluating the material and so are sometimes replaced by semiconditional or semiunconditional valuations, which are distinguished by their differing degrees of conditionality. He defines fully conditional values as those based on complete and consistent linkages among all four preconditions." [ 2 ] : 144–145
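A toy numeric illustration of this distinction is given below: the same material is scored once with the semi-unconditional piece values quoted above and once with position-dependent adjustment factors. The adjustment factors are invented for illustration and are not part of Katsenelinboigen's actual calculus.

```python
# Semi-unconditional piece values versus a position-dependent (conditional) adjustment.
SEMI_UNCONDITIONAL = {"queen": 9, "rook": 5, "bishop": 3, "knight": 3, "pawn": 1}

def material_score(pieces):
    """Weight of the material ignoring the position (semi-unconditional values)."""
    return sum(SEMI_UNCONDITIONAL[p] for p in pieces)

def positional_score(pieces_with_factors):
    """Same material, but each piece is scaled by an invented positional factor
    (e.g. a knight on a strong outpost counts for more than a boxed-in knight)."""
    return sum(SEMI_UNCONDITIONAL[p] * f for p, f in pieces_with_factors)

side = ["rook", "knight", "bishop", "pawn", "pawn"]
print("material only:", material_score(side))          # 13
print("with position:", positional_score(
    [("rook", 1.1), ("knight", 1.4), ("bishop", 0.8), ("pawn", 1.0), ("pawn", 0.9)]))
```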
The conditional values are formed by the four basic conditions: the rules of interaction between the pieces, the initial conditions, the final goal, and the program linking the initial conditions to that final state.
The degree of unconditionality is dictated by the necessity to evaluate things under uncertainty, when the future is unknown and conditions cannot be specified.
Applying his concept of values to social systems, Katsenelinboigen shows how the degree of unconditionality forms morality and law. According to him, the moral values represented in the Torah as the Ten Commandments are analogous to semi-unconditional values in a chess game, for they are based exclusively on the rules of interactions.
"The difference between these two approaches is clearly manifested in the various translations of the Torah. For instance, The Holy Scriptures (1955), a new translation based on the masoretic text (a vast body of the textual criticism of the Hebrew Bible), translates the commandment as ‘Thou shalt not commit murder.’ In The Holy Bible, commonly known as the authorized ( King James ) version (The Gideons International, 1983), this commandment is translated as ‘ Thou shalt not kill .’
(...) The difference between unconditional and semi-unconditional evaluations will become more prominent if we use the same example of ‘Thou shalt not kill and ‘Thou shalt not murder’ to illustrate the conduct of man in accordance with his precepts.
In an extreme case, one who follows ‘Thou shalt not kill’ will allow himself to be killed before he kills another. These views are held by one of the Hindu sects in Sri Lanka (the former Ceylon). To the best of my knowledge, the former prime minister of Ceylon, Solomon Bandaranaike (1899-1959), belonged to this sect. He did not allow himself to kill an attacker and was murdered. As he lay bleeding to death, he did crawl over to the murderer and knock the pistol from his hand before it could be used against his wife, Sirimavo Bandaranaike. She later became the prime minister of Ceylon-Sri Lanka." [ 1 ] : 135–36
But how does one ascribe weights to certain parameters, establish the degree of conditionality, and so on? How does the evaluative process work in indeterministic systems?
Katsenelinboigen states that the evaluative category for indeterministic systems is based on subjectivity.
"This pioneering approach to the evaluative process is the subject of Katsenelinboigen’s work on indeterministic systems. The roots of one’s subjective evaluation lie in the fact that the executor cannot be separated from the evaluator, who evaluates the system in accordance with his or her own particular ability to develop it. This can be observed in chess, in which the same position is evaluated differently by different chess players, or in literature with regard to hermeneutics." [ 2 ] : 36
Katsenelinboigen writes:
Katsenelinboigen clearly explains why subjectivity of the managerial decision is inevitable:
To sum up, subjectivity becomes an important factor in evaluating a predisposition. The roots of one's subjective evaluation lie in the fact that the executor cannot be separated from the evaluator who evaluates the system in accordance with his own particular ability to develop it.
The structure of values plays an essential part in calculus of predisposition.
Calculus of predispositions , a basic part of predispositioning theory, belongs to the indeterministic procedures.
"The key component of any indeterministic procedure is the evaluation of a position. Since it is impossible to devise a deterministic chain linking the inter-mediate state with the outcome of the game, the most complex component of any indeterministic method is assessing these intermediate stages. It is precisely the function of predispositions to assess the impact of an intermediate state upon the future course of development." [ 1 ] : 33 According to Katsenelinboigen, calculus of predispositions is another method of computing probability. Both methods may lead to the same results and, thus, can be interchangeable. However, it is not always possible to interchange them since computing via frequencies requires availability of statistics, possibility to gather the data as well as having the knowledge of the extent to which one can interlink the system's constituent elements. Also, no statistics can be obtained on unique events and, naturally, in such cases the calculus of predispositions becomes the only option.
The procedure of calculating predispositions is linked to two steps – dissection of the system into its constituent elements and integration of the analyzed parts in a new whole. According to Katsenelinboigen, the system is structured by two basic types of parameters – material and positional. The material parameters constitute the skeleton of the system. Relationships between them form positional parameters. The calculus of predispositions primarily deals with the positional parameters of the system.
"In order to quantify the evaluation of a position we need new techniques, which I have grouped under the heading of calculus of predispositions. This calculus is based on a weight function, which represents a variation on the well-known criterion of optimality for local extremum. This criterion incorporates material parameters and their conditional valuations.
The following key elements distinguish the modified weight function from the criterion of optimality:
To conclude, there are some basic differences between frequency-based and predispositions-based methods of computing probability .
According to Katsenelinboigen, the two methods of computing probability may complement each other if, for instance, they are applied to a multilevel system with an increasing complexity of its composition at higher levels. | https://en.wikipedia.org/wiki/Predispositioning_theory |
Predistortion is a technique used to improve the linearity of radio transmitter amplifiers .
Radio transmitter amplifiers in most telecommunications systems are required to be "linear", in that they must accurately reproduce the signal present at their input. An amplifier that compresses its input or has a non-linear input/output relationship causes the output signal to splatter onto adjacent radio frequencies. This causes interference on other radio channels.
There are many different ways of specifying the linearity of a power amplifier, including P1dB, inter-modulation distortion (IMD), AM-to-PM, spectral regrowth and noise power ratio (NPR). For a truly linear system, these measures are in a sense all equivalent. That is, a power amplifier with low inter-modulation distortion will also have low spectral regrowth and low AM-to-PM distortion. Likewise there are two equivalent ways of conceptualizing how predistortion amplifiers work: correcting gain and phase distortions, or cancelling inter-modulation products. Usually one of the two conceptualizations is preferred when designing predistortion circuitry; however the end result is generally the same. A predistorter designed to correct gain and phase non-linearities will also improve IMD, while one which targets inter-modulation products will also reduce gain and phase perturbations.
When combined with the target amplifier, the linearizer produces an overall system that is more linear and reduces the amplifier's distortion. In essence, "inverse distortion" is introduced into the input of the amplifier, thereby cancelling any non-linearity the amplifier might have.
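A minimal numeric sketch of that idea is given below, assuming a memoryless amplifier with a cubic compression term and a predistorter built as its approximate (first-order) inverse; real predistortion systems also handle memory effects and adapt the inverse from measured feedback.

```python
# Memoryless predistortion sketch: the amplifier compresses at high input; the
# predistorter pre-expands the input so the cascade is closer to the ideal linear gain.
import numpy as np

def amplifier(x, g=2.0, a3=0.15):
    """Simple odd-order compression model: y = g*x - a3*x**3 (illustrative only)."""
    return g * x - a3 * x ** 3

def predistorter(x, g=2.0, a3=0.15):
    """First-order series inverse: cancels the cubic term to leading order."""
    return x + (a3 / g) * x ** 3

x = np.linspace(-1.0, 1.0, 201)
ideal = 2.0 * x
plain = amplifier(x)
linearized = amplifier(predistorter(x))

print("max deviation from linear, no predistortion:  ", float(np.max(np.abs(plain - ideal))))
print("max deviation from linear, with predistortion:", float(np.max(np.abs(linearized - ideal))))
```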
Predistortion is a cost-saving and power efficiency technique. Radio power amplifiers tend to become more non-linear as their output power increases towards their maximum rated output. Predistortion is a way to get more usable power from the amplifier, without having to build a larger, less efficient and more expensive amplifier. Another important consideration in design of RF power amplifiers is the memory effect or amplifier nonlinear dynamics. [ 1 ]
Predistortion can be implemented in an analog manner as well as digitally, the latter known as digital pre-distortion . [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Predistortion
Predix , known as Predix Platform, is an industrial IoT software platform from GE Digital . It provides edge-to-cloud data connectivity, processing, analytics, and services to support industrial applications. The platform has both edge and cloud components. Predix Cloud is hosted on AWS .
Predix Platform collects and transfers OT and IT data to the cloud by direct connector software or Predix Edge – an on-premises software product that also supports local analytics and applications processing. Predix Edge deployments can be managed at the local level and/or centrally from Predix Cloud.
In addition to data ingestion, processing and storage, Predix Platform provides a framework for operationalizing streaming and batch analytics processing.
Predix Essentials is a packaged and pre-configured version of the Predix Platform intended to immediately support GE Digital applications and typical IIoT use cases such as condition-based monitoring.
In November 2016, Forrester Research said GE Digital's Predix was one of eleven significant IoT packages. [ 1 ] | https://en.wikipedia.org/wiki/Predix_(software) |
A predominance diagram purports to show the conditions of concentration and pH where a chemical species has the highest concentration in solutions in which there are multiple acid-base equilibria . [ 1 ] The lines on a predominance diagram indicate where adjacent species have the same concentration. Either side of such a line one species or the other predominates, that is, has higher concentration relative to the other species.
To illustrate a predominance diagram, part of the one for chromate is shown at the right. pCr stands for minus the logarithm of the chromium concentration and pH stands for minus the logarithm of the hydrogen ion concentration. There are two independent equilibria, with equilibrium constants defined as follows.
CrO₄²⁻ + H⁺ ⇌ HCrO₄⁻,  K1 = [HCrO₄⁻] / ([CrO₄²⁻][H⁺])
2 HCrO₄⁻ ⇌ Cr₂O₇²⁻ + H₂O,  KD = [Cr₂O₇²⁻] / [HCrO₄⁻]²
A third equilibrium constant can be derived from K1 and KD for the overall reaction 2 CrO₄²⁻ + 2 H⁺ ⇌ Cr₂O₇²⁻ + H₂O, with K = K1² KD.
The species H₂CrO₄ and HCr₂O₇⁻ are only formed at very low pH, so they do not appear on this diagram. Published values for log K1 and log KD are 5.89 and 2.05, respectively. [ 2 ] Using these values and the equality conditions, the concentrations of the three species, chromate CrO₄²⁻, hydrogen chromate HCrO₄⁻ and dichromate Cr₂O₇²⁻, can be calculated, for various values of pH, by means of the equilibrium expressions. The chromium concentration is calculated as the sum of the species' concentrations in terms of chromium content.
The three species all have concentrations equal to 1 / K D at pH = p K 1 , for which [Cr] = 4 / K D . [ 3 ] The three lines on this diagram meet at that point.
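To make the calculation concrete, the following Python sketch solves the speciation at a given pH and total chromium concentration and reports the predominant species. The equilibrium expressions in the comments are the assumed conventions for K 1 (a protonation constant) and K D (a dimerisation constant); the cited source should be consulted for the exact definitions.

import numpy as np

# Assumed equilibria (illustrative conventions):
#   CrO4(2-) + H+  <=>  HCrO4(-)          K1 = [HCrO4-] / ([CrO4 2-][H+]),  log K1 = 5.89
#   2 HCrO4(-)     <=>  Cr2O7(2-) + H2O   KD = [Cr2O7 2-] / [HCrO4-]**2,    log KD = 2.05
K1, KD = 10**5.89, 10**2.05

def speciation(pH, total_Cr):
    """Solve for the species concentrations at a given pH and total chromium."""
    H = 10**(-pH)
    # total_Cr = [CrO4 2-] + [HCrO4-] + 2*[Cr2O7 2-], with
    # [CrO4 2-] = [HCrO4-]/(K1*H) and [Cr2O7 2-] = KD*[HCrO4-]**2,
    # which gives a quadratic equation in h = [HCrO4-].
    a, b, c = 2*KD, 1 + 1/(K1*H), -total_Cr
    h = (-b + np.sqrt(b**2 - 4*a*c)) / (2*a)
    return {"CrO4 2-": h/(K1*H), "HCrO4 -": h, "Cr2O7 2-": KD*h**2}

def predominant(pH, total_Cr):
    """Species holding the largest share of the chromium (dichromate counted twice)."""
    conc = speciation(pH, total_Cr)
    share = {s: (2 if s == "Cr2O7 2-" else 1) * c for s, c in conc.items()}
    return max(share, key=share.get)

print(predominant(pH=7.5, total_Cr=1e-3))   # chromate region
print(predominant(pH=4.0, total_Cr=1e-4))   # hydrogen chromate region
print(predominant(pH=4.0, total_Cr=1e-1))   # dichromate region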
The predominance diagram is interpreted as follows. The chromate ion is the predominant species in the region to the right of the green and blue lines. Above pH ~6.75 it is always the predominant species. At pH < 5.89 (pH < p K 1 ) the hydrogen chromate ion is predominant in dilute solution but the dichromate ion is predominant in more concentrated solutions.
Predominance diagrams can become very complicated when many polymeric species can be formed as, for example, with vanadate , [ 4 ] molybdate [ 1 ] and tungstate . [ 1 ] Another complication is that many of the higher polymers are formed extremely slowly, such that equilibrium may not be attained even in months, leading to possible errors in the equilibrium constants and the predominance diagram. | https://en.wikipedia.org/wiki/Predominance_diagram |
In mathematics , the predual of an object D is an object P whose dual space is D .
For example, the predual of the space of bounded operators is the space of trace class operators, and the predual of the space L ∞ ( R ) of essentially bounded functions on R is the Banach space L 1 ( R ) of integrable functions.
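As a brief illustration of the second example, the following LaTeX fragment states the integration pairing under which every essentially bounded function acts as a continuous linear functional on the integrable functions; since every such functional arises this way, L 1 ( R ) is a predual of L ∞ ( R ).

% Pairing exhibiting L^1(R) as a predual of L^infinity(R).
\[
  \langle f, g \rangle \;=\; \int_{\mathbf{R}} f(x)\,g(x)\,dx ,
  \qquad f \in L^{1}(\mathbf{R}),\ g \in L^{\infty}(\mathbf{R}),
  \qquad \bigl(L^{1}(\mathbf{R})\bigr)^{*} \cong L^{\infty}(\mathbf{R}).
\]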
This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Predual |
A prefabricated building , informally a prefab , is a building that is manufactured and constructed using prefabrication . It consists of factory-made components or units that are transported and assembled on-site to form the complete building. Various materials and pre-made units are combined during the on-site installation process. [ 1 ]
Buildings have been built in one place and reassembled in another throughout history. This was especially true for mobile activities, or for new settlements. Elmina Castle , the first slave fort in West Africa , was also the first European prefabricated building in Sub-saharan Africa . [ 3 ] : 93 In North America , in 1624 one of the first buildings at Cape Ann was probably partially prefabricated, and was rapidly disassembled and moved at least once. John Rollo described in 1801 earlier use of portable hospital buildings in the West Indies . [ 4 ] Possibly the first advertised prefab house was the "Manning cottage". A London carpenter, Henry Manning, constructed a house that was built in components, then shipped and assembled by British emigrants. This was published at the time (advertisement, South Australian Record, 1837) and a few still stand in Australia. [ 5 ] One such is the Friends Meeting House, Adelaide . [ 6 ] [ 7 ] The peak year for the importation of portable buildings to Australia was 1853, when several hundred arrived. These have been identified as coming from Liverpool , Boston and Singapore (with Chinese instructions for re-assembly). [ 8 ] In Barbados the Chattel house was a form of prefabricated building which was developed by emancipated slaves who had limited rights to build upon land they did not own. As the buildings were moveable they were legally regarded as chattels . [ 9 ]
In 1855 during the Crimean War , after Florence Nightingale wrote a letter to The Times , Isambard Kingdom Brunel was commissioned to design a prefabricated modular hospital. In five months he designed the Renkioi Hospital : a 1,000 patient hospital, with innovations in sanitation, ventilation and a flushing toilet. [ 10 ] Fabricator William Eassie constructed the required 16 units in Gloucester Docks , shipped directly to the Dardanelles . Only used from March 1856 to September 1857, it reduced the death rate from 42% to 3.5%.
The world's first prefabricated, pre-cast panelled apartment blocks were pioneered in Liverpool . The process was invented by city engineer John Alexander Brodie , who also invented the football goal net. The tram stables at Walton in Liverpool followed in 1906. The idea was not extensively adopted in Britain; however, it was widely adopted elsewhere, particularly in Eastern Europe.
Prefabricated homes were produced during the Gold Rush in the United States, when kits were produced to enable Californian prospectors to quickly construct accommodation. Homes were available in kit form by mail order in the United States in 1908. [ 11 ]
Prefabricated housing was popular during the Second World War due to the need for mass accommodation for military personnel. The United States used Quonset huts as military buildings, and in the United Kingdom prefabricated buildings used included Nissen huts and Bellman Hangars . 'Prefabs' were built after the war as a means of quickly and cheaply providing quality housing as a replacement for the housing destroyed during the Blitz . The proliferation of prefabricated housing across the country was a result of the Burt Committee and the Housing (Temporary Accommodation) Act 1944 . Under the Ministry of Works Emergency Factory Made housing programme, a specification was drawn up and bid on by various private construction and manufacturing companies. After approval by the MoW, companies could bid on Council led development schemes, resulting in whole estates of prefabs constructed to provide accommodation for those made homeless by the War and ongoing slum clearance . [ 12 ] Almost 160,000 had been built in the UK by 1948 at a cost of close to £216 million. The largest single prefab estate in Britain [ 13 ] was at Belle Vale (South Liverpool), where more than 1,100 were built after World War 2. The estate was demolished in the 1960s amid much controversy as the prefabs were very popular with residents at the time.
Prefabs were aimed at families, and typically had an entrance hall, two bedrooms (parents and children), a bathroom (a room with a bath) — which was a novel innovation for many Britons at that time, a separate toilet, a living room and an equipped (not fitted in the modern sense) kitchen. Construction materials included steel, aluminium, timber or asbestos cement , depending on the type of dwelling. The aluminium Type B2 prefab was produced as four pre-assembled sections which could be transported by lorry anywhere in the country. [ 14 ]
The Universal House was given to the Chiltern Open Air Museum after 40 years of temporary use. The Mark 3 was manufactured by the Universal Housing Company Ltd, Rickmansworth.
The United States used prefabricated housing for troops during the war and for GIs returning home. Prefab classrooms were popular with UK schools increasing their rolls during the baby boom of the 1950s and 1960s.
Many buildings were designed with a five- to ten-year life span, but have far exceeded this, with a number surviving today. In 2002, for example, the city of Bristol still had residents living in 700 examples. [ 15 ] Many UK councils have been in the process of demolishing the last surviving examples of Second World War prefabs in order to comply with the British government's Decent Homes Standard , which came into effect in 2010. There has, however, been a recent revival in prefabricated methods of construction in order to compensate for the United Kingdom's current housing shortage. [ citation needed ]
Architects are incorporating modern designs into the prefabricated houses of today. Prefab housing should no longer be compared to a mobile home in terms of appearance, but to that of a complex modernist design. [ 16 ] There has also been an increase in the use of "green" materials in the construction of these prefab houses. Consumers can easily select between different environmentally friendly finishes and wall systems. Since these homes are built in parts, it is easy for a home owner to add additional rooms or even solar panels to the roofs. Many prefab houses can be customized to the client's specific location and climate, making prefab homes much more flexible and modern than before.
There is a trend in architectural circles favoring the small carbon footprint of "prefab" construction.
The process of building pre-fabricated buildings has become so efficient in China that a builder in Changsha built a ten-storey building in 28 hours and 45 minutes. [ 17 ] [ 18 ]
Prefabricated construction has a smaller carbon footprint, improves energy use and efficiency, and produces less waste, making it more sustainable and environmentally friendly, and compliant with sustainable design standards. [ 19 ] [ 20 ]
Modular architecture allows, thanks to 3D modeling, the design and construction of the modular structure away from the site where it will be installed. [ 21 ] This offers several advantages, such as more sustainable design, greater cost and time savings, and standardization of design. [ 22 ] This is especially important for large-scale construction projects. [ 23 ]
Many eastern European countries had suffered physical damage during World War II and their economies were in a very poor state. There was a need to reconstruct cities which had been severely damaged due to the war. For example, Warsaw had been practically razed to the ground under the planned destruction of Warsaw by German forces after the 1944 Warsaw Uprising . The centre of Dresden , Germany, had been totally destroyed by the 1945 Allied bombardment. Stalingrad had been largely destroyed and only a small number of structures were left standing.
Prefabricated buildings served as an inexpensive and quick way to alleviate the massive housing shortages associated with the wartime destruction and large-scale urbanization and rural flight .
Prefabrication for commercial uses has a long history - a major expansion was made in the Second World War when ARCON (short for Architecture Consultants) developed a system using steel components that could be rapidly erected and then clad with a variety of materials to suit local conditions, availability, and cost. [ 24 ]
McDonald's uses prefabricated structures for their buildings, and set a record of constructing a building and opening for business within 13 hours (on pre-prepared ground works). [ 25 ]
In the UK, the major supermarkets have each developed a modular unit system for shop building, based on the systems developed by the German discount retailer Aldi and the Danish supermarket chain Netto . [ 26 ]
In structural engineering , a pre-engineered building ( PEB ) is designed by a PEB supplier or PEB manufacturer with a single design to be fabricated using various materials and methods to satisfy a wide range of structural and aesthetic design requirements. This is contrasted with a building built to a design that was created specifically for that building. Within some geographic industry sectors pre-engineered buildings are also called pre-engineered metal buildings (PEMB) or, as is becoming increasingly common due to the reduced amount of pre-engineering involved in custom computer-aided designs, simply engineered metal buildings (EMB).
During the 1960s, standardized engineering designs for buildings were first marketed as PEBs. Historically, the primary framing structure of a pre-engineered building is an assembly of Ɪ-shaped members, often referred to as I-beams . In pre-engineered buildings, the I beams used are usually formed by welding together steel plates to form the I section. The I beams are then field-assembled (e.g. bolted connections) to form the entire frame of the pre-engineered building. Some manufacturers taper the framing members (varying in web depth) according to the local loading effects. Larger plate dimensions are used in areas of higher load effects.
Other forms of primary framing can include trusses, mill sections rather than three-plate welded, castellated beams, etc. The choice of economic form can vary depending on factors such as local capabilities (e.g. manufacturing, transportation, construction) and variations in material vs. labour costs.
Typically, primary frames are 2D type frames (i.e. may be analyzed using two-dimensional techniques). Advances in computer-aided design technology, materials and manufacturing capabilities have assisted a growth in alternate forms of pre-engineered building such as the tension fabric building and more sophisticated analysis (e.g. three-dimensional) as is required by some building codes.
Cold formed Z- and C-shaped members may be used as secondary structural elements to fasten and support the external cladding.
Roll-formed profiled steel sheet, wood, tensioned fabric, precast concrete, masonry block, glass curtainwall or other materials may be used for the external cladding of the building.
In order to accurately design a pre-engineered building, engineers consider the clear span between bearing points, bay spacing, roof slope, live loads, dead loads, collateral loads, wind uplift, deflection criteria, internal crane system and maximum practical size and weight of fabricated members. Historically, pre-engineered building manufacturers have developed pre-calculated tables for different structural elements in order to allow designers to select the most efficient I beams size for their projects. However, the table selection procedures are becoming rare with the evolution in computer-aided custom designs.
While pre-engineered buildings can be adapted to suit a wide variety of structural applications, the greatest economy will be realized when utilising standard details. An efficiently designed pre-engineered building can be lighter than the conventional steel buildings by up to 30%. Lighter weight equates to less steel and a potential price savings in structural framework.
The project architect, sometimes called the Architect of Record, is typically responsible for aspects such as aesthetic, dimensional, occupant comfort and fire safety. When a pre-engineered building is selected for a project, the architect accepts conditions inherent in the manufacturer's product offerings for aspects such as materials, colours, structural form, dimensional modularity, etc. Despite the existence of the manufacturer's standard assembly details, the architect remains responsible to ensure that the manufacturer's product and assembly is consistent with the building code requirements (e.g. continuity of air/vapour retarders, insulation, rain screen; size and location of exits; fire rated assemblies) and occupant/owner expectations.
Many jurisdictions recognize the distinction between the project engineer, sometimes called the Engineer of Record, and the manufacturer's employee or subcontract engineer, sometimes called a specialty engineer. The principal differences between these two entities on a project are the limits of commercial obligation, professional responsibility and liability.
The structural Engineer of Record is responsible to specify the design parameters for the project (e.g. materials, loads, design standards, service limits) and to ensure that the element and assembly designs by others are consistent in the global context of the finished building.
The specialty engineer is responsible to design only those elements which the manufacturer is commercially obligated to supply (e.g. by contract) and to communicate the assembly procedures, design assumptions and responses, to the extent that the design relies on or affects work by others, to the Engineer of Record – usually described in the manufacturer's erection drawings and assembly manuals. The manufacturer produces an engineered product but does not typically provide engineering services to the project.
In the context described, the Architect and Engineer of Record are the designers of the building and bear ultimate responsibility for the performance of the completed work. A buyer should be aware of the project professional distinctions when developing the project plan.
These prefabricated structures are widely used in both the residential and industrial sectors because of their distinctive characteristics.
Recent advancements in pre-engineered building systems have led to the integration of diverse structural sub-systems and accessories, enhancing both functionality and aesthetic appeal. These structures now commonly include mezzanine floors for optimised interior space, crane runway beams for industrial applications, and specialised roof platforms or catwalks for operational efficiency. Aesthetic components such as fascias, parapets, and customised canopies contribute to modern design flexibility, catering to varied architectural requirements. Furthermore, pre-engineered buildings have gained recognition for their superior cost-effectiveness and speed of construction compared to traditional methods, making them a preferred choice for both commercial and industrial projects worldwide. | https://en.wikipedia.org/wiki/Prefabricated_building |
The Preference Ranking Organization METHod for Enrichment of Evaluations and its descriptive complement geometrical analysis for interactive aid are better known as the Promethee and Gaia [ 1 ] methods.
Based on mathematics and sociology, the Promethee and Gaia method was developed at the beginning of the 1980s and has been extensively studied and refined since then.
It has particular application in decision making, and is used around the world in a wide variety of decision scenarios, in fields such as business, governmental institutions, transportation, healthcare and education.
Rather than pointing out a "right" decision, the Promethee and Gaia method helps decision makers find the alternative that best suits their goal and their understanding of the problem. It provides a comprehensive and rational framework for structuring a decision problem, identifying and quantifying its conflicts and synergies and its clusters of actions, and highlighting the main alternatives and the structured reasoning behind them.
The basic elements of the Promethee method have been first introduced by Professor Jean-Pierre Brans (CSOO, VUB Vrije Universiteit Brussel) in 1982. [ 2 ] It was later developed and implemented by Professor Jean-Pierre Brans and Professor Bertrand Mareschal (Solvay Brussels School of Economics and Management, ULB Université Libre de Bruxelles), including extensions such as GAIA.
The descriptive approach, named Gaia, [ 3 ] allows the decision maker to visualize the main features of a decision problem: he/she is able to easily identify conflicts or synergies between criteria, to identify clusters of actions and to highlight remarkable performances.
The prescriptive approach, named Promethee, [ 4 ] provides the decision maker with both complete and partial rankings of the actions.
Promethee has successfully been used in many decision making contexts worldwide. A non-exhaustive list of scientific publications about extensions, applications and discussions related to the Promethee methods [ 5 ] was published in 2010.
While it can be used by individuals working on straightforward decisions, Promethee & Gaia is most useful where groups of people are working on complex problems, especially those with several criteria, involving a lot of human perceptions and judgments, whose decisions have long-term impact. It has unique advantages when important elements of the decision are difficult to quantify or compare, or where collaboration among departments or team members is constrained by their different specializations or perspectives.
The Promethee and Gaia methods can be applied to a wide range of decision situations.
The applications of Promethee and Gaia to complex multi-criteria decision scenarios have numbered in the thousands, and have produced extensive results in problems involving planning, resource allocation, priority setting, and selection among alternatives. Other areas have included forecasting, talent selection, and tender analysis.
Some recent uses of Promethee and Gaia have become case studies.
Let A = { a 1 , . . , a n } {\displaystyle A=\{a_{1},..,a_{n}\}} be a set of n actions and let F = { f 1 , . . , f q } {\displaystyle F=\{f_{1},..,f_{q}\}} be a consistent family of q criteria. Without loss of generality, we will assume that these criteria have to be maximized.
The basic data related to such a problem can be written in a table containing n × q {\displaystyle n\times q} evaluations. Each line corresponds to an action and each column corresponds to a criterion.
At first, pairwise comparisons will be made between all the actions for each criterion:
d k ( a i , a j ) {\displaystyle d_{k}(a_{i},a_{j})} is the difference between the evaluations of two actions for criterion f k {\displaystyle f_{k}} . Of course, these differences depend on the measurement scales used and are not always easy to compare for the decision maker.
As a consequence the notion of preference function is introduced to translate the difference into a unicriterion preference degree as follows:
where P k : R → [ 0 , 1 ] {\displaystyle P_{k}:\mathbb {R} \rightarrow [0,1]} is a positive non-decreasing preference function such that P k ( 0 ) = 0 {\displaystyle P_{k}(0)=0} . Six different types of preference function are proposed in the original Promethee definition. Among them, the linear unicriterion preference function is often used in practice for quantitative criteria:
where q j {\displaystyle q_{j}} and p j {\displaystyle p_{j}} are respectively the indifference and preference thresholds. The meaning of these parameters is the following: when the difference is smaller than the indifference threshold it is considered as negligible by the decision maker. Therefore, the corresponding unicriterion preference degree is equal to zero. If the difference exceeds the preference threshold it is considered to be significant. Therefore, the unicriterion preference degree is equal to one (the maximum value). When the difference is between the two thresholds, an intermediate value is computed for the preference degree using a linear interpolation.
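A minimal Python sketch of this linear preference function, with the indifference and preference thresholds passed in as parameters (the names are illustrative):

def linear_preference(d, q, p):
    """Linear unicriterion preference function with indifference threshold q
    and preference threshold p (0 <= q < p)."""
    if d <= q:
        return 0.0            # the difference is considered negligible
    if d >= p:
        return 1.0            # the difference is considered decisive
    return (d - q) / (p - q)  # linear interpolation between the two thresholds

For example, linear_preference(3, q=1, p=5) returns 0.5.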
When a preference function has been associated to each criterion by the decision maker, all comparisons between all pairs of actions can be done for all the criteria. A multicriteria preference degree is then computed to globally compare every couple of actions:
Where w k {\displaystyle w_{k}} represents the weight of criterion f k {\displaystyle f_{k}} . It is assumed that w k ≥ 0 {\displaystyle w_{k}\geq 0} and ∑ k = 1 q w k = 1 {\displaystyle \sum _{k=1}^{q}w_{k}=1} . As a direct consequence, we have:
In order to position every action with respect to all the other actions, two scores are computed:
The positive preference flow ϕ + ( a i ) {\displaystyle \phi ^{+}(a_{i})} quantifies how a given action a i {\displaystyle a_{i}} is globally preferred to all the other actions while the negative preference flow ϕ − ( a i ) {\displaystyle \phi ^{-}(a_{i})} quantifies how a given action a i {\displaystyle a_{i}} is being globally preferred by all the other actions. An ideal action would have a positive preference flow equal to 1 and a negative preference flow equal to 0. The two preference flows induce two generally different complete rankings on the set of actions. The first one is obtained by ranking the actions according to the decreasing values of their positive flow scores. The second one is obtained by ranking the actions according to the increasing values of their negative flow scores. The Promethee I partial ranking is defined as the intersection of these two rankings. As a consequence, an action a i {\displaystyle a_{i}} will be as good as another action a j {\displaystyle a_{j}} if ϕ + ( a i ) ≥ ϕ + ( a j ) {\displaystyle \phi ^{+}(a_{i})\geq \phi ^{+}(a_{j})} and ϕ − ( a i ) ≤ ϕ − ( a j ) {\displaystyle \phi ^{-}(a_{i})\leq \phi ^{-}(a_{j})}
The positive and negative preference flows are aggregated into the net preference flow:
Direct consequences of the previous formula are:
The Promethee II complete ranking is obtained by ordering the actions according to the decreasing values of the net flow scores.
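The following Python sketch (an illustration, not a reference implementation) puts the pieces together: it computes the multicriteria preference degrees, the positive, negative and net flows, and the Promethee II ranking for a small evaluation table; the data, weights and thresholds are made up for the example.

import numpy as np

def linear_preference(d, q, p):
    return 0.0 if d <= q else 1.0 if d >= p else (d - q) / (p - q)

def promethee(evaluations, weights, q_thresholds, p_thresholds):
    """evaluations is an n x q array of criteria to be maximized; returns the flows and ranking."""
    n = evaluations.shape[0]
    pi = np.zeros((n, n))                      # multicriteria preference degrees pi(a_i, a_j)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = evaluations[i] - evaluations[j]
            pi[i, j] = sum(w * linear_preference(d[k], q_thresholds[k], p_thresholds[k])
                           for k, w in enumerate(weights))
    phi_plus = pi.sum(axis=1) / (n - 1)        # positive preference flows
    phi_minus = pi.sum(axis=0) / (n - 1)       # negative preference flows
    phi_net = phi_plus - phi_minus             # net flows used by Promethee II
    ranking = np.argsort(-phi_net)             # actions ordered by decreasing net flow
    return phi_plus, phi_minus, phi_net, ranking

# Illustrative data: four actions evaluated on three criteria, weights summing to 1.
evaluations = np.array([[8.0, 7.0, 2.0],
                        [6.5, 9.0, 4.0],
                        [7.0, 6.0, 9.0],
                        [5.0, 8.0, 6.0]])
weights = [0.5, 0.3, 0.2]
q_thresholds = [0.5, 0.5, 1.0]
p_thresholds = [2.0, 2.0, 4.0]

phi_plus, phi_minus, phi_net, ranking = promethee(evaluations, weights, q_thresholds, p_thresholds)
print("net flows:", np.round(phi_net, 3))
print("Promethee II ranking (best first):", ranking)

The Promethee I partial ranking can be read off the same output by comparing the positive and negative flows pairwise, as described above.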
According to the definition of the multicriteria preference degree, the multicriteria net flow can be disaggregated as follows:
Where:
The unicriterion net flow, denoted ϕ k ( a i ) ∈ [ − 1 ; 1 ] {\displaystyle \phi _{k}(a_{i})\in [-1;1]} , has the same interpretation as the multicriteria net flow ϕ ( a i ) {\displaystyle \phi (a_{i})} but is limited to one single criterion. Any action a i {\displaystyle a_{i}} can be characterized by a vector ϕ → ( a i ) = [ ϕ 1 ( a i ) , … , ϕ k ( a i ) , ϕ q ( a i ) ] {\displaystyle {\vec {\phi }}(a_{i})=[\phi _{1}(a_{i}),\ldots ,\phi _{k}(a_{i}),\phi _{q}(a_{i})]} in a q {\displaystyle q} dimensional space. The GAIA plane is the principal plane obtained by applying a principal components analysis to the set of actions in this space.
Promethee I is a partial ranking of the actions. It is based on the positive and negative flows. It includes preferences, indifferences and incomparabilities (partial preorder).
Promethee II is a complete ranking of the actions. It is based on the multicriteria net flow. It includes preferences and indifferences (preorder). | https://en.wikipedia.org/wiki/Preference_ranking_organization_method_for_enrichment_evaluation |
A preference test is an experiment in which animals are allowed free access to multiple environments which differ in one or more ways. Various aspects of the animal's behaviour can be measured with respect to the alternative environments, such as latency and frequency of entry, duration of time spent, range of activities observed, or relative consumption of a goal object in the environment. These measures can be recorded either by the experimenter or by motion detecting software. [ 1 ] Strength of preference can be inferred by the magnitude of the difference in the response, but see "Advantages and disadvantages" below. Statistical testing is used to determine whether observed differences in such measures support the conclusion that preference or aversion has occurred. Prior to testing, the animals are usually given the opportunity to explore the environments to habituate and reduce the effects of novelty.
Preference tests can be used to test for preferences of only one characteristic of an environment, e.g. cage colour, or multiple characteristics e.g. a choice between hamster wheel , Habitrail tunnels or additional empty space for extended locomotion. [ 2 ]
The simplest of preference tests offers a choice between two alternatives. This can be done by putting different goal boxes at the ends of the arms of a T-shaped maze , or having a chamber divided into differing halves. A famous example of this simple method is an investigation of the preferences of chickens for different types of wire floor in battery cages . Two types of metal mesh flooring were being used in the 1950s; one type was a large, open mesh using thick wire, the other was a smaller mesh size but the wire was considerably thinner. A prestigious committee, the Brambell Committee, conducting an investigation into farm animal welfare [ 3 ] concluded the thicker mesh should be used as this was likely to be more comfortable for the chickens. However, preference tests showed that chickens preferred the thinner wire. Photographs taken from under the cages showed that the thinner mesh offered more points of contact for the feet than the thick mesh, thereby spreading the load on the hens' feet and presumably feeling more comfortable to the birds.
The number of choices that can be offered is theoretically limitless for some preference tests, e.g., light intensity, cage size, food types; however, the number is often limited by experimental practicalities, current practice (e.g., animal caging systems) or costs. Furthermore, animals usually investigate all areas of the apparatus in a behaviour called "information gathering", even those with minor preference, so the more choices that are available may dilute the data on the dominant preference(s).
Most preference tests involve no 'cost' for making a choice, so they do not indicate the strength of an animal's motivation or need to obtain the outcome of the choice. For example, if a laboratory mouse is offered three sizes of cage space it may prefer one of them, but this choice does not indicate whether the mouse 'needs' that particular space, or whether it has a relatively slight preference for it. To measure an animal's motivation toward a choice one may perform a "consumer demand test." In this sort of test, the choice involves some "cost" to the animal, such as physical effort (e.g., lever pressing, weighted door).
Preference tests have been used widely in the study of animal behaviour and motivation, e.g.: | https://en.wikipedia.org/wiki/Preference_test |
The preferential alignment is a criterion of an orientation of a molecule or atom. The preferential alignment can be related to the formation of the crystal structure of an amorphous structure. [ citation needed ]
Polymeric masses with high atomic distances can be in either an oriented or a non-oriented state. These larger distances (up to 1000 Å) form extended regions in which the molecular chains may be preferentially oriented, something which can happen independently of whether crystallinity is present. [ 1 ]
This crystallography -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Preferential_alignment |
Preferential entailment is a non-monotonic logic based on selecting only the models that are considered the most plausible. The plausibility of models is expressed by an ordering among models called a preference relation, hence the name preferential entailment.
Formally, given a propositional formula F {\displaystyle F} and an ordering over propositional models ≤ {\displaystyle \leq } , preferential entailment selects only the models of F {\displaystyle F} that are minimal according to ≤ {\displaystyle \leq } . This selection leads to a non-monotonic inference relation: F ⊨ pref G {\displaystyle F\models _{\text{pref}}G} holds if and only if all minimal models of F {\displaystyle F} according to ≤ {\displaystyle \leq } are also models of G {\displaystyle G} . [ 1 ]
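As an illustration, the following Python sketch enumerates the propositional models of F, keeps only those that are minimal with respect to a given preference relation, and checks whether G holds in all of them; the formulas, variable names and ordering are assumptions chosen for the example.

from itertools import product

VARIABLES = ("a", "b", "c")

def models(formula):
    """Enumerate the truth assignments over VARIABLES that satisfy the formula."""
    return [dict(zip(VARIABLES, values))
            for values in product([False, True], repeat=len(VARIABLES))
            if formula(dict(zip(VARIABLES, values)))]

def minimal_models(formula, less_or_equal):
    """Models of the formula that have no strictly more preferred model."""
    ms = models(formula)
    def strictly_less(m1, m2):
        return less_or_equal(m1, m2) and not less_or_equal(m2, m1)
    return [m for m in ms if not any(strictly_less(other, m) for other in ms)]

def preferentially_entails(formula, conclusion, less_or_equal):
    return all(conclusion(m) for m in minimal_models(formula, less_or_equal))

# Ordering: m1 <= m2 iff the variables true in m1 are a subset of those true in m2.
# With this ordering, preferential entailment behaves like propositional circumscription.
subset_order = lambda m1, m2: all(m2[v] for v in VARIABLES if m1[v])

F = lambda m: m["a"] or m["b"]           # F = a OR b
G = lambda m: not (m["a"] and m["b"])    # G = NOT (a AND b)

# The minimal models of F make exactly one of a, b true, and both satisfy G,
# so F preferentially entails G even though F does not classically entail G.
print(preferentially_entails(F, G, subset_order))   # True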
Circumscription can be seen as the particular case of preferential entailment when the ordering is based on containment of the sets of variables assigned to true (in the propositional case) or containment of the extensions of predicates (in the first-order logic case). [ 1 ] | https://en.wikipedia.org/wiki/Preferential_entailment |
In chemical nomenclature , a preferred IUPAC name ( PIN ) is a unique name, assigned to a chemical substance and preferred among all possible names generated by IUPAC nomenclature . The "preferred IUPAC nomenclature" provides a set of rules for choosing between multiple possibilities in situations where it is important to decide on a unique name. It is intended for use in legal and regulatory situations. [ 1 ]
Preferred IUPAC names are applicable only for organic compounds , which the IUPAC (International Union of Pure and Applied Chemistry) defines as compounds that contain at least one carbon atom but no alkali , alkaline earth or transition metals and that can be named by the nomenclature of organic compounds [ 2 ] (see below ). Rules for the remaining organic and inorganic compounds are still under development. [ 3 ] The concept of PINs is defined in the introductory chapter and chapter 5 of the "Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013" (freely accessible), [ 4 ] which replace two former publications: the "Nomenclature of Organic Chemistry" , 1979 (the Blue Book ) and "A Guide to IUPAC Nomenclature of Organic Compounds, Recommendations 1993" . The full draft version of the PIN recommendations ( "Preferred names in the nomenclature of organic compounds" , Draft of 7 October 2004) is also available. [ 5 ] [ 6 ]
A preferred IUPAC name or PIN is a name that is preferred among two or more IUPAC names. An IUPAC name is a systematic name that meets the recommended IUPAC rules. IUPAC names include retained names. A general IUPAC name is any IUPAC name that is not a "preferred IUPAC name". A retained name is a traditional or otherwise often used name, usually a trivial name , that may be used in IUPAC nomenclature. [ 7 ]
Since systematic names are often not human-readable, a PIN may be a retained name. Both "PINs" and "retained names" have to be chosen (and established by IUPAC) explicitly, unlike other IUPAC names, which arise automatically from IUPAC nomenclatural rules. Thus, the PIN is sometimes the retained name (e.g., phenol and acetic acid, instead of benzenol and ethanoic acid), while in other cases, the systematic name was chosen over a very common retained name (e.g., propan-2-one, instead of acetone).
A preselected name is a preferred name chosen among two or more names for parent hydrides or other parent structures that do not contain carbon (inorganic parents). "Preselected names" are used in the nomenclature of organic compounds as the basis for PINs for organic derivatives. They are needed for derivatives of organic compounds that do not contain carbon themselves. [ 7 ] A preselected name is not necessarily a PIN in inorganic chemical nomenclature.
The systems of chemical nomenclature developed by the International Union of Pure and Applied Chemistry (IUPAC) have traditionally concentrated on ensuring that chemical names are unambiguous, that is that a name can only refer to one substance. However, a single substance can have more than one acceptable name, like toluene , which may also be correctly named as "methylbenzene" or "phenylmethane". Some alternative names remain available as "retained names" for more general contexts. For example, tetrahydrofuran remains an unambiguous and acceptable name for the common organic solvent, even if the preferred IUPAC name is "oxolane". [ 8 ]
The nomenclature goes: [ 9 ]
The following are available, but not given special preference: [ 10 ]
The number of retained non-systematic, trivial names of simple organic compounds (for example formic acid and acetic acid ) has been reduced considerably for preferred IUPAC names, although a larger set of retained names is available for general nomenclature. The traditional names of simple monosaccharides , α-amino acids and many natural products have been retained as preferred IUPAC names; in these cases the systematic names may be very complicated and virtually never used. The name for water itself is a retained IUPAC name.
In IUPAC nomenclature, all compounds containing carbon atoms are considered organic compounds. Organic nomenclature only applies to organic compounds containing elements from the Groups 13 through 17 . Organometallic compounds of the Groups 1 through 12 are not covered by organic nomenclature. [ 7 ] [ 11 ] | https://en.wikipedia.org/wiki/Preferred_IUPAC_name |
In industrial design , preferred numbers (also called preferred values or preferred series ) are standard guidelines for choosing exact product dimensions within a given set of constraints.
Product developers must choose numerous lengths, distances, diameters, volumes, and other characteristic quantities . While all of these choices are constrained by considerations of functionality, usability, compatibility, safety or cost, there usually remains considerable leeway in the exact choice for many dimensions.
Preferred numbers serve two purposes:
Preferred numbers represent preferences of simple numbers (such as 1, 2, and 5) multiplied by the powers of a convenient basis, usually 10. [ 1 ]
In 1870 Charles Renard proposed a set of preferred numbers. [ 2 ] His system was adopted in 1952 as international standard ISO 3 . [ 3 ] Renard's system divides the interval from 1 to 10 into 5, 10, 20, or 40 steps, leading to the R5, R10, R20 and R40 scales, respectively. The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence . This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. Example: 1.0, 1.6, 2.5, 4.0, 6.3
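A short Python sketch of this construction follows; the rounding to two significant figures is a simplification, and ISO 3 tabulates the official values (the finer series involve additional conventional rounding).

import math

def renard(steps_per_decade, significant_digits=2):
    """Approximate Renard numbers covering one decade: the k-th value is 10**(k/steps)."""
    values = []
    for k in range(steps_per_decade):
        x = 10 ** (k / steps_per_decade)
        decimals = significant_digits - 1 - math.floor(math.log10(x))
        values.append(round(x, decimals))
    return values

print(renard(5))   # [1.0, 1.6, 2.5, 4.0, 6.3], matching the R5 example above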
The E series is another system of preferred numbers. It consists of the E1 , E3 , E6 , E12 , E24 , E48 , E96 and E192 series . Based on some of the existing manufacturing conventions, the International Electrotechnical Commission (IEC) began work on a new international standard in 1948. [ 4 ] The first version of this IEC 63 (renamed into IEC 60063 in 2007) was released in 1952. [ 4 ]
It works similarly to the Renard series, except that it subdivides the interval from 1 to 10 into 3, 6, 12, 24, 48, 96 or 192 steps. These subdivisions ensure that when some arbitrary value is replaced with the nearest preferred number, the maximum relative error will be on the order of 40%, 20%, 10%, 5%, etc.
Use of the E series is mostly restricted to electronic parts like resistors, capacitors, inductors and Zener diodes. Commonly produced dimensions for other types of electrical components are either chosen from the Renard series instead or are defined in relevant product standards (for example wires ).
In applications for which the R5 series provides too fine a graduation, the 1–2–5 series (…, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, …) is sometimes used as a cruder alternative. It is effectively an E3 series rounded to one significant digit.
This series covers a decade (1:10 ratio) in three steps. Adjacent values differ by factors 2 or 2.5. Unlike the Renard series, the 1–2–5 series has not been formally adopted as an international standard . However, the Renard series R10 can be used to extend the 1–2–5 series to a finer graduation.
This series is used to define the scales for graphs and for instruments that display in a two-dimensional form with a graticule, such as oscilloscopes .
The denominations of most modern currencies , notably the euro and sterling , follow a 1–2–5 series. The United States and Canada follow the approximate 1–2–5 series 1, 5, 10, 25, 50 (cents), $1, $2, $5, $10, $20, $50, $100. The 1 ⁄ 4 – 1 ⁄ 2 –1 series (... 0.1 0.25 0.5 1 2.5 5 10 ...) is also used by currencies derived from the former Dutch gulden ( Aruban florin , Netherlands Antillean gulden , Surinamese dollar ), some Middle Eastern currencies ( Iraqi and Jordanian dinars, Lebanese pound , Syrian pound ), and the Seychellois rupee . However, newer notes introduced in Lebanon and Syria due to inflation follow the standard 1–2–5 series instead.
In the 1970s the National Bureau of Standards (NBS) defined a set of convenient numbers to ease metrication in the United States . This system of metric values was described as 1–2–5 series in reverse, with assigned preferences for those numbers which are multiples of 5, 2, and 1 (plus their powers of 10), excluding linear dimensions above 100 mm. [ 1 ]
ISO 266, Acoustics—Preferred frequencies, defines two different series of audio frequencies for use in acoustical measurements. Both series are referred to the standard reference frequency of 1000 Hz, and use the R10 Renard series from ISO 3, with one using powers of 10, and the other related to the definition of the octave as the frequency ratio 1:2. [ 5 ]
For example, a set of nominal center frequencies for use in audio tests and audio test equipment is:
When dimensioning computer components, the powers of two are frequently used as preferred numbers:
Where a finer grading is needed, additional preferred numbers are obtained by multiplying a power of two with a small odd integer:
In computer graphics , widths and heights of raster images are preferred to be multiples of 16, as many compression algorithms ( JPEG , MPEG ) divide color images into square blocks of that size. Black-and-white JPEG images are divided into 8×8 blocks. Screen resolutions often follow the same principle.
Preferred aspect ratios have also an important influence here, e.g., 2:1, 3:2, 4:3, 5:3, 5:4, 8:5, 16:9.
Standard metric paper sizes use the square root of two ( √ 2 ) as factors between neighbouring dimensions rounded to the nearest mm ( Lichtenberg series, ISO 216 ). An A4 sheet for example has an aspect ratio very close to √ 2 and an area very close to 1/16 square metre. An A5 is almost exactly half an A4, and has the same aspect ratio. The √ 2 factor also appears between the standard pen thicknesses for technical drawings in ISO 9175-1: 0.13, 0.18, 0.25, 0.35, 0.50, 0.70, 1.00, 1.40, and 2.00 mm. This way, the right pen size is available to continue a drawing that has been magnified to a different standard paper size.
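A small Python sketch of the halving rule, starting from the defined A0 sheet of 841 mm × 1189 mm and rounding down to whole millimetres at each step, reproduces the familiar A-series dimensions:

def a_series(n):
    """Return (width, height) in millimetres of the ISO 216 A-series size An."""
    width, height = 841, 1189          # A0: aspect ratio close to sqrt(2), area close to 1 m^2
    for _ in range(n):
        # Halve the longer side, rounding down; the old width becomes the new height.
        width, height = height // 2, width
    return width, height

print(a_series(4))   # (210, 297), the familiar A4 sheet
print(a_series(5))   # (148, 210)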
In photography, aperture, exposure, and film speed generally follow powers of 2:
The aperture size controls how much light enters the camera. It is measured in f-stops : f /1.4 , f /2 , f /2.8 , f /4 , etc. Full f-stops are a square root of 2 apart. Camera lens settings are often set to gaps of successive thirds, so each f-stop is a sixth root of 2, rounded to two significant digits: 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8, 3.2, 3.5, 4.0, etc. The spacing is referred to as "one-third of a stop". (Rounding is not exact in the cases of f /1.2 , f /3.5 , f /5.6 , f /22 , etc.)
The film speed is a measure of the film's sensitivity to light. It is expressed as ISO values such as "ISO 100". An earlier standard, occasionally still in use, uses the term "ASA" rather than "ISO", referring to the (former) American Standards Association. Measured film speeds are rounded to the nearest preferred number from a modified Renard series including 100, 125, 160, 200, 250, 320, 400, 500, 640, 800... This is the same as the R10′ rounded Renard series , except for the use of 6.4 instead of 6.3, and for having more aggressive rounding below ISO 16. Film marketed to amateurs, however, uses a restricted series including only powers of two multiples of ISO 100: 25, 50, 100, 200, 400, 800, 1600 and 3200. Some low-end cameras can only reliably read these values from DX encoded film cartridges because they lack the extra electrical contacts that would be needed to read the complete series. Some digital cameras extend this binary series to values like 12800, 25600, etc. instead of the modified Renard values 12500, 25000, etc.
The shutter speed controls how long the camera lens is open to receive light. These are expressed as fractions of a second, roughly but not exactly based on powers of 2: 1 second, 1 ⁄ 2 , 1 ⁄ 4 , 1 ⁄ 8 , 1 ⁄ 15 , 1 ⁄ 30 , 1 ⁄ 60 , 1 ⁄ 125 , 1 ⁄ 250 , 1 ⁄ 500 , 1 ⁄ 1000 of a second.
In some countries, consumer-protection laws restrict the number of different prepackaged sizes in which certain products can be sold, in order to make it easier for consumers to compare prices.
An example of such a regulation is the European Union directive on the volume of certain prepackaged liquids (75/106/EEC [ 7 ] ). It restricts the list of allowed wine-bottle sizes to 0.1, 0.25 ( 1 ⁄ 4 ), 0.375 ( 3 ⁄ 8 ), 0.5 ( 1 ⁄ 2 ), 0.75 ( 3 ⁄ 4 ), 1, 1.5, 2, 3, and 5 litres. Similar lists exist for several other types of products. They vary and often deviate significantly from any geometric series in order to accommodate traditional sizes when feasible. Adjacent package sizes in these lists differ typically by factors 2 ⁄ 3 or 3 ⁄ 4 , in some cases even 1 ⁄ 2 , 4 ⁄ 5 , or some other ratio of two small integers. | https://en.wikipedia.org/wiki/Preferred_number |
Prefetching is a technique used in computing to improve performance by retrieving data or instructions before they are needed. By predicting what a program will request in the future, the system can load information in advance to reduce wait times . [ 1 ]
Prefetching is used in various areas of computing, including CPU architectures and operating systems . It can be implemented in both hardware and software , and it relies on detecting access patterns that suggest what data is likely to be needed soon.
Prefetching works by predicting which memory addresses or resources will be accessed and loading them into faster storage, such as caches . [ 1 ]
Prefetching may be used in several contexts, as described below.
Processors (CPUs) often include hardware prefetchers that attempt to reduce cache misses by loading data into the cache before it is requested by the running program. This is most effective for programs that access memory in predictable patterns, such as loops that iterate over arrays . [ 1 ]
Hardware prefetching can be done without software involvement and is found in most modern CPUs. For example, Intel CPUs feature a variety of prefetchers that work across multiple cache levels. [ 1 ]
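As a toy model of the kind of pattern detection involved (not a description of any specific CPU), the following Python sketch watches a stream of memory addresses, detects a constant stride, and predicts a few addresses ahead of the pattern:

def stride_prefetcher(addresses, prefetch_degree=2):
    """Yield (accessed address, predicted prefetch addresses) for a simple stride detector."""
    prev = None
    stride = None
    for addr in addresses:
        prefetches = []
        if prev is not None:
            new_stride = addr - prev
            if new_stride == stride and stride != 0:
                # The same stride has been seen twice in a row: predict the next accesses.
                prefetches = [addr + stride * (k + 1) for k in range(prefetch_degree)]
            stride = new_stride
        prev = addr
        yield addr, prefetches

# A loop walking an array touches consecutive 64-byte cache lines.
accesses = [0x1000 + i * 64 for i in range(6)]
for accessed, prefetched in stride_prefetcher(accesses):
    print(hex(accessed), [hex(p) for p in prefetched])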
Prefetch instructions can be written into the code by the programmer or by the compiler. Prefetch instructions specify the memory addresses to be prefetched and the desired prefetch distance. [ 2 ]
In software, such prefetch requests can be written explicitly by the programmer or generated automatically by the compiler.
Operating systems use prefetching to reduce file and memory access latency.
Web browsers apply prefetching techniques to improve perceived performance. Common examples include link prefetching, DNS prefetching, and prerendering of pages the user is likely to visit next.
Prefetching can significantly improve performance, but it is not always beneficial. If predictions are inaccurate, prefetching may waste bandwidth and processing time, or cause cache pollution . In systems with limited resources or highly unpredictable workloads, prefetching can degrade performance rather than improve it. [ 1 ]
Using software and hardware prefetching together can also degrade performance because of unintended interactions between the two mechanisms. [ 6 ]
This computer science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prefetching |
IP networks are divided logically into subnetworks . Computers in the same subnetwork have the same address prefix. For example, in a typical home network with legacy Internet Protocol version 4 , the network prefix would be something like 192.168.1.0/24, as expressed in CIDR notation .
With IPv4, home networks commonly use private addresses (defined in RFC 1918 ) that are non-routable on the public Internet and use address translation to convert to routable addresses when connecting to hosts outside the local network. Business networks typically had manually provisioned subnetwork prefixes. In IPv6 global addresses are used end-to-end, so even home networks may need to distribute public, routable IP addresses to hosts.
Since it would not be practical to manually provision networks at scale, in IPv6 networking, DHCPv6 prefix delegation ( RFC 3633 ; RFC 8415 § 6.3) is used to assign a network address prefix and automate configuration and provisioning of the public routable addresses for the network. In the typical case of a home network, for example, the home router uses DHCPv6 to request a network prefix from the ISP's DHCPv6 server. Once assigned, the ISP routes this network to the customer's home router and the home router starts advertising the new address space to hosts on the network, either via SLAAC or using DHCPv6.
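For example, if an ISP delegates the documentation prefix 2001:db8:abcd:100::/56 to a home router, the router can carve it into /64 networks for its LAN segments. The following Python sketch uses the standard ipaddress module to show the arithmetic; the prefix value is illustrative.

import ipaddress

# Hypothetical prefix received via DHCPv6 prefix delegation (from the documentation range).
delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")

# A /56 contains 2**(64 - 56) = 256 possible /64 subnets for individual LAN segments.
lan_prefixes = list(delegated.subnets(new_prefix=64))
print(len(lan_prefixes))    # 256
print(lan_prefixes[0])      # 2001:db8:abcd:100::/64
print(lan_prefixes[1])      # 2001:db8:abcd:101::/64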
DHCPv6 Prefix Delegation is supported by most ISPs who provide native IPv6 for consumers on fixed networks.
Prefix delegation is generally not supported on cellular networks , for example LTE or 5G . Most cellular networks route a fixed /64 prefix to the subscriber. Personal hotspots may still provide IPv6 access to hosts on the network by using a different technique called Proxy Neighbor Discovery or using the technique described in RFC 7278 . One of the reasons why cellular networks may not yet support prefix delegation is that the operators want to use prefixes they can aggregate to a single route. To solve this, RFC 6603 defines an optional mechanism and the related DHCPv6 option to allow exclusion of one specific prefix from a delegated prefix set.
This computer networking article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prefix_delegation |
In mathematics , especially order theory , a prefix ordered set generalizes the intuitive concept of a tree by introducing the possibility of continuous progress and continuous branching. Natural prefix orders often occur when considering dynamical systems as a set of functions from time (a totally-ordered set ) to some phase space . In this case, the elements of the set are usually referred to as executions of the system.
The name prefix order stems from the prefix order on words, which is a special kind of substring relation and, because of its discrete character, a tree.
A prefix order is a binary relation "≤" over a set P which is antisymmetric , transitive , reflexive , and downward total , i.e., for all a , b , and c in P , we have that:
While between partial orders it is usual to consider order-preserving functions , the most important type of functions between prefix orders are so-called history preserving functions. Given a prefix ordered set P , a history of a point p ∈ P is the (by definition totally ordered) set p − = { q | q ≤ p }. A function f : P → Q between prefix orders P and Q is then history preserving if and only if for every p ∈ P we find f ( p −) = f ( p )−. Similarly, a future of a point p ∈ P is the (prefix ordered) set p + = { q | p ≤ q } and f is future preserving if for all p ∈ P we find f ( p +) = f ( p )+.
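The word case mentioned above can be made concrete in a few lines of Python. The sketch below (the names are illustrative) implements the prefix relation on strings, the history of a point, and a check of the downward-totality condition for a finite set of words:

def prefix_le(s, t):
    """The prefix order on words: s <= t iff s is a prefix of t."""
    return t.startswith(s)

def history(p, points):
    """The history of p: all points of the set that lie below p."""
    return {q for q in points if prefix_le(q, p)}

def downward_total(points):
    """Check that any two elements below a common point are comparable."""
    return all(prefix_le(a, b) or prefix_le(b, a)
               for c in points
               for a in history(c, points)
               for b in history(c, points))

words = {"", "a", "ab", "abc", "abd", "b"}
print(sorted(history("abc", words)))   # ['', 'a', 'ab', 'abc'], a totally ordered set
print(downward_total(words))           # True: the prefix order on these words is tree-like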
Every history preserving function and every future preserving function is also order preserving, but not vice versa.
In the theory of dynamical systems, history preserving maps capture the intuition that the behavior in one system is a refinement of the behavior in another. Furthermore, functions that are history and future preserving surjections capture the notion of bisimulation between systems, and thus the intuition that a given refinement is correct with respect to a specification.
The range of a history preserving function is always a prefix closed subset, where a subset S ⊆ P is prefix closed if for all s,t ∈ P with t∈S and s≤t we find s∈S .
Taking history preserving maps as morphisms in the category of prefix orders leads to a notion of product that is not the Cartesian product of the two orders since the Cartesian product is not always a prefix order. Instead, it leads to an arbitrary interleaving of the original prefix orders. The union of two prefix orders is the disjoint union , as it is with partial orders.
Any bijective history preserving function is an order isomorphism . Furthermore, if for a given prefix ordered set P we construct the set P- ≜ { p- | p∈ P} we find that this set is prefix ordered by the subset relation ⊆, and furthermore, that the function max: P- → P is an isomorphism, where max(S) returns for each set S∈P- the maximum element in terms of the order on P (i.e. max(p-) ≜ p ). | https://en.wikipedia.org/wiki/Prefix_order |